entry_id (string, 33) | published (string, 14) | title (string, 17–188) | authors (sequence) | primary_category (string, 5–18) | categories (sequence) | text (string, 2–629k)
---|---|---|---|---|---|---
http://arxiv.org/abs/2307.03874v1 | 20230708013842 | The geometry of the Thurston metric: a survey | [
"Huiping Pan",
"Weixu Su"
] | math.GT | [
"math.GT",
"math.CV",
"math.DG",
"32G15, 30F45, 30F60"
] |
This chapter is a survey about the Thurston metric on the Teichmüller space.
The central issue is the construction of extremal Lipschitz maps between hyperbolic surfaces. We review several constructions, including the original work of Thurston.
The coarse geometry and isometry rigidity of the Thurston metric, as well as the relation between the Thurston metric and the Thurston compactification, are discussed.
Some recent generalizations and developments of the Thurston metric are sketched.
Mathematics Subject Classification (2010): 32G15; 30F45; 30F60.
|
http://arxiv.org/abs/2307.06010v1 | 20230712084729 | Mean-field interacting multi-type birth-death processes with a view to applications in phylodynamics | [
"William S. DeWitt",
"Steven N. Evans",
"Ella Hiesmayr",
"Sebastian Hummel"
] | q-bio.PE | [
"q-bio.PE",
"math.PR"
] |
William S. DeWitt^1, Steven N. Evans^2,*, Ella Hiesmayr^2, Sebastian Hummel^2
^1 Department of Electrical Engineering & Computer Sciences, University of California, Berkeley
^2 Department of Statistics, University of California, Berkeley
^* Corresponding author
Multi-type birth-death processes underlie approaches for inferring evolutionary dynamics from phylogenetic trees across biological scales, ranging from deep-time species macroevolution to rapid viral evolution and somatic cellular proliferation.
A limitation of current phylogenetic birth-death models is that they require restrictive linearity assumptions that yield tractable likelihoods, but that also preclude interactions between individuals.
Many fundamental evolutionary processes—such as environmental carrying capacity or frequency-dependent selection—entail interactions, and may strongly influence the dynamics in some systems.
Here, we introduce a multi-type birth-death process in mean-field interaction with an ensemble of replicas of the focal process.
We prove that, under quite general conditions, the ensemble's stochastically evolving interaction field converges to a deterministic trajectory in the limit of an infinite ensemble.
In this limit, the replicas effectively decouple, and self-consistent interactions appear as nonlinearities in the infinitesimal generator of the focal process.
We investigate a special case that is amenable to calculations in the context of a phylogenetic birth-death model, and is rich enough to model both carrying capacity and frequency-dependent selection.
Keywords: mean-field interaction; propagation of chaos; self-consistency; carrying capacity; frequency-dependent selection; phylogenetic birth-death model
§ INTRODUCTION
§.§ The multi-type birth-death process
The multi-type birth-death process (MTBDP) is a continuous-time Markov chain generalizing the classical birth-death process <cit.> to a finite number of types.
The state of the MTBDP counts the number of individuals (or particles) of each type while they undergo birth, death, and type transition events according to specified rates, which may be arbitrary functions of the current state and of time.
If these rates are linear in the state, the MTBDP can be formulated as a branching process <cit.>.
If additionally the rates for each type are proportional to the count of only that type, the MTBDP is said to be simple, and the rates can be specified particle-wise because particles do not interact.
The general case of nonlinear rates has also been called a multivariate competition process <cit.>, which, as noted by <cit.>, is more restrictive than a multi-type branching process in that the latter allows for increments other than unity, and more general in that the latter is manifestly linear via its defining independence property.
§.§ Phylogenetic birth-death models
The MTBDP has facilitated inference of diversification processes in biological systems, with applications ranging across scales of evolutionary time and biological organization.
Phylogenetic birth-death models assume that a phylogenetic tree is generated by an MTBDP combined with a sampling process that censors subtrees that are not ancestral to any sampled leaves, so that histories are only partially observed.
The diverse flavors of these models are reviewed and introduced with unified notation in <cit.>.
Given a phylogeny, the inferential targets are the birth and death rates, as well as the type transition rates.
Birth and death are variously interpreted as extinction and speciation rates in the context of macroevolutionary studies, or as transmission and recovery rates in the context of epidemiological or viral phylodynamic studies.
The literature contains many variants of this modeling approach.
Depending on the application, the birth and death rates may be assumed to be time-dependent, or depend on particle type, or both.
To facilitate tractable likelihoods, phylogenetic birth-death models assume the restrictive non-interacting simple MTBDP, with particle-wise birth and death rates that depend only on particle type, and possibly on time.
In this case, given a time-calibrated tree, the likelihood—defined via the conditional density of the tree assuming it has at least one sampled descendant—factorizes over branches.
This factorization can be seen to follow from elementary properties of branching processes, adapted to partial tree observation.
The factors <cit.> are given by the solutions to master equations that marginalize over all possible unobserved subtrees subtending the branch, and are computed recursively via post-order traversal (from tree tips to root).
§.§ Biology involves interactions
Despite the robust computational development and wide usage of phylogenetic birth-death models for phylodynamic inference, their biological expressiveness is limited by the assumption that particles do not interact.
Interactions may be essential to evolutionary dynamics.
For example, environmental carrying capacity is a fundamental constraint on the long-term dynamics of any evolving population, and models of experimental microbial evolution generally allow for a transition from exponential growth to stationary phase as the population approaches capacity <cit.>.
As another example, although the simple MTBDP facilitates modeling phenotypic selection via type-dependent birth and death rates, this does not capture frequency-dependent selection, where the fitness of a given type depends on the distribution of types in the population.
In both of these examples, birth and death rates depend on the state of the population process, and this breaks the branch factorization that phylogenetic birth-death models rely on.
As a motivating biological setting for the ideas to follow,
we consider the somatic evolutionary process of affinity maturation of antibodies
in micro-anatomical structures called germinal centers (GCs),
which transiently form in lymph nodes during an adaptive immune response <cit.>.
In a GC, B cells—the cells that make antibodies—diversify and compete based on the ability of the antibodies they express to recognize a foreign antigen molecule.
As GC B cells proliferate, they undergo targeted mutations in the genomic locus encoding the antibody protein that can modify its antigen binding affinity (they undergo type transitions).
Via signaling from other GC cell types, the GC is able to monitor the binding phenotype of the B-cell population it contains, and provide survival signals to B cells with the highest-affinity antibodies (i.e., birth and death rates depend on type).
GCs have been studied extensively in mouse models that allow for experimental lineage tracing and manipulation of the B-cell population process.
In particular, B cells can be fate mapped by genetically engineering them to express a fluorescent protein that marks them with a randomized color at the beginning of the GC evolutionary process <cit.>.
These initially random colors are non-randomly inherited by descendant cells, so a sample of the GC B-cell population at a future time can be partitioned into lineages of cells that share distinct common ancestors at the time of the initial color marking.
Phylogenetic inference can then be used to reconstruct the evolutionary history of a GC B-cell lineage using the DNA sequences of the sampled B cells <cit.>.
GC B cells compete for limited proliferative signalling based on the antigen binding affinity of their B-cell receptors, and the population distribution of binding affinities generally improves as affinity maturation unfolds, so a given binding phenotype may be high-fitness early in the process, but low-fitness later when the population distribution of affinity has improved.
This invokes frequency-dependent selection, where the birth and death rates should depend on the population distribution of types.
GCs are observed to reach a steady-state carrying capacity of several thousand cells, based on limited cell-mediated proliferative signalling, so carrying capacity is likely also important, meaning that birth and death rates should depend on the total population size.
Phylodynamic models have the potential to reveal how evolutionary dynamics is orchestrated in GCs to shape antibody repertoires and immune memories.
However, phylogenetic birth-death models cannot accommodate key features of this system.
<cit.> presented a simulation study using a birth-death model with competition to investigate features of the GC population process, but such agent-based simulations are not amenable to likelihood-based inference for partially observed histories.
This motivates us to investigate a class of interacting MTBDPs that could be used in phylogenetic birth-death processes without breaking the branch factorization required for tractable likelihoods.
§.§ Mean-field interactions between replica birth-death processes
Mean-field theories are a fundamental conceptual tool in the study of interacting particle systems.
The ideas originated in statistical physics and quantum mechanics as a technique to reduce many-body problems—in fluids, condensed matter, and disordered systems—to effective one-body problems <cit.>.
The theory was extensively developed in the context of general classes of stochastic processes, and has since been widely applied across many scientific domains <cit.>.
Motivated by the setting of GC evolutionary dynamics described above, with population-level interaction among many fate-mapped lineages, we set out to develop a mean-field model that couples the birth and death rates in a focal MTBDP (with d types) to the empirical distribution of states—i.e., the mean-field—over an exchangeable system of N replica MTBDPs.
More concretely, this empirical distribution process is a stochastic process taking values in the space of probability measures on ℕ_0^d, where ℕ_0 denotes the non-negative integers: the mass assigned by this measure-valued process at time t ≥ 0 to a vector y = (y_1, y_2, …, y_d) ∈ ℕ_0^d is the proportion of replica processes (including the focal process) that have y_k individuals of type k for 1 ≤ k ≤ d.
We use propagation of chaos theory <cit.> to prove that the empirical distribution process of the N replicas converges to a deterministic probability measure-valued flow as N →∞.
In this limit, the replicas effectively decouple, and the focal process can instead be said to couple to a deterministic external field.
This external field is self-consistent in the sense that, at any time t ≥ 0, it is given by the very distribution of the state of the focal process.
We calculate self-consistent fields using a fixed-point technique, whereby successively solving a sequence of linear systems converges to the desired solution of the nonlinear system <cit.>.
While this introduces nonlinearity in the forward equations of the focal process, a key feature of this limit is that it restores branch factorization in the phylogenetic birth-death model setting, allowing for tractable phylodynamic models with interactions.
We note that there has been some work on mean-field models in the area of superprocesses (continuum analogues of branching processes) – see <cit.>. Finally, <cit.> is tangentially related to our work in that it treats a particular question concerning mean-field interacting single-type birth-death processes.
As the author of the latter paper observes regarding the literature on mean-field models and propagation of chaos,
“…there are few results in discrete space.”
§ THEORETICAL RESULTS: A MEAN-FIELD INTERACTING MULTI-TYPE BIRTH-DEATH PROCESS WITH GENERAL RATES
We start out by describing a finite system of fairly general locally-strong,
globally-weak interacting MTBDPs.
Ultimately, we are interested in the law of a focal process within an infinite system of such mean-field interacting MTBDPs.
To this end, we establish that the process of the empirical distribution of families converges to a deterministic probability measure-valued flow.
This implies, by a result of <cit.>, that the MTBDPs become asymptotically independent.
In the limit, the law of the focal process can be described by means of a time-inhomogeneous MTBDP.
The time inhomogeneity comes from the deterministic probability measure-valued background flow
that also satisfies the system of Kolmogorov forward equations of the focal process.
One main contribution of our analysis compared to previous studies is that we allow for a quite general transition rate structure.
The rates of a single MTBDP are only restricted to be of at most linear growth
and Lipschitz continuous.
To deal with the technical challenges that come with these general assumptions,
we employ a localization technique and approximate the general system by one that has
bounded transition rates.
A key feature is that the system with bounded rates is in a certain sense close to the one with unbounded rates, uniformly in the system size.
Consider a finite set of types [d] := {1,…,d}.
The application we have in mind is that each type represents a certain affinity of B-cell receptors.
We equivalently refer to the cells as particles,
in line with the terminology used in the branching process literature.
At the outset, let us envision a germinal center that initially contains a finite collection of N ∈ ℕ such B cells.
The progeny process of each of the N founding cells in this GC can be modeled as an MTBDP.
During the process of antibody affinity maturation, cells can divide into two daughters of the same type, mutate to one of the other (d-1) affinity types, or die, according to specified rates.
The interaction within lineages is (possibly) strong,
whereas the interaction between the N lineages is weak.
This means that the rates for the ith lineage depend on its state (locally-strong)
and on the empirical distribution of MTBDP states over the N lineages (globally-weak).
Note that this includes the special case of rates that depend on the global type distribution aggregated over all N families in the GC.
Initially, there are N ∈ ℕ founding particles within the GC.
A state of this system is then given by z = (z_1,…,z_N) ∈ (ℕ_0^d)^N,
where for j ∈ [N] and i ∈ [d],
z_{j,i} counts the number of type-i particles in the jth MTBDP.
Let 𝒫(ℕ_0^d) denote the space of probability measures on ℕ_0^d.
This space is embedded into the Banach space of finite signed measures on ℕ_0^d equipped with the total variation norm.
For ν ∈ 𝒫(ℕ_0^d) and y ∈ ℕ_0^d,
let ν_{y} := ν({y}).
Then the total variation distance between ν, ν' ∈ 𝒫(ℕ_0^d) is
‖ν - ν'‖_TV := (1/2) ∑_{y ∈ ℕ_0^d} |ν_{y} - ν'_{y}|.
The empirical distribution of MTBDP states of an N-system in state z is
π_z := (1/N) ∑_{j=1}^N δ_{z_j} ∈ 𝒫(ℕ_0^d).
For example, for y = (y_1,…,y_d) ∈ ℕ_0^d and z ∈ (ℕ_0^d)^N,
π_z({y}) gives the relative frequency of lineages with composition y,
i.e. with y_i particles of type i, i ∈ [d].
Let 𝒫_N(ℕ_0^d) := {π_z ∈ 𝒫(ℕ_0^d) : z ∈ (ℕ_0^d)^N}, i.e.
the probability measures that can arise as an empirical distribution of an MTBDP system with N initial particles.
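To make this bookkeeping concrete, the following minimal Python sketch (our illustration, not the paper's released code) computes the empirical distribution π_z from a list of lineage states and the total variation distance between two distributions on ℕ_0^d, with states represented as tuples of type counts.

```python
from collections import Counter

def empirical_distribution(z):
    """Empirical distribution pi_z of an N-system state.

    z: list of N lineage states, each a length-d tuple of type counts.
    Returns a dict mapping each observed state y to its relative frequency.
    """
    counts = Counter(tuple(y) for y in z)
    return {y: c / len(z) for y, c in counts.items()}

def total_variation(nu, nu_prime):
    """Total variation distance between two distributions on N_0^d,
    each given as a dict {state tuple: probability}."""
    support = set(nu) | set(nu_prime)
    return 0.5 * sum(abs(nu.get(y, 0.0) - nu_prime.get(y, 0.0)) for y in support)

# Example with d = 2 types and N = 3 lineages.
z = [(3, 0), (1, 2), (3, 0)]
pi_z = empirical_distribution(z)              # {(3, 0): 2/3, (1, 2): 1/3}
print(total_variation(pi_z, {(3, 0): 1.0}))   # 1/3
```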
Every successive change in the system affects only one particle at a time with a rate depending on the local state of its lineage,
and the empirical distribution over the population of N lineages.
That is, for i ≠ k ∈ [d], we have the following per-lineage rates of the various events:
b^i : ℕ_0^d × 𝒫(ℕ_0^d) → ℝ_+ (birth rate of type-i particles),
d^i : ℕ_0^d × 𝒫(ℕ_0^d) → ℝ_+ (death rate of type-i particles),
m^{i,k} : ℕ_0^d × 𝒫(ℕ_0^d) → ℝ_+ (mutation rate from type-i to type-k particles).
Throughout we assume that if z_{j,i} = 0,
then d^i(z_j, π_z) = m^{i,k}(z_j, π_z) = 0 for all k ≠ i.
We will assume that the rates per lineage grow at most linearly with the number of particles in the lineage, and that the rates are Lipschitz continuous in the following sense.
For y ∈ ℕ_0^d, set |y| := ∑_{i ∈ [d]} y_i ∈ ℕ_0.
* There exists a constant L such that for all i, k ∈ [d], i ≠ k, y ∈ ℕ_0^d and ν ∈ 𝒫(ℕ_0^d),
b^i(y,ν) ≤ L (|y|+1),
d^i(y,ν) ≤ L (|y|+1),
m^{i,k}(y,ν) ≤ L (|y|+1).
* There exists a constant L such that for all i, k ∈ [d], i ≠ k, y, y' ∈ ℕ_0^d and ν, ν' ∈ 𝒫(ℕ_0^d),
|b^i(y,ν) - b^i(y',ν')| ≤ L (|y - y'| + ‖ν - ν'‖_TV),
|d^i(y,ν) - d^i(y',ν')| ≤ L (|y - y'| + ‖ν - ν'‖_TV),
|m^{i,k}(y,ν) - m^{i,k}(y',ν')| ≤ L (|y - y'| + ‖ν - ν'‖_TV).
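As a concrete (hypothetical) illustration of this locally-strong, globally-weak structure, the Python sketch below defines per-lineage rates that depend on the lineage state y and on the empirical distribution ν only through a capped mean total population size, so that the at-most-linear growth in |y| is preserved; the functional forms, parameter names, and constants are ours, not the paper's.

```python
CAP = 1e4  # cap on the mean-field term, keeping rates at most linear in |y|

def mean_total_size(nu):
    """Mean total lineage size under nu, a dict {state tuple: probability}."""
    return sum(p * sum(y) for y, p in nu.items())

def birth_rate(i, y, nu, lam=(1.0, 0.8)):
    """b^i(y, nu): proportional to the type-i count, independent of the field."""
    return y[i] * lam[i]

def death_rate(i, y, nu, mu=(0.2, 0.2), w=0.01):
    """d^i(y, nu): baseline death plus a (capped) crowding term via the mean field."""
    return y[i] * (mu[i] + w * min(mean_total_size(nu), CAP))

def mutation_rate(i, k, y, nu, gamma=0.05):
    """m^{i,k}(y, nu): type transitions at a constant per-particle rate."""
    return y[i] * gamma if i != k else 0.0
```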
The system of MTBDPs is formally described through its infinitesimal generator, which requires some notation.
To add and remove particles of type i in the jth MTBDP,
we use _j,i∈ (_0^d)^N,
where (_j,i)_k,ℓ=_k(i)_ℓ(j).
The domain of the generator is described using specific function spaces.
We write C̅((_0^d)^N) for the space of (continuous) bounded functions
on (_0^d)^N and Ĉ((_0^d)^N) for the space of
(continuous) bounded functions on (_0^d)^N that vanish at infinity.
Moreover, we write C_c((_0^d)^N) for the space of compactly supported (finitely supported) (continuous) functions on (_0^d)^N. For ∈ (_0^d)^N, set := ∑_j∈ [N]_j := ∑_j ∈ [N]∑_i ∈ [d] z_j,i∈_0.
The generator A^N of the finite system of interacting MTBDPs acts on f ∈Ĉ((_0^d)^N)
via A^Nf():=∑_j=1^N (A_^N,j+A_^N,j+A_^N,j)f() with
A_^N,jf() :=∑_i=1^d b^i(_j,π_)[f(+_j,i)-f() ]
A_^N,jf() :=∑_i=1^d d^i(_j,π_)[f(-_j,i)-f() ]
A_^N,jf() :=∑_i,k ∈ [d], i k m^i,k(_j,π_)[f(+_j,k-_j,i)-f() ].
Define
Δ_N{f ∈Ĉ((_0^d)^N): ↦_∙∙ f() ∈C̅((_0^d)^N), A^N f ∈Ĉ((_0^d)^N)}.
The closure of {(f, A^N f): f ∈Δ^N} is single-valued and
generates a Feller semigroup on Ĉ((_0^d)^N).
Moreover, C_c((_0^d)^N) is a core for this generator.
The proof of the proposition is in <ref>. The canonical process at time t ≥ 0 is denoted by
^N(t) (^N_1(t),…,^N_N(t))
and we set ^N(^N(t))_t≥ 0.
We write P^N for the distribution of ^N.
The system exhibits exchangeability among the MTBDPs due to the symmetries of the rates,
provided that their initial distribution is also exchangeable.
To formally establish this property, we utilize the Markov mapping theorem.
As a result of this analysis, we also derive the Markovian nature of the system's empirical distribution process, subject to suitable initial conditions.
The empirical distribution of states in the system at time t ≥ 0 is
Π^N(t) 1/N∑_j=1^N δ__j^N(t).
The (_0^d)-valued empirical measure process becomes Π^N=(Π^N(t))_t≥ 0.
Its infinitesimal generator B^N acts on a subset of C̅((_0^d)),
the bounded continuous functions on (_0^d).
More specifically,
B^N acts on functions of the form
h(ν)=1/N!∑_∈ (_0^d)^N: π_=ν f(),
with f∈Δ_N,
via B^Nh(ν):=(B_^N+B_^N+B_^N)h(ν), where
B_^Nh(ν) :=∑_∈_0^d∑_i=1^d N ν_{} b^i( ,ν)[h(ν+δ_ +_i-δ_/N)-h(ν)] ,
B_^Nh(ν) :=∑_∈_0^d∑_i=1^d Nν_{} d^i( ,ν)[h(ν+δ_ -_i-δ_/N)-h(ν)],
B_^Nh(ν) :=∑_∈_0^d∑_i,k ∈ [d], i k N ν_{} m^i,k( ,ν)[h(ν+δ_ +_k-_i-δ_/N)-h(ν)],
with _i the ith unit vector in _0^d.
To formally state the exchangeability of the system and the Markovian nature of Π^N, we require some notation. Let α^N(ν,) be a kernel from (_0^d) to (_0^d)^N
defined via
α^N(ν, ):=1/N!∑_∈ (_0^d)^N: π_=νδ_(),
i.e. α^N(ν,·) puts mass uniformly among all the system states ∈ (_0^d)^N
that are compatible with an empirical distribution ν.
For f∈C̅((_0^d)^N),
we write α^Nf(·)=∑_∈ (_0^d)^N f()α^N(·,).
(In particular, α^Nf∈C̅((_0^d)).)
Let ν^N_0∈(_0^d) and assume ^N(0) has distribution α^N(ν^N_0,·).
For all t≥ 0, ^N(t)=(^N_1(t),…,^N_N(t)) is exchangeable and
Π^N is a Markov process with generator B^N.
In what follows, we consider a limit of large systems.
For the germinal center application, this means that we assume the initial number of B cells to be large.
Any dependence of the rates on the total mass therefore is meant to be relative to the initial mass.
To establish the limiting object,
we rely on the propagation of chaos theory.
To make this precise, we recall the following definition <cit.>.
Let E be a separable metric space,
for each N∈, let η_N∈(E^N) be symmetric,
and η_∞∈(E).
Then (η_N)_N∈ is said to be η_∞-chaotic
if for all k≥ 1 and f_1,…,f_k∈C̅(E),
the space of bounded continuous functions on E,
lim_N→∞∫_E^N∏_i=1^kf_i(x_i)η_N( x_1,…,x_N)=∏_i=1^k∫_E f_i(x)η_∞( x).
<cit.> also provides a way to test for a sequence to be chaotic.
The following result essentially is <cit.> in our context
(using that P^N is symmetric by Proposition <ref>, and _0^d is Polish).
<cit.>
Let v∈(_+,(_0^d)), the càdlàg functions with values in (_0^d), and
ν^N∈(_0^d).
Let P^N be the law of ^N with initial distribution α^N(ν^N,·). We use
the notation ⇒ to denote weak convergence (i.e. convergence in distribution).
* P^N is v-chaotic if and only if Π^NN→∞v.
* (Π^N)_N∈ is tight if and only if the law of ^N_1=(^N_1(t))_t≥ 0 under P^N is tight.
Our main result describes the behavior of Π^N in the limit of large systems.
Let ν^N_0∈(_0^d) and assume ^N(0) has distribution α^N(ν^N_0,·).
Assume ν^N satisfies ν^Nν∈(_0^d).
Then there exists a unique solution to the initial value problem: v(0) = ν and, for all y ∈ ℕ_0^d,
v_{y}'(t) = -v_{y}(t) ∑_{i=1}^d (b^i(y, v(t)) + d^i(y, v(t)) + ∑_{k=1, k≠i}^d m^{i,k}(y, v(t)))
+ ∑_{i=1}^d (v_{y-e_i}(t) b^i(y-e_i, v(t)) + v_{y+e_i}(t) d^i(y+e_i, v(t))
+ ∑_{k=1, k≠i}^d v_{y-e_k+e_i}(t) m^{i,k}(y-e_k+e_i, v(t)))
(with the convention that for y ∉ ℕ_0^d, v_{y}(t) = b^i(y, v(t)) = d^i(y, v(t)) = m^{i,k}(y, v(t)) = 0).
Moreover, we have
Π^NN→∞v.
For w∈ C(_+,(_0^d)), consider the time-inhomogeneous MTBDP ^w with
initial distribution ν and
birth, death, and mutation rates at time t given by b^i(^w(t),w(t)), d^i(^w(t),w(t)), and m^i,k(^w(t),w(t)) for i,k∈[d], i k.
Then the flow of one-dimensional marginal distributions of ^v is the flow v from (<ref>).
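The limiting flow v solves an infinite system of ODEs; numerically, one can truncate to states with |y| ≤ κ (mirroring the localization argument used in the proofs below) and integrate the truncated system. The following Python sketch does this with SciPy for generic rate callables such as those sketched earlier; the function names, signatures, and truncation scheme are our own illustrative choices, not the authors' implementation.

```python
import itertools
import numpy as np
from scipy.integrate import solve_ivp

def truncated_forward_solve(b, d, m, v0, n_types, kappa, t_grid):
    """Integrate a kappa-truncated version of the nonlinear forward equation for v(t).

    b, d, m: rate callables b(i, y, v), d(i, y, v), m(i, k, y, v), where v is a
             dict {state tuple: probability} (the current law of the focal process).
    v0: dict giving the initial law; n_types: number of types d; kappa: truncation level.
    """
    states = [y for y in itertools.product(range(kappa + 1), repeat=n_types)
              if sum(y) <= kappa]
    index = {y: n for n, y in enumerate(states)}

    def rhs(t, v_vec):
        v = {y: v_vec[index[y]] for y in states}
        dv = np.zeros_like(v_vec)
        for y, n in index.items():
            if sum(y) >= kappa:   # frozen (truncated) states do not evolve
                continue
            for i in range(n_types):
                # each event: (rate, target state); deaths/mutations require y_i > 0
                events = [(b(i, y, v), tuple(c + (j == i) for j, c in enumerate(y)))]
                if y[i] > 0:
                    events.append((d(i, y, v), tuple(c - (j == i) for j, c in enumerate(y))))
                    events += [(m(i, k, y, v),
                                tuple(c - (j == i) + (j == k) for j, c in enumerate(y)))
                               for k in range(n_types) if k != i]
                for rate, target in events:
                    flux = v[y] * rate
                    dv[n] -= flux
                    dv[index[target]] += flux
        return dv

    v_init = np.array([v0.get(y, 0.0) for y in states])
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), v_init, t_eval=t_grid)
    return states, sol.y   # sol.y[n, :] approximates v_{states[n]}(t) on t_grid
```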
§ PROOF OF MAIN RESULTS
We begin by proving the properties of the finite system and its empirical measure process. Next, we examine a system of independent Yule-type processes that dominates the interacting MTBDP system, deriving results essential for establishing the properties of Π^N. In the third section, we prove Theorem <ref> using the properties of a localized system. Finally, we establish the properties of the localized system, which we utilize in the proof of Theorem <ref>.
§.§ Properties of the finite system
We initiate our analysis by proving a result concerning the generator of the finite system of MTBDPs.
The proof consists of checking the
conditions of Theorem 3.1 in Chapter 8 of <cit.>.
The kernel that plays the role of the kernel
x ↦λ(x) μ(x, dy) in <cit.> is here
↦∑_j=1^N ∑_i=1^d
[
b^i(_j,π_)δ_+_j,i + d^i(_j,π_)δ_-_j,i +∑_k ∈ [d], k i m^i,k (_j,π_)
δ_+_j,k-_j,i]
We will take the functions γ and η that appear in the statement
of that result to both be ↦χ() (_∙∙∨ 1).
First note that ↦ 1/χ() ∈Ĉ((_0^d)^N), as required in <cit.>.
Secondly,
sup_∈ (_0^d)^N∑_j=1^N∑_i=1^d
[ b^i(_j,π_)
+d^i(_j,π_)
+∑_k ∈ [d], k i m^i,k(_j,π_)
] / χ() < ∞
by <ref>,
and so hypothesis (3.2) of <cit.> is satisfied.
If z' is a point in the support of the measure on the right-hand side of (<ref>),
then |z - z'| ≤ 1
and hence hypothesis (3.3) of
<cit.> is satisfied.
Combining the bound (<ref>) with the observation
of the previous paragraph shows that hypotheses (3.4) and (3.5) of <cit.> hold, and this completes the proof.
Next, we establish the exchangeability of the finite system and demonstrate the Markovianity of the empirical measure process.
We first want to apply <cit.>.
Note that for any h∈C̅((_0^d)) and π_∈(_0^d), we have ∫ h(π_)α^N(π_,)=h(π_).
Define
C^N={(∑_∈ (_0^d)^N f() α^N(·, ),∑_∈ (_0^d)^N A^Nf() α^N(·, ) ):f∈Δ_N }.
We have to verify (the somewhat technical condition) that each solution of the extended forward equation of A^N
corresponds to a solution of the martingale problem.
Assume for now this is true.
We show in Lemma <ref> below that for f()=∏_j=1^N g_j(_j) with g_j∈Ĉ(_0^d), α^N (A^Nf)(π_ )
= B^N(α^N f)(π_).
In particular, Π^N solves the C^N martingale problem.
Thus, by Cor. 3.5 of <cit.> (with γ()=π_ ),
Π^N is a Markov process.
Moreover, by Thm. 4.1 of <cit.>, ^N(t) is then exchangeable.
It remains to verify that each solution of an extended forward equation of A^N corresponds to a solution of the martingale problem. By Lem. 3.1 of <cit.>,
it is enough to verify that A^N satisfies the conditions of Thm. 2.6 of <cit.>, that is,
that A^N is a pre-generator and
Hypothesis 2.4 of <cit.> is satisfied.
Because (_0^d)^N is locally compact,
the latter is satisfied by Rem. 2.5 of <cit.>.
Another consequence of local compactness is that
for A^N to be a pre-generator,
it is enough to verify that A^N satisfies the positive maximum principle <cit.>,
which is easily seen to be the case.
The following lemma is a technical result used in the proof of Proposition <ref>.
For j∈ [N], let g_j∈Ĉ(_0^d) and set f()= ∏_j=1^Ng_j(^N_j).
Then, for ∈ (_0^d)^N and π_∈(_0^d), we have
α_N (A^Nf)(π_ )
= B^N(α_N f)(π_).
We will only show α_N (A^N_ f)(π_)=B^N_(α_Nf)(π_). That α_NA^N_ f(π_)=B^N_(α_Nf)(π_) and α_NA^N_ f(π_)=B^N_(α_Nf)(π_) can be proven in a similar vein. The result then follows from the linearity of A^N and B^N.
We have
α_N A^N_ f(π_ )
= 1/N!∑_∈ (_0^d)^N:
π_=π_∑_j=1^N ∑_i=1^d b^i(^j,π_) [f(+_j,i)-f()]
= 1/N!∑_∈_0^d∑_i=1^d ∑_j=1^N ∑_∈ (_0^d)^N:
π_=π__(^j) b^i(^j,π_) [f(+_j,i)-f()]
= ∑_∈_0^d∑_i=1^d Nπ_() b^i(,π_) [(α_Nf)(π_+(δ_+_i-δ_)/N)-(α_Nf)(π_) ]
=B^N_(α_Nf)(π_).
§.§ A dominating system of Yule-type processes
It will be useful to compare ^N to a system of independent multi-type pure birth processes.
Even though these pure birth processes are _0^d-valued,
they inherit several useful properties of the classic Yule process.
Let _1 be the _0^d-valued pure birth process
transitioning from _0^d∋→+(1,…,1)
at rate 4Ld(+1).
Denote by ^N=(^N_1,…,^N_N) the (_0^d)^N-valued process,
where each ^N_j, j∈[N], is independent and distributed as _1.
We may suppose that ^N_j = _j, where _1, _2, …, is a sequence
of independent, identically distributed processes.
For ,'∈(_0^d)^N, we write ≤' if z_j,i≤ z_j,i' for all j,i.
Let N∈.
Assume <ref> holds
and ^N(0)≤^N(0).
Then ^N and ^N can be coupled such that
for all t≥ 0,
^N(t)≤^N(t) almost surely.
Couple ^N to ^N as follows.
First note that the total transition rate of ^N at time t≥0 is
4LNd(∑_j=1^d (_j^N(t))+1).
The total transition rate of ^N can be bounded,
using Assumption <ref>, by
4LNd(∑_j=1^d (_j^N(t))+1).
Let τ be a jump time of .
Assume that ^N(s)≤^N(s) for all s∈[0,τ].
Set
^N(τ)=^N(τ-)+_j,i, w.p. b^i(^N(τ-),Π^N(τ-))/4LNd(∑_j=1^d (_j^N(τ-))+1), j∈[N], i ∈[d]
^N(τ-)-_j,i, w.p. d^i(^N(τ-),Π^N(τ-))/4LNd(∑_j=1^d (_j^N(τ-))+1), j∈[N], i ∈[d]
^N(τ-)+_j,k-_j,i w.p. m^i,k(^N(τ-),Π^N(τ-))/4LNd(∑_j=1^d (_j^N(τ-))+1), j∈[N], i,k∈[d], i k,
^N(τ-) otherwise.
It is straightforward to check that then ^N has the correct distribution
and ^N(τ)≤^N(τ).
From now on we assume ^N is constructed on the basis of the coupling in Lemma <ref>.
For two probability measures ν, ν' ∈ 𝒫(ℕ_0^d), we say that ν' stochastically dominates ν if for every 𝐦 ∈ ℕ_0^d, ν({y : y_i ≥ m_i, i ∈ [d]}) ≤ ν'({y : y_i ≥ m_i, i ∈ [d]}); we then write ν ≤ ν'.
The upper bound of ^N in terms of ^N can be translated to a bound for their respective empirical measure processes.
To this end,
define Π^N_(t):=1/N∑_j=1^Nδ__j(t) and set Π^N_:=(Π^N_(t))_t≥ 0.
Because of Lemma <ref>,
we have Π^N(t)≤Π^N_(t) for every t≥ 0.
Moreover,
^N and Π^N are non-decreasing, i.e.
for all T≥ t≥ 0,
^N(t) ≤^N(T)
and
Π^N_(t)≤Π^N_(T).
Let ν∈(_0^d). There is a unique solution r(r(t))_t≥ 0 to the initial value problem:
r(0)=ν and for all ∈_0^d,
r_{}'(t)= -4Ld ( r_{}(t)+1) +4Ld ∑_i=1^d r_{-_i}(t)
Note that (<ref>) is just the Kolmogorov forward equation of ^N_1.
Since ^N_1 is non-explosive,
the solution to the ODE exists and is unique.
Let ν∈(_0^d) and ν^N∈(_0^d), and assume ^N(0) has distribution α^N(ν^N,·).
Assume ν^Nν.
Then we have that
Π^N_r almost surely.
Fix T ≥ 0. Let D be a countable dense set in [0,T] containing {0,T}.
By the strong law of large numbers, with probability one we have that for all 𝐦∈_0^d
Π^N_(t) ({: y_i ≥ m_i, i ∈ [d] })
r(t)({: y_i ≥ m_i, i ∈ [d] })
for all t ∈ D. By well-known results in real analysis, the monotonicity of the functions involved in the convergence in (<ref>), plus the continuity of the right-hand side give firstly that the convergence holds for all t ∈ [0,T] and secondly that the convergence is uniform.
Consequently, almost surely Π^N_(t)({}) converges uniformly to r_{}(t) on [0,T] for every ∈_0^d.
Given any ϵ > 0 we can choose K such that
Π^N_(T) ({: > K }) ≤ϵ for all N and
r(T) ({: > K }) ≤ϵ. Therefore, using the monotonicity of
Π^N_(t) ({: > K }) and r(t) ({: > K })
we have
lim sup_N →∞sup_t ∈ [0,T]Π^N_(t) - r(t)_TV≤ 2 ϵ.
Since T and ϵ are arbitrary, this completes the proof.
§.§ Proving convergence via localization
We employ a localization argument to establish Theorem <ref>.
The core concept involves freezing families that reach a certain size κ∈.
By utilizing classic methods, we can prove the convergence of the empirical distribution for a system undergoing such freezing.
To begin, we provide a precise description of the dynamics under the freezing mechanism. For any
For κ>0, set
{∈_0^d: ≤κ} and
^N{∈ (_0^d)^N: _j∈, ∀ j∈[N]}.
Let ^N,κ:=(^N,κ_1,…,^N,κ_N) be the system of interacting MTBDPs
that is coupled to ^N, but where lineages cease to evolve once they reach a state where =κ.
Notably, the construction of ^N,κ is therefore also based on the independent Yule-type process system that dominates ^N,
as stated in Lemma <ref>.
Importantly, if ^N,κ(0)≤^N(0), then ^N,κ(t)≤^N(t) holds for all t≥ 0.
For ^N,κ the corresponding κ-truncated birth rates are defined as b^i,κ(,ν)_()b^i(,ν).
Analogously, define the κ-truncated death and mutations rates, d^i,κ and m^i,k,κ.
So under these κ-truncated rates, the MTBDPs that reach size κ are stopped.
The generator A^N,κ of ^N,κ
is then A^N,κf():=∑_j=1^N (A_^N,j,κ+A_^N,j,κ+A_^N,j,κ)f() for f∈Ĉ((_0^d)^N) with
A_^N,j,κf() :=∑_i=1^d b^i,κ(_j,π_)[f(+_j,i)-f() ]
A_^N,j,κf() :=∑_i=1^d d^i,κ(_j,π_)[f(-_j,i)-f() ]
A_^N,j,κf() :=∑_i,k ∈ [d], i k m^i,k,κ(_j,π_)[f(+_j,k-_j,i)-f() ].
Because the state space is (essentially) finite, and hence compact, we can now state the following proposition (see <cit.>).
The closure of {(f, A^N,κ f): f ∈ C((_0^d)^N)} is single-valued and
generates a Feller semigroup on C((_0^d)^N).
Also in the system with truncated dynamics, the empirical distribution process is Markov.
To be precise,
define for t≥ 0,
Π^N,κ(t):=1/N∑_j=1^Nδ_^N,κ_j(t)∈(_0^d)
and set Π^N,κ:=(Π^N,κ(t))_t≥ 0.
Its infinitesimal generator
B^N,κ is defined in the same way as B^N,
but with the κ-truncated transition rates
and modified domain
(because the domain of A^N,κ is different).
The following can be proved analogously to the proof of Proposition <ref>.
Let ν^N_0∈(_0^d) and assume ^N(0) has distribution α^N(ν^N_0,·).
For all t≥ 0, ^N,κ(t)=(^N,κ_1(t),…,^N,κ_N(t)) is exchangeable and
Π^N,κ is a Markov process with generator B^N,κ.
The proof of Theorem <ref> revolves around three key propositions,
all of which will be proved in <ref>.
All three propositions share the common assumption that ν^N_0∈(_0^d) and ^N(0) has distribution α^N(ν^N_0,·).
Moreover, we assume ν^N satisfies ν^Nν∈(_0^d).
For all T,η>0 and for all κ sufficiently large (depending on η), there is ε(κ,T) such that
[sup_t∈[0,T]‖Π^N,κ(t)-Π^N(t)‖_TV]≤η+ ε(κ,T)
and ε(κ,T) 0.
We have Π^N,κN→∞v^κ,
where v^κ=(v^κ(t))_t≥ 0 is the unique solution to the initial value problem:
v^κ(0)=ν∈(_0^d), for ∈_0^d∖: v^κ_{}(t)=ν_{}(0); and for ∈:
(v_{}^κ)'(t)= -v_{}^κ(t) ∑_i=1^d (b^i,κ(,v^κ(t))+d^i,κ(,v^κ(t))+∑_k=1, k≠ i^d m^i,k,κ(,v^κ(t)) )
+∑_i=1^d (v_{-_i}^κ(t)b^i,κ(-_i,v^κ(t)) + v_{+_i}^κ(t) d^i,κ(+_i,v^κ(t))
+ ∑_k=1, k≠ i^d v_{-_k+_i}^κ(t) m^i,k,κ(-_k+_i,v^κ(t))).
We have that {Π^N}_N∈ is tight.
We now prove Theorem <ref>.
Fix T, η > 0.
By Proposition <ref>, (Π^N)_N ∈ is tight.
Consider (Π^N_n)_n ∈ for a strictly increasing sequence (N_n)_n∈ in .
There exists a weakly convergent subsequence (Π^N_n_ℓ)_ℓ∈ and a càdlàg (_0^d)-valued process Π^⋆ with Π^N_n_ℓℓ→∞Π^⋆.
On the one hand,
by Proposition <ref>,
[sup_t∈[0,T]‖Π^N_n_ℓ,κ(t)-Π^N_n_ℓ(t)‖_TV]
≤η + ε(κ,T)
for κ large enough depending on η.
On the other hand, by Proposition <ref>,
Π^N_n_ℓ,κℓ→∞v^κ.
Let ρ be the following standard metric giving the Skorohod topology on the
space
D_(_0^d)[0,T]
of càdlàg paths from
[0,T] to (_0^d),
ρ(μ, ν)
:=
inf_λ∈Λ(sup_t ∈ [0,T] |t - λ(t)|
∨sup_t ∈ [0,T]μ(t) - ν∘λ(t)_TV),
where the infimum is over all continuous, increasing, bijections λ: [0,T] → [0,T].
(cf. equation (12.13) of <cit.>).
Let W_1 be the Wasserstein–1
metric on the space of probability measures on D_(_0^d)[0,T]
corresponding to ρ; that is,
W_1(P,Q) := inf_R ∫ρ(μ, ν) R(dμ, dν),
where the infimum is over all probability measures R on
D_(_0^d)[0,T] × D_(_0^d)[0,T] that have respective marginals P and Q. Recall that W_1 metrizes weak convergence on the space of probability measures on D_(_0^d)[0,T] (see, for example, Theorem 6.9 of <cit.>). If Φ and Ψ are random elements of
D_(_0^d)[0,T], write W_1(Φ, Ψ) for the
Wasserstein–1 distance between their respective distributions. Observe that if Φ and Ψ happen to be defined on the same probability space, then
W_1(Φ, Ψ)
≤[sup_t∈[0,T]‖Φ(t) -Ψ(t)‖_TV].
Now,
W_1(Π^⋆, v^κ)
≤ W_1(Π^⋆,Π^N_n_ℓ)
+ [sup_t∈[0,T]‖Π^N_n_ℓ(t)-Π^N_n_ℓ,κ(t)‖_TV]
+ W_1(Π^N_n_ℓ,κ, v^κ).
Taking ℓ→∞ leads to the bound
W_1(Π^⋆, v^κ) ≤η + ε(κ,T)
independent of the chosen subsequence (N_n_ℓ).
Since ε(κ,T)→ 0 as κ→∞, we obtain
v^κκ→∞Π^⋆ and
Π^NN→∞Π^⋆ upon taking κ→∞ and η→ 0.
In particular, Π^⋆=v of (<ref>).
§.§ Convergence of the dynamics under truncation
To establish the convergence of the system of MTBDPs that are frozen once they reach the set of frozen states parametrised by κ,
we employ standard methods.
In this regard, we rely on the following result, which is elaborated upon in <cit.> concerning the notation used.
<cit.>
Let (E,r) be complete and separable
and E_N⊂ E.
Let 𝒜⊂C̅(E)×C̅(E) and
v∈(E).
Assume
* the D_E[0,∞) martingale problem for (𝒜,v) has at most one solution, and
the closure of the linear span of 𝒟(𝒜) contains an algebra that separates points,
* for each N∈, X_N is a progressively measurable process
with measurable contraction semigroup {T_N(t)},
full generator 𝒜̂_N,
and sample paths in D_E_N[0,∞),
* {X_N} satisfies the compact containment condition;
that is, for every η>0 and T>0
there is a compact set Γ_η,T⊂ E
such that inf_N (X_N(t)∈Γ_η,T for 0≤ t≤ T)≥ 1-η,
* for each (f,g)∈𝒜 and T>0, there exists (f_N,g_N)∈𝒜̂_N and G_N⊂ E_N such that
lim_N→∞(X_N(t)∈ G_N, 0≤ t≤ T)=1,
sup_N‖ f_N‖ <∞, and
lim_N→∞sup_x∈ G_N‖ f(x)-f_N(x)‖=lim_N→∞sup_x∈ G_N‖ g(x)-g_N(x)‖=0,
* X_N(0)N→∞ v.
Then, there exists a solution X of the D_E[0,∞) martingale problem for (𝒜,v)
and X_N[]N→∞X.
To apply Theorem <ref>, one of the things to check is that {Π^N,κ: N∈} satisfies the compact containment condition.
In fact, we will prove the following stronger result.
Assume ν^N∈(_0^d) and ^N(0)=α^N(ν^N,·),
and that ν^Nν∈(_0^d).
Then, the sequences (Π^N)_N ∈ and (Π^N,κ)_N ∈ both satisfy the compact containment condition.
Fix η,T>0.
By the manner in which we have coupled together the construction of the processes involved,
we have for any t∈[0,T],
Π^N,κ(t)≤Π^N(t)≤Π^N_(t)≤Π^N_(T).
Recall that r is the solution to the Kolmogorov equation of a Yule-type process
(Lemma <ref>),
which is non-explosive.
Since Π^N_(T)r(T), Lemma <ref>,
we have that the collection of
distributions of the sequence
{Π^N_(T): N∈} is tight. Therefore, there exists a compact set K_η,T⊆ such that
P^N(Π_^N(T)∈ K_η, T)≥ 1-η for all N.
It only remains to note that if K is a compact subset of (_0^d), then so is the set ⋃_ν∈ K{μ∈(_0^d) : μ≤ν} and then apply
<ref>.
We are now prepared to prove the convergence of the empirical measure process in a system with truncation.
First we note that the initial value problem can be reduced to a finite system of ODEs.
Its right-hand side is Lipschitz continuous because the rates can be bounded using Assumption <ref>
and because ∈.
Existence and uniqueness of a solution to this system follows from classic theory (e.g. <cit.>).
Note that v^κ also solves (uniquely) the (B^N,κ,ν^κ) martingale problem,
because the martingale problem and the ODE in this
truncated (thus finite dimensional) setting are equivalent <cit.>.
We verify that the conditions of Theorem <ref> are satisfied.
To this end, take E=(_0^d) and
E_N=_1,N(_0^d) in Theorem <ref>.
That there is at most one solution to the (B^N,κ,ν^κ) martingale problem
follows from the discussion at the beginning of this proof.
Moreover, we have that
{f(v)=∏_∈ Yg_(v_) with g_∈Ĉ^1() and Y⊂_0^d, | Y| <∞}⊂C̅^1,f((_0^d))
is an algebra that separates points.
Thus, (1) holds.
Π^N,κ is an _1,N(_0^d)-valued adapted, càdlàg Markov process and thus progressively measurable.
Hence, (2) holds.
Lemma <ref> yields that (3) holds.
For (4), fix h∈C̅^1,f((_0^d)).
Without loss of generality,
we can assume that
h(ν)=h̃(ν_{^(1)},…, ν_{^(k)})
for some h̃∈C̅^1([0,1]^k) with k∈, where ^(1),…,^(k)∈_0^d.
We have to find a sequence {h^N} of functions in the domain of the generator of Π^N,κ
that approximates h (recall its form from (<ref>); but with f∈ C((_0^d)^N) because the truncated system state space is compact).
To this end, set
f̃^N()=h̃(π_( {^(1)} ) ,…, π_({^(k)}))∏_∈_0^d: ∈(π_)(Nπ_({}))!
and h̃^N(ν)=∑_∈ (_0^d)^N: π_=νf̃^N()=h̃(ν_{^(1)},…, ν_{^(k)}).
h̃ is in the domain of the generator of Π^N,κ.
Moreover,
lim_N→∞sup_ν∈(_0^d)| h(ν)-h^N(ν)|=0.
Since h̃ is bounded, also sup_N‖ h^N‖<∞.
Next, we show
sup_ν∈(_0^d)| B^N,κh^N(ν)-B^κh(ν)|0.
To this end, we start showing sup_ν∈(_0^d)| (B_^N,κh^N(ν)-B_^κ h(ν))|0.
Note that by Taylor's formula and <ref>,
| B^N,κ_ h^N(ν)-B_^κh(ν)|
=∑_∈∑_i=1^d ν_{} b^i,κ(,ν)[N(h(ν+δ_+_i-δ_/N)-h(ν))- ∂ h(ν)/∂ν_{+_i} +∂ h(ν)/∂ν_{}]
≤∑_∈{^(1),…,^(k)}Ld(_∙ + 1 ) · O(N^-1).
Since the right-hand side is independent of ν,
sup_ν∈(_0^d)| B_^N, κ h^N(ν)-B_^κ h(v)|0.
Analogously, it can be shown that
sup_ν∈(_0^d)| B_^N,κ h^N-B_^κ h(ν)|0 and sup_ν∈(_0^d)| B_^N, κ h^N-B_^κ h(ν)|0.
Then (4) follows by the triangle inequality.
By assumption, (5) holds.
In particular, we have checked (1)–(5) of Theorem <ref> and thus the result follows.
Next, we prove the bound on [sup_t∈[0,T]‖Π^N,κ(t)-Π^N(t)‖_TV] that is uniform in N.
First assume that the support of ν^N, N∈, and ν is contained in a compact set.
We can assume this compact set to be of the form {: ≤κ^⋆} for some κ^⋆.
In particular, we can assume ^N_1(0)≤κ^⋆.
Let κ>κ^⋆.
For t≤inf{s≥ 0: ^N_j(s)=κ}, we have ^N_j(t)=^N,κ_j(t).
Thus, |{j:^N_j(t)≠^N,κ_j(t)}|≤|{j:^N_j(t)∉}|.
In words:
the count of families that have different composition under the original and truncated dynamics
is bounded from above by
the count of families in the dominating pure birth process
that exited .
Recall that a Yule process starting from (κ^⋆+1) particles and with birth rate 4Ld per particle
is at time t ≥ 0 negative binomially distributed with parameters (κ^⋆+1) and e^{-4Ldt}.
Thus, we have by a comparison of ^N_1 with a Yule process,
the Markov inequality,
and the fact that in this Yule process births occur at rate 4Ld,
(^N_1(t)∉)≤d/κ(κ^⋆ +1)1-exp(-4Ldt)/exp(-4Ldt)ε(κ,t).
In particular,
|{j:^N_j(t)∉}| is bounded above by
a binomial distribution
with parameters N and ε(κ,t).
Then for T≥ 0,
sup_t∈ [0,T]‖Π^N,κ(t)-Π^N(t) ‖_TV ≤sup_t∈ [0,T]|1/N∑_j=1^N _{^N_j(t)≠^N,κ_j(t)}|
≤sup_t∈ [0,T]|{j:^N_j(t)≠^N,κ_j(t)}|/N
≤|{j:^N_j(T)∉}|/N.
Taking expectations yields that [ sup_t∈ [0,T]‖Π^N,κ(t)-Π^N(t) ‖_TV]≤ε(κ,T) as claimed.
If the assumption on the compactness of the support of the initial distributions is dropped,
then we still have that {ν^N} is tight.
Fix η>0. Then there is κ^⋆ such that for all N, ν^N(: >κ^⋆)≤η.
For κ>κ^⋆, we then have
[sup_t∈ [0,T]‖Π^N,κ(t)-Π^N(t) ‖_TV]
≤[ sup_t∈ [0,T]|1/N∑_j=1^N _{^N_j(t)≠^N,κ_j(t)}_{(^N_j(0))≤κ^⋆}|]+ η
≤ε(κ,T)+η.
Finally, we prove tightness of (Π^N)_N ∈.
By Proposition <ref>–(2),
tightness of {^N_1:N∈} implies tightness of {Π^N:N∈}.
We check the Aldous-Rebolledo criterion <cit.>:
If the following two conditions hold,
then {^N_1:N∈} is tight.
* For all N∈, and ε>0, there is N_0∈ and K∈_+ such that
N≥ N_0 ⇒ P^N(sup_t≤ N( ^N_1(t)) >K)≤ε.
* Let 𝒮_n^N be the set of all σ(^N)-stopping times that are bounded by n. For all n∈ and ε>0,
lim_θ↘ 0lim sup_N→∞sup_S,T∈𝒮_n^N: S≤ T≤ S+θ P^N( |(^N_1(T))-(^N_1(S))|≥ε)=0.
Condition (1) holds because ^N_1 can be bounded above by the Yule-type process ^N_1,
which is non-explosive.
For part (2), use the Markov inequality and
the comparison of ^N_1 with ^N_1, and the fact that (^N_1) is non-decreasing to get
P^N(|(^N_1(T))-(^N_1(S))|≥ε) ≤ε^-1[|(^N_1(T))-(^N_1(S))|]
≤ε^-1[|(^N_1(T))-(^N_1(S))|]
≤ε^-1[|(^N_1(S+θ))-(^N_1(S))|].
But ⌈(^N_1)/d⌉ is bounded above by a Yule process.
The number of jumps in [S,S+θ] of the Yule process is dominated by a Poisson distribution with parameter 4Ld(^N(n))θ.
Thus,
[ (^N_1(S+θ))-(^N_1(S))]/ε≤4Ld[(^N(n))]θ/ε0.
This proves (2).
Altogether, we have established tightness of {^N_1: N∈};
and thus, by Proposition <ref>–(2),
tightness of {Π^N:N∈}.
§ A COMPUTATIONALLY TRACTABLE SPECIAL CASE: LOCALLY SIMPLE WITH MOMENT-MEDIATED INTERACTIONS
In this section, we study a special case of the mean-field interacting MTBDP defined in <ref> that is amenable to calculations in the context of a phylogenetic birth-death model.
We specialize to a case with no local interactions, and with global interactions mediated by moments of the limiting transition probability v(t) defined by (<ref>).
This class of processes is rich enough to model both carrying capacity and frequency-dependent selection, and does not add undue computational complexity.
We adopt some notation from the phylogenetic birth-death model literature, but write d-vectors in bold.
The simple MTBDP is specified by the particle-wise birth and death rates of each of the d types, λ, μ ∈ ℝ^d, respectively, and the type transition rate matrix Γ ∈ ℝ^{d×d} (with zero row sums and non-negative off-diagonal entries).
As the mean-field interaction, we replace the usual constant death rates with death rates μ̃(𝔼[v]) that depend on the expected state vector, 𝔼[v] := ∑_{y ∈ ℕ_0^d} y v_{y} ∈ ℝ^d.
In terms of the general model of <ref>, the rates in this example are
b^i(y, v) = y_i λ_i,
d^i(y, v) = y_i μ̃_i(𝔼[v]),
m^{i,j}(y, v) = y_i Γ_{i,j},
where μ̃ ∈ C(ℝ^d, ℝ^d) specifies the moment-based mean-field dependence.
Using the above in the limiting ODE (<ref>), we obtain a nonlinear moment equation by multiplying both sides by y and summing over y ∈ ℕ_0^d.
Due to the simple rates, this moment equation closes, yielding the initial value problem
r_i'(t) = (λ_i - μ̃_i(r(t))) r_i(t) + ∑_{j=1}^d Γ_{ji} r_j(t), i = 1,…,d,
r(0) = r_0,
where r_0 is the expected initial state.
The existence and uniqueness of a solution to (<ref>) follows from Theorem <ref> under suitable continuity assumptions on μ̃.
Note that a solution of (<ref>) has finite expectation, via Lemma <ref>.
Definition <ref> specializes to a mean-field interaction mediated by the expected state vector (the first moment of the state distribution) as a special case of the general mean-field interactions considered in <ref>, where rates can have more general dependence on the state distribution v(t), as in (<ref>).
In this special case, the interaction field is the solution of the finite-dimensional nonlinear moment equation (<ref>), so we can bypass solving the full infinite-dimensional nonlinear forward equation (<ref>).
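Because the moment equation is a finite-dimensional ODE, it can also be integrated directly with a standard solver. The Python sketch below is our illustration only; the parameter values and the form of mu_tilde are arbitrary, chosen here to mimic a carrying-capacity-style interaction.

```python
import numpy as np
from scipy.integrate import solve_ivp

def moment_rhs(t, r, lam, Gamma, mu_tilde):
    """Nonlinear moment equation: r_i' = (lam_i - mu_tilde_i(r)) r_i + sum_j Gamma_ji r_j."""
    return (lam - mu_tilde(r)) * r + Gamma.T @ r

lam = np.array([1.0, 0.8])                      # per-particle birth rates
Gamma = np.array([[-0.05, 0.05],
                  [0.02, -0.02]])               # type-transition rate matrix (zero row sums)
mu_tilde = lambda r: 0.2 + 0.001 * r.sum()      # death rate grows with expected total size

sol = solve_ivp(moment_rhs, (0.0, 30.0), [5.0, 5.0],
                args=(lam, Gamma, mu_tilde), dense_output=True)
r_of_t = sol.sol   # interpolant for the expected state vector r(t)
```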
§.§ Successive approximations of a self-consistent field
Observe that, if we replace the mean-field interaction r(t) in equation (<ref>) with some fixed external field s ∈ C(ℝ_+, ℝ^d), we would have a linear ODE system for the moment vector r(t).
This suggests a Banach–Picard iteration approach to self-consistent fields, where each iteration k presents a linear d-dimensional ODE system for the moment vector r_k(t), fixing the previous iterate r_{k-1}(t) as the external field.
This procedure is an example of a standard fixed-point approach for mean-field self-consistency, solving a sequence of linear forward equations that converges to the solution to a nonlinear forward equation <cit.>.
Given τ > 0, an external field s ∈ C([0, τ], ℝ^d), and a simple MTBDP with rates that are independent of v, we have
b^i(y, v) = y_i λ_i,
d^i(y, v) = y_i μ̃_i(s),
m^{i,j}(y, v) = y_i Γ_{i,j}.
The moment equation for r := ∑_{y ∈ ℕ_0^d} y v_{y} is
r' = A(s) r,
r(0) = r_0,
where
A(s) := diag(λ - μ̃(s)) + Γ
is the coefficient matrix given the external field s,
and r_0 is the expected initial state.
Now define a map 𝒯 : C_1([0, τ], ℝ^d) → C_1([0, τ], ℝ^d) via the solution to the initial value problem (<ref>) on the interval [0, τ]; that is, 𝒯[s] = r.
A fixed-point iteration scheme for the moment-based field is detailed in Algorithm <ref> using Definition <ref>.
Given the form of the field interaction in (<ref>), we have tacitly assumed that the map 𝒯, defined via solving (<ref>), is contractive in the sense required for the Banach fixed-point theorem to hold.
In general, given a field interaction, we would need to establish contractivity of the resulting map 𝒯 to ensure that the iterations converge to the unique solution.
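A minimal Python sketch of this fixed-point scheme (our illustration of the idea behind Algorithm <ref>, not the authors' implementation): each iteration freezes the previous iterate as the external field, solves the resulting linear moment ODE, and stops when successive iterates agree to tolerance.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

def solve_linear_moment(field, lam, Gamma, mu_tilde, r0, t_grid):
    """Solve r' = (lam - mu_tilde(field(t))) * r + Gamma.T @ r for a fixed external field."""
    def rhs(t, r):
        return (lam - mu_tilde(field(t))) * r + Gamma.T @ r
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), r0, t_eval=t_grid)
    return sol.y                      # shape (d, len(t_grid))

def self_consistent_field(lam, Gamma, mu_tilde, r0, t_grid, tol=1e-6, max_iter=100):
    """Banach-Picard iteration r_k = T[r_{k-1}], returning an approximation of the fixed point."""
    r_prev = np.tile(np.asarray(r0, dtype=float)[:, None], (1, len(t_grid)))  # constant initial guess
    for _ in range(max_iter):
        field = interp1d(t_grid, r_prev, fill_value="extrapolate")
        r_next = solve_linear_moment(field, lam, Gamma, mu_tilde, r0, t_grid)
        if np.max(np.abs(r_next - r_prev)) < tol:
            return r_next
        r_prev = r_next
    return r_prev
```

With the linear interaction of Example <ref> below, one would take, for instance, mu_tilde = lambda s: mu + W @ s.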
[Linear moment interaction]
As a simple and biologically interpretable example of the special case of Definition <ref>, we take μ̃(𝔼[v]) = μ + W 𝔼[v], where μ ∈ ℝ^d is constant and the matrix W ∈ ℝ^{d×d} parameterizes the interaction.
If W is the matrix of 1s, then each element of W 𝔼[v](t) is the expected total population size of the focal process at time t, and death rates increase with this total size, enforcing a carrying capacity.
Otherwise, the death rates are also sensitive to the expected relative frequency of each type.
For example, if W is diagonally dominant, then the model includes negative frequency-dependent selection.
Technically, to satisfy Assumption <ref>, we require that this linear term is truncated above some value of the expected total size.
In practice, we take this cut-off to be very large, such that the numerical results below are not impacted.
§.§ Steady states induced by mean-field interaction
While the simple MTBDP displays only trivial steady states (or constant ones, if it is critical), the MTBDP with mean-field interaction admits more interesting behavior.
Steady-state behavior can be examined by imposing a criticality condition on the self-consistent field.
For Example <ref>, steady states r_∞ ∈ ℝ^d satisfy
(diag(λ - μ - W r_∞) + Γ) r_∞ = 0.
Nontrivial solutions of this system of nonlinear algebraic equations for the critical field can be found numerically with standard root-finding methods, and are indeed steady states as long as the process is supercritical when the field vanishes.
For results on steady state solutions in strongly interacting MTBDPs, see <cit.>.
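As a sketch of that root-finding step for the linear interaction of Example <ref> (our illustration; the parameter values and initial guess are arbitrary, and we write the transition term with the same transpose convention as the moment equation above):

```python
import numpy as np
from scipy.optimize import root

def steady_state_residual(r_inf, lam, mu, W, Gamma):
    """Residual of the critical-field condition diag(lam - mu - W r_inf) r_inf + Gamma.T @ r_inf = 0."""
    return (lam - mu - W @ r_inf) * r_inf + Gamma.T @ r_inf

lam = np.array([1.0, 0.8])
mu = np.array([0.2, 0.2])
W = np.full((2, 2), 1e-3)                        # all-ones-style W: carrying capacity on total size
Gamma = np.array([[-0.05, 0.05], [0.02, -0.02]])

guess = np.array([500.0, 500.0])                 # nontrivial starting point, away from the origin
r_inf = root(steady_state_residual, guess, args=(lam, mu, W, Gamma)).x
```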
§.§ Numerical examples
Figure <ref> shows numerical results for the self-consistent field r^∗ computed via Algorithm <ref> for Example <ref> with d = 5 types.
The field in this case represents the vector of expected particle counts over the 5 types.
These three examples model carrying capacity, negative frequency-dependent selection, and positive frequency-dependent selection, and all use the same rate parameters λ, μ, Γ (Figure <ref>A-C) but different W matrices.
Without mean-field interaction (W = 0), this MTBDP is supercritical, and the expected particle counts grow exponentially (Figure <ref>D).
One particle type has a higher birth rate than the others, so it grows faster.
With a carrying capacity interaction (Figure <ref>E), the population reaches a stationary phase due to a mean-field interaction that increases death rates linearly with the expected population size.
With negative frequency-dependent selection (via a diagonally dominant W), the types are more balanced (Figure <ref>F) because the death rate of a given type is suppressed only by growth of that type.
With positive frequency-dependent selection (via a diagonally non-dominant W), the death rate of a given type is less suppressed by growth of that type than the others, leading to an enhancing effect on the type with the birth rate advantage (Figure <ref>G).
For all these examples, the fixed-point iteration converged rapidly, with less than 50 iterations needed to achieve an L^2-norm tolerance of 10^-6.
Note that, for the interaction specified by Example <ref>, the nonlinear moment equation (<ref>) is of Riccati type, with only quadratic nonlinearities.
In this case, numerical solutions may be attempted directly on the nonlinear equation, without the self-consistent field iterations of Algorithm <ref>.
However, we have implemented the self-consistent field procedure for this example to demonstrate this conceptually simple and widely used computational technique for mean-field calculations.
A Python implementation producing the results above is available at <https://github.com/WSDeWitt/mfbd>.
This code relies on the package <cit.> for numerical ODE solutions.
To evaluate the map 𝒯[](t) (Definition <ref>), we use Kvaerno's 5/4 method <cit.>—an implicit stiffly accurate solver <cit.>—and order-3 Hermite interpolation for dense evaluation in the time domain.
To adapt step sizes we use an I-controller <cit.>.
§.§ Integrating mean-field interactions in phylogenetic birth-death models
Phylogenetic birth-death models augment the simple MTBDP with a sampling process that results in partially observed histories, and are considered as generative models for phylogenetic trees.
They add two additional parameters: the sampling probability ρ gives the probability that any given particle at a specified final sampling time (the present) is sampled, and the fossilization probability σ gives the probability that a death event before the present is observed.
The tree is then partially observed by pruning out all subtrees that are not ancestral to a sampled tip or fossil.
Computing likelihoods for rate parameters on phylogenetic trees requires marginalizing out all possible unobserved sub-histories, conditioned on the partially observed history.
We briefly outline this calculation, augmented with mean-field interactions.
We use notation like that of <cit.> and <cit.>.
Given the parameters for the system in Definition <ref>, and measuring time backwards from the present sampling time, the probability density requires solving three coupled initial value problems (the standard case without mean-field interactions solves two systems).
First, the self-consistent field r(t) is calculated as in <ref> by solving a sequence of d-dimensional initial value problems (we reverse time such that the process starts at the tree root time τ > 0 and ends at t = 0).
Next, we need as an auxiliary calculation the probability p_i(t) that a particle of type i at time t (before the present) will not be observed in the tree—that is, it will not be sampled and will not fossilize.
These are given by the system of backward equations (of Riccati type)
p_i'(t) = λ_i p_i(t)^2 - (λ_i + μ_i + ∑_j=1^d W_ijr_j(t)) p_i(t)
+ ∑_j=1^d Γ_ij p_j(t) + (1-σ)(μ_i + ∑_j=1^d W_ijr_j(t))
p_i(0) = 1-ρ,
where r is given as in Section <ref>.
These are solved on the interval [0, τ] where τ is the age of the root of the tree.
Finally, we compute the likelihood contribution for each of the B tree branches b = 1, …, B.
Fixing some branch b with type i spanning the half-open interval (t_1, t_2], let q_i(t) denote its branch propagator, defined as the solution of the backward equation
q'_i(t) = (2λ_i p_i(t) + Γ_ii - λ_i - μ_i - ∑_j=1^d W_ijr_j(t))q_i(t)
q_i(t_1) =
ρ, if branch b leads to a sample at t_1=0
σμ_i, if branch b leads to a fossil at t_1>0
λ_i q_left(t_1)q_right(t_1), if branch b splits at time t_1>0
Γ_ijq_j(t_1), if branch b transitions to type j at time t_1>0
where q_left and q_right denote the propagators of the left and right children of branch b.
This system is coupled via the boundary conditions for each branch, and can be solved recursively by post-order tree traversal, yielding the tree likelihood accumulated at the root.
Standard phylogenetic birth-death models are recovered by setting W = 0 and solving only the p and q systems.
By solving the r, p, and q systems in the case W ≠ 0, it is possible to compute tree likelihoods under phylogenetic birth-death processes that model interactions, while maintaining the efficient post-order calculation of likelihoods.
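The Python sketch below illustrates the two backward systems just described—the Riccati-type p-system and the propagator along a single branch—without the tree-traversal bookkeeping; the function names, signatures, and the way the field r(t) and the solution p(t) are passed around are our own choices, not the released implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def extinction_probs(lam, mu, Gamma, W, r, sigma, rho, tau):
    """Solve the backward system for p_i(t), with t measured back from the present."""
    def rhs(t, p):
        death = mu + W @ r(t)                   # field-augmented per-particle death rates
        return lam * p**2 - (lam + death) * p + Gamma @ p + (1.0 - sigma) * death
    p0 = np.full(len(lam), 1.0 - rho)
    return solve_ivp(rhs, (0.0, tau), p0, dense_output=True).sol   # callable p(t)

def branch_propagator(i, q_start, t1, t2, lam, mu, Gamma, W, r, p):
    """Propagate q_i along a type-i branch from t1 to t2 (both measured back from the present)."""
    def rhs(t, q):
        death = mu[i] + W[i] @ r(t)
        return (2.0 * lam[i] * p(t)[i] + Gamma[i, i] - lam[i] - death) * q
    return solve_ivp(rhs, (t1, t2), [q_start]).y[0, -1]
```

Here q_start is the boundary value dictated by the event at t1 (ρ for a sampled tip, σ μ_i for a fossil, λ_i q_left q_right at a split, or Γ_{ij} q_j at a type transition), and the per-branch propagators are combined by post-order traversal to accumulate the likelihood at the root.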
§ DISCUSSION
Incorporating interactions in birth-death processes is challenging for inference applications, but we summarize some developments.
<cit.> developed techniques based on continued fraction representations of Laplace convolutions to calculate transition probabilities for general single-type birth-death processes, without state space truncation.
<cit.> calculate transition probabilities for the birth/birth-death process—a restricted bivariate case where the death rate of one type vanishes, but rates may be otherwise nonlinear.
<cit.> use branching process approximations of birth-death processes and generating-function machinery <cit.> for moment estimation.
<cit.> study single-type branching processes with strong interactions, restricted to a regime in which duality methods can be used to characterize the stationary distribution.
Instead of the strong interactions considered in the above work, we have introduced an MTBDP with mean-field interactions.
This mean-field system restores (in the limit) the computational tractability of the non-interacting case.
We have established the fairly general conditions under which this process is well-defined, demonstrated how to perform mean-field calculations in the context of a phylogenetic birth-death model, and provided an efficient software implementation.
While we were motivated by evolutionary dynamics of antibodies in germinal centers, we also foresee applications to other somatic evolution settings, such as tumor evolution and developmental lineage tracing, and to experimental microbial evolution.
While we have outlined how to evaluate likelihoods for phylogenetic birth-death models with mean-field interaction, we leave inference on biological data for future work.
We have presented an iterative self-consistent field method for solving the mean-field equations.
Mean-field calculations in physical applications often rely on the self-consistent field method (for example, the Hartree-Fock, and density-functional theories for quantum many-body systems) <cit.>.
The method can suffer from slow convergence, non-convergence, or even divergence of the iterates, although there are several regularization techniques for controlling these issues.
The self-consistent field method applies to our MTBDP setting and seems to behave well for the interactions we consider, but it is possible that it will behave poorly for other interactions.
Moment-mediated interactions may allow for more direct nonlinear solution approaches, however.
We have suppressed explicit time dependence in the particle-wise birth and death rates λ and μ for notational compactness, but all the results of <ref> and <ref> extend to the inhomogeneous case λ(t) and μ(t) with suitable continuity assumptions in the time domain.
We note, however, that our mean-field approach involves effective time-dependence in the rates even if the intrinsic rates are not explicitly time-dependent.
This effective time-dependence arises from specifying a finite number of dynamical parameters (i.e., the rates and the interaction matrix W) that uniquely determine an effective field via the condition of self-consistency, Theorem <ref>.
Finally, we notice that our mean-field system of N interacting replica trees has a self-similarity property: if we consider a subset of N particles from one of the replicas at time t>0, this looks like the starting configuration of a new N-system.
This suggests that our mean-field model could also be used as an approximation for strong interactions within a single MTBDP.
However, the appropriate notions of convergence and exchangeability are less clear in this case.
The validity of a mean-field approximation for a single self-interacting MTBDP would seem to involve a delicate balance of quenched disorder from early times when the process is small, on the one hand, with the limiting mean-field interaction when the process is large, on the other hand.
We save these questions for future work.
§ ACKNOWLEDGMENTS
WSD thanks Erick Matsen and Gabriel Victora for discussions on carrying capacity and competitive interactions in germinal center evolution, Yun Song for suggesting fixed-point methods for self-consistency calculations, and Volodymyr Minin for discussions on moment-based techniques for birth-death processes.
WSD was supported by a Fellowship in Understanding Dynamic and Multi-scale Systems from the James S. McDonnell Foundation.
EH was funded by the Citadel Fellowship of the Statistics Department at the University
of California at Berkeley.
SH was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Projektnummer 449823447.
|
http://arxiv.org/abs/2307.04673v1 | 20230710161837 | Holographic $T\bar{T}$ deformed entanglement entropy in dS$_3$/CFT$_2$ | [
"Deyou Chen",
"Xin Jiang",
"Haitang Yang"
] | hep-th | [
"hep-th"
] | |
http://arxiv.org/abs/2307.05834v1 | 20230711225853 | Scaling Distributed Multi-task Reinforcement Learning with Experience Sharing | [
"Sanae Amani",
"Khushbu Pahwa",
"Vladimir Braverman",
"Lin F. Yang"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
|
http://arxiv.org/abs/2307.04210v1 | 20230709154627 | Investigating the Edge of Stability Phenomenon in Reinforcement Learning | [
"Rares Iordan",
"Marc Peter Deisenroth",
"Mihaela Rosca"
] | cs.LG | [
"cs.LG"
] |
Investigating the Edge of Stability Phenomenon in Reinforcement Learning
Rares Iordan [email protected]
University College London, United Kingdom
Marc Peter Deisenroth [email protected]
University College London, United Kingdom
Mihaela Rosca
[email protected]
University College London, United Kingdom
Also at Google Deepmind.
Recent progress has been made in understanding optimisation dynamics in neural networks trained with full-batch gradient descent with momentum with the uncovering of the edge of stability phenomenon in supervised learning <cit.>.
The edge of stability phenomenon occurs as the leading eigenvalue of the Hessian reaches the divergence threshold of the underlying optimisation algorithm for a quadratic loss, after which it starts oscillating around the threshold, and the loss starts to exhibit local instability but decreases over long time frames.
In this work, we explore the edge of stability phenomenon in reinforcement learning (RL), specifically off-policy Q-learning algorithms across a variety of data regimes, from offline to online RL. Our experiments reveal that, despite significant differences to supervised learning, such as non-stationarity of the data distribution and the use of bootstrapping, the edge of stability phenomenon can be present in off-policy deep RL. Unlike supervised learning, however, we observe strong differences depending on the underlying loss, with DQN — using a Huber loss — showing a strong edge of stability effect that we do not observe with C51 — using a cross entropy loss. Our results suggest that, while neural network structure can lead to optimisation dynamics that transfer between problem domains, certain aspects of deep RL optimisation can differentiate it from domains such as supervised learning.
§ THE EDGE OF STABILITY PHENOMENON
<cit.> use a thorough experimental study to shed light on deep learning optimisation dynamics by showing that full-batch gradient descent training in supervised learning exhibits two phases: progressive sharpening and edge of stability. In the first stage of training, progressive sharpening, the leading eigenvalue of the Hessian, λ_1, increases steadily and the loss decreases monotonically. As λ_1 increases, it reaches the divergence threshold of the underlying optimisation algorithm under a quadratic loss assumption (we will call this “the quadratic divergence threshold"); for gradient descent with learning rate η and momentum decay rate β, the quadratic divergence threshold is 1/η (2 + 2β). As λ_1 reaches the quadratic divergence threshold, the edge of stability phenomenon occurs: the loss starts to exhibit short-term instabilities, while decreasing over long time scales; λ_1 no longer steadily increases, but fluctuates around the threshold. For cross entropy losses, the edge of stability area is followed by a decrease in λ_1, while for mean squared error losses λ_1 stays in the edge of stability area. Similar results are shown for stochastic gradient descent, across batch sizes, though the effect is less pronounced as the batch sizes decrease <cit.>.
The edge of stability phenomenon shows that neural network training is strongly affected by the quadratic divergence threshold of the underlying optimisation algorithm, and exceeding it leads to training instabilities. This observation
has garnered a lot of interest, with recent works having analysed the edge of stability phenomenon in supervised learning with both theoretical and empirical tools <cit.>. To the best of our knowledge, no studies on the edge of stability phenomenon have been made outside of supervised learning. We complement this body of work by empirically investigating whether the edge of stability phenomenon occurs in off-policy deep RL algorithms DQN and C51 across a variety of data regimes, from offline to online learning. Upon acceptance, we will make the code and data used publicly available.
§ CHALLENGES WITH OPTIMISATION IN OFF-POLICY DEEP REINFORCEMENT LEARNING
To investigate whether the edge of stability phenomenon translates to deep RL, we conduct experiments using DQN <cit.> and C51 <cit.>, two off-policy algorithms that model the state-action value function Q(s, a; θ) with a neural network with parameters θ. We study both DQN and C51 as their losses correspond to the mean squared error and cross entropy loss used in supervised learning, studied by <cit.> when investigating the edge of stability phenomenon. For DQN, we investigate the more commonly used Huber loss, which is quadratic around 0:
min_θ E(θ) =
𝔼_(s, a, s', r) ∼ℛ [ 1/2 ( Q(s, a; θ) - (r + γ max_a' Q(s', a'; θ)) )^2 ],   if ( Q(s, a; θ) - (r + γ max_a' Q(s', a'; θ)) )^2 ≤ 1,
𝔼_(s, a, s', r) ∼ℛ [ | Q(s, a; θ) - (r + γ max_a' Q(s', a'; θ)) | - 1/2 ],   otherwise.
C51 <cit.>, the distributional counterpart of DQN, models a discrete distribution over returns instead of operating in expectation as in Eq (<ref>), leading to a cross entropy loss; for details, we refer to Appendix <ref>.
Off-policy deep RL algorithms like DQN and C51 differ from supervised learning both through their objectives—which use bootstrapping—and the data present in the replay buffer ℛ used for learning the agent, which can be non-stationary, have noise inserted to help exploration, and can be obtained from the agent's own experience. All the above affect optimisation dynamics in deep RL, and have been studied individually <cit.>. Bootstrapping — the dependence of the regression target in Eq (<ref>) on the Q-function — can lead to increased variance and bias in model updates <cit.>. To mitigate instabilities introduced by bootstrapping often a target network is used, where old parameters updated at regular intervals are used to construct the target instead of the current parameters; both DQN and C51 use target networks.
In online RL, where the replay buffer ℛ is filled with the agent's own experience, the non-stationarity of the data present in the replay buffer ℛ violates the i.i.d. assumption required by many optimisation algorithms and gradient updates might not form a gradient vector field <cit.>.
Offline RL <cit.>, where the agent learns from a fixed replay buffer often gathered from another agent or expert demonstrations, can mitigate some of the training challenges with online RL, but can suffer from poor agent performance due to distributional shift <cit.> — the discrepancy between the data distribution used for learning, present in the offline dataset (the replay buffer), and the distribution the policy encounters during execution.
Given these peculiarities of RL, it is unclear how optimisation effects observed in supervised learning, such as the edge of stability results, transfer to deep RL, and they interact with the behaviour of the loss function. Since the value of the loss function in RL has not been connected with agent performance, many RL works do not study or report loss function behaviour, and focus on the agent reward instead. Here, we focus on the behaviour of the loss function in deep RL and aim to connect it with the value of the leading Hessian eigenvalue through edge of stability results.
§ INVESTIGATING THE EDGE OF STABILITY PHENOMENON IN REINFORCEMENT LEARNING
When studying the edge of stability phenomenon in deep RL,
we aim to isolate the effect of the data distribution from the other aspects of RL, such as the use of bootstrapping. We thus train agents across multiple data regimes, ranging from offline learning to online learning.
We use gradient descent
with and without momentum on MinAtar <cit.>, a subset of Atari games with reduced visual complexity; MinAtar results have been shown to translate to Atari <cit.>. We show results on Breakout in the main text, with Space Invaders results in Appendix <ref>. Since not all the data regimes we consider allow for full-batch training, results in this section use mini-batch training; we provide full-batch training results in Appendix <ref>. Experimental details are provided in Appendix <ref>, and training details of the pre-trained agent used in the offline RL experiments are in Appendix <ref>.
§.§ DQN
We use a pre-trained agent's greedy policy to generate a replay buffer of 10^6 transitions and use this to train a new DQN agent. This setup is closest to that of supervised learning, and isolates the effect of the RL losses and bootstrapping from RL specific effects on the data distribution. Figure <ref> shows a clear edge of stability effect: the leading eigenvalue λ_1 grows until reaching the quadratic divergence threshold, after which it fluctuates around the threshold and the loss function shows increasing instabilities; this is consistent with results using the mean squared error in supervised learning. Consistent with existing results <cit.>, the performance of the agent is poor, likely due to distributional shift.
To address the challenges with distributional shift and increase the diversity of agent experience, instead of using greedy actions taken by the pre-trained agent to generate the replay buffer, we use an ϵ-greedy policy with 0.7 probability of taking the greedy action from the pre-trained agent and 0.3 probability of a random action. This is akin to the concurrent setting in <cit.>. Results in Figure <ref> show that this intervention vastly improves the reward obtained by the agent compared to the previous setting explored, but the optimisation dynamics retain the edge of stability behaviour.
We train an agent using the last 10^6 transitions obtained from the pre-trained agent's online phase; this is known as the final buffer setting <cit.>. This brings us closer to online RL: we still use a fixed replay buffer to train the agent, but that dataset contains transitions from a changing distribution. This allows us to isolate the effect of the replay buffer being obtained from a series of changing policies from the interactions of the agent's behaviour affecting its own replay buffer, as we see in online learning. Results in Figure <ref> show that here too, we observe the edge of stability phenomenon.
In online RL, the replay buffer is obtained from the agent's own experience, leading to the related challenges mentioned in Section <ref>. Figure <ref> shows that the leading eigenvalues of the Hessian grow early in training but plateau below the quadratic divergence threshold; despite λ_1 not reaching the quadratic divergence threshold, once it plateaus we observe increased instability in the loss function.
We show full-batch results for the above offline RL cases in Figures <ref>, <ref>, <ref> in the Appendix, which consistently show that as λ_1 fluctuates around the quadratic divergence threshold the loss function exhibits increased instabilities. We note that while changing the target network can have a short term effect on λ_1, it does not drastically affect its long term trajectory and the edge of stability phenomenon.
§.§ C51
When investigating the edge of stability effect using C51 across all the above data regimes, we observe that C51 does not clearly exhibit the edge of stability behaviour; we show selected results in Figure <ref> and additional results in Figures <ref> and <ref> in the Appendix. However, similar to the observations in supervised learning with cross-entropy loss <cit.>, we notice that λ_1 grows early in training, after which it consistently decreases. In offline learning (Figure <ref>), the leading eigenvalue λ_1, usually stays under the quadratic divergence threshold, but this is not the case in online learning (Figure <ref>), where λ_1 is consistently significantly above the quadratic divergence threshold. This observation might explain why we observed increased challenges with training C51 in this setting compared to DQN (results in Figure <ref> use stochastic gradient descent without momentum, as using momentum led to very poor results, see Figure <ref> in the Appendix).
§ DISCUSSION
We examined the edge of stability effect in DQN and C51, two off-policy deep RL algorithms on simple environments. Our findings suggest that the edge of stability phenomenon can be induced by neural network optimisation in deep RL, but whether this occurs depends on underlying algorithm. We observed that DQN exhibits the edge of stability behaviour in offline RL, with a diminished effect in online RL. In contrast, we did not observe a consistent edge of stability effect when using C51, but nonetheless did observe a connection between large leading Hessian eigenvalues and challenges in training C51 agents.
Caveats and future work.
Our results were obtained on the MinAtar environment; we hope that future studies will expand our results to a wider range of environments. Following <cit.>, we investigate the edge of stability phenomenon in RL when using gradient descent with momentum; we believe future work can expand our exploration to adaptive optimisers commonly used in RL, such as Adam <cit.>, as has recently been done in supervised learning <cit.>. We further hope future research can connect the leading eigenvalue of the Hessian to the agent's performance, not only the loss, as has been done in supervised learning with generalisation <cit.>.
§ ADDITIONAL EXPERIMENTAL RESULTS
§.§ SGD with momentum results for C51 on Breakout
In Figure <ref> we present C51 results on Breakout which do not clearly show an edge of stability effect. In the online regime, the eigenvalues consistently rise orders of magnitude above the quadratic threshold but reach low levels and plateau later in training.
§.§ Experiments on Breakout using SGD with momentum and full-batch
In Figure <ref> we present full-batch experiments for DQN and C51 in the setting , with a zoom in on the first time the quadratic threshold is achieved in Figure <ref>. DQN shows a clear edge of stability effect which is broken later during training where we see increased instabilities. C51 does not show an edge of stability effect with the eigenvalues plateauing over time.
In Figure <ref> we present full-batch experiments for DQN and C51 in the setting , with a zoom in on the first time the quadratic threshold is achieved in Figure <ref>. Similar to the previous setting, DQN shows a clear edge of stability effect which is broken later during training where we see increased instabilities. C51 does not show an edge of stability effect with the eigenvalues plateauing over time.
In Figure <ref> we present full-batch experiments for DQN and C51 in the setting , with a zoom in on the first time the quadratic threshold is achieved in Figure <ref>. Similar to the previous two settings, DQN shows a clear edge of stability effect which is broken later during training where we see increased instabilities. C51 does not show an edge of stability effect with the eigenvalues plateauing over time.
§.§ SGD results for DQN and C51 on Breakout
In Figure <ref> we present DQN results on Breakout with SGD without momentum which do not clearly show an edge of stability effect. In the online regime, the eigenvalues consistently fail to rise to the quadratic threshold but reach low levels and plateau later in training. There is a clear edge of stability effect offline.
In Figure <ref> we present C51 results on Breakout with SGD without momentum which show an edge of stability effect. In the offline regimes, the eigenvalues rise slightly past the quadratic threshold and then decrease to hover around it. In the online regime, the eigenvalues consistently rise orders of magnitude above the quadratic threshold.
§.§ Results for DQN in the Space Invaders environment with and without momentum
In Figure <ref> we present DQN results on Space Invaders with SGD with momentum which clearly show an edge of stability effect in every offline setting. In the online regime, the eigenvalues consistently rise above the quadratic threshold and it has no effect on the trend of the sharpness (λ_1).
In Figure <ref> we present DQN results on Space Invaders with SGD without momentum which clearly show an edge of stability effect in every offline setting. In the online regime, there exists a trace of the edge of stability behaviour, however, later during training the principal eigenvalue consistently rises above the quadratic threshold.
§ EXPERIMENTAL DETAILS
§.§ Neural architectures
We used a similar neural network architecture for both DQN and C51. The network consists of 1 Convolutional Layer, followed by 1 Fully Connected (FC) Layer with and an Output Layer which depends on the algorithm. The Convolutional Layer has a kernel size of 3 and a stride of 1 and is configured differently based on the game due to different channel numbers. The rest of the details can be found in Table <ref>.
§.§ Algorithms
The pseudocode for the DQN algorithm is presented below[A detailed description can be found in <cit.>]:
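As the algorithm float is not reproduced in this text, the following PyTorch-style sketch illustrates the core update step it describes; it is a minimal illustration rather than the verbatim algorithm, and the objects q_net, target_net, optimizer and the sampled batch are assumed to be supplied by the surrounding training loop.

import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    # One gradient step on the Huber loss of Eq. (<ref>); `batch` holds tensors
    # (states, actions, rewards, next_states, dones) sampled from the replay buffer.
    states, actions, rewards, next_states, dones = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target built from the periodically copied target network.
        target = rewards + gamma * (1.0 - dones) * target_net(next_states).max(dim=1).values
    loss = F.smooth_l1_loss(q_sa, target)  # Huber loss with unit threshold
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()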
As an extension of DQN, <cit.> proposed to look at the entire value distribution dubbed Z instead of considering expectations. Such a view permits the definition of distributional Bellman equations and operators[The Mathematics behind Categorical DQN is expanded in <cit.>.]. Z is described discretely by a number N ∈ℕ and V_MIN, V_MAX∈ℝ, and whose support is the set of atoms { z_i = V_MIN + i Δ z : 0 ≤ i < N } with Δ z = V_MAX - V_MIN/N - 1. These atoms represent the "canonical returns" of the distribution and each has probability given by a parametric model θ : 𝒮×𝒜→ℝ^N:
Z_θ (s, a) = z_i with probability p_i(s, a) = e^θ_i(s, a)/∑_j e^θ_j(s, a)
The update is computed via 𝒯̂ Z_θ where 𝒯̂ is an operator, but a discrete distributional view poses a problem because Z_θ and 𝒯̂ Z_θ almost always have disjoint supports. To combat this issue, the update is reduced to multi-class classification by being projected onto the support of Z_θ. Assume that π is the greedy policy w.r.t 𝔼 [Z_θ]. Given a tuple (s, a, r, s', γ) the term 𝒯̂ z_j = r + γ z_j for each atom z_j. The probability p_j(s', π(s')) is distributed to the immediate neighbours of 𝒯̂ z_j via a projection operator Φ whose i^th component is given by[The quantity [ · ]_a^b bounds the argument in the interval [a, b].]:
( Φ𝒯̂ Z_θ (s, a) )_i = ∑_j=0^N-1 [ 1 - | [ 𝒯̂ z_j ]_V_MIN^V_MAX - z_i |/Δ z ]_0^1 p_j (s', π(s'))
In the end, as a DQN derivative, a policy network and a target network model Z_θ and Z_θ̂ (respecting the notation of Algorithm <ref>) with the loss ℒ given by the cross-entropy term of the KL Divergence:
D_KL ( Φ𝒯̂ Z_θ̂ (s, a) || Z_θ (s, a) )
The routine of Categorical DQN is the same as before with the only exception being the loss computation, which is given by:
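Since the corresponding algorithm float is likewise not reproduced here, the following sketch shows one way the projection Φ and the cross-entropy term of the KL divergence above can be computed; tensor names and shapes are illustrative and not taken verbatim from our implementation (all tensors are assumed to live on the same device).

import torch

def c51_loss(logits, next_probs, rewards, dones, gamma, v_min=-10.0, v_max=10.0, n_atoms=51):
    # `logits` are the policy-network logits of Z_theta(s, a); `next_probs` are the
    # target-network probabilities of Z_theta_hat(s', pi(s')) for the greedy action.
    delta_z = (v_max - v_min) / (n_atoms - 1)
    z = torch.linspace(v_min, v_max, n_atoms)                       # atom support z_i
    tz = (rewards.unsqueeze(1) + gamma * (1.0 - dones).unsqueeze(1) * z).clamp(v_min, v_max)
    b = (tz - v_min) / delta_z                                      # fractional atom index of T_hat z_j
    lower, upper = b.floor().long(), b.ceil().long()
    on_atom = (lower == upper).float()                              # mass stays put if b is exactly an atom
    projected = torch.zeros_like(next_probs)                        # Phi T_hat Z_theta_hat(s, a)
    projected.scatter_add_(1, lower, next_probs * (upper.float() - b + on_atom))
    projected.scatter_add_(1, upper, next_probs * (b - lower.float()))
    log_p = torch.log_softmax(logits, dim=1)
    return -(projected * log_p).sum(dim=1).mean()                   # cross-entropy term of the KL divergence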
§.§ Offline RL reproduction details
In this paper we examined three different offline RL replay buffers for Breakout on Minatar:
* 10^6 transitions that were obtained from the experience of a pre-trained agent with no action perturbation.
* 10^6 transitions that were obtained from the experience of a pre-trained agent where during game-play 30% of the actions taken were random (instead of the greedy action being taken).
* 10^6 transitions that were obtained from last 10^6 transitions from the replay buffer of an agent that was trained with Adam, online.
§.§ Optimistion details and how to replicate results
We studied both the full-batch and the mini-batch settings of GD and momentum for both algorithms. The mini-batch experiments always used a batch size of 512 and the full-batch experiments were performed on a sub-sample of 10^4 transitions from the original replay buffers which consisted of 10^6 transitions.
When experimenting with gradient descent, the learning rate was 0.01 and when adding momentum the learning rate was 0.01 and the momentum coefficient was β = 0.8. The initial γ parameter to discount reward was 0.99. The agent which generated the replay buffers was trained with Adam with a batch size of 64, learning rate of 0.00025, β_1 = 0.9, β_2 = 0.999 and ϵ = 10^-8. Figure <ref> shows the return obtained for Breakout online with these settings. During offline training, the action executed by the agent is the action present in the replay buffer. During online training, the first 5000 iterations are used to accumulate experiences in the replay buffer, after which the training of the agent starts. During the first 5000 steps the agent is taking random actions. Afterwards, the actions are taken based on a decaying ϵ-greedy policy where ϵ decreases linearly from 1.0 to 0.1 for 100000 iterations. For C51, we used 51 atoms with V_MIN = -10 and V_MAX = 10.
The eigenvalues were logged at every 100 iterations. Whenever "Avg Return" was mentioned, that referred to a moving average of the return per episode. It was calculated based on the following routine: avg_return[i] = 0.99 * avg_return[i-1] + 0.01 * return_per_episode[i] where avg_return[0] = return_per_episode[0].
The datasets for Breakout and Space Invaders used for experiments are available https://github.com/riordan45/rl-edge-of-stabilityhere. In addition, we are able to provide datasets for performing similar experiments on Asterix, Freeway and Seaquest, the other games present in the Minatar testbed.
|
http://arxiv.org/abs/2307.04038v1 | 20230708195154 | Prediction of short stellar activity cycles using derived and established empirical relations between activity and rotation periods | [
"A. k. Althukair",
"D. Tsiklauri"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Department of Physics and Astronomy, School of Physical and Chemical Sciences, Queen Mary University of London,
Mile End Road, London, E1 4NS,
UK; [email protected], [email protected]
Physics Department, College of Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, PO Box 84428, Saudi Arabia
In our previous work, we searched for super-flares on different types of stars, focusing on G-type dwarfs using entire Kepler data to study statistical properties of the occurrence rate of super-flares. The said study also considered how the statistics change with stellar rotation period, which in turn, had to be determined. Using such new data, as a by-product, we found 138 Kepler IDs of F and G types main sequence stars with rotation periods less than a day (P_ rot<1 d). On one hand, previous studies have revealed short activity cycles in F-type and G-type stars and the question investigated was whether or not short-term activity cycles are a common phenomenon in these stars. On the other hand, extensive studies exist which establish empirical connection between a star's activity cycle and rotation periods. In this study, we compile all available Kepler data with P_ rot<1 d and derive, as well as use plausible, established empirical relations between P_ cyc and P_ rot with the aim to provide predictions for very short 5.13≤ P_ cyc≤ 38.14 d cases in a tabular form. As a result, we invite others to measure P_ cyc using monitoring program of stellar activity (e.g. activity-related chromospheric emission S-index) or similar means for the Kepler IDs found in this study in order put to test the derived and/or established empirical relations between P_ cyc and P_ rot. We also propose an alternative method for measuring very short P_ cyc, using flare-detection algorithms applied to future space mission data.
Prediction of short stellar activity cycles using derived and established empirical relations between activity and rotation periods.
A. k. Althukair
1,2
D. Tsiklauri
1
§ INTRODUCTION
The 11-year cycle of solar activity discovered by Schwabe in 1844 <cit.> is a significant phenomenon in solar and stellar physics. The cycle is manifested by a periodic change in solar activity, including the appearance of sunspots and changes in the Sun's magnetic field
on this time-scale. Smoothed sunspot numbers have been widely used as a proxy for solar activity over the past four centuries <cit.>.
The idea of the sunspot number was first introduced by <cit.> in the mid-19th century, and it has since become a standard measure for quantifying solar activity. These numbers reveal that there are almost regular cycles of about 11 years, reflecting the Sun's magnetic activity.
During the course of a solar cycle, the Sun experiences alternating periods of strong and weak activity known as solar maximum and minimum <cit.>. As the solar cycle progresses, the magnetic field becomes more complex and twisted. This results in the emergence of sunspots, which are dark areas on the surface of the Sun with intense magnetic fields, vary in size and can last from days to several months <cit.>, decaying into bright areas called faculae formed by smaller magnetic concentrations <cit.>. During the active phase of the solar cycle (solar maximum), the number and size of sunspots increase and appear at the solar surface. At the same time, bright faculae also become more prominent. As the cycle progresses, the number of sunspots decreases, the overall brightness of the Sun remains relatively constant and the Sun enters its least active phase of the solar cycle (solar minimum). These dark and bright features on the Sun's surface contribute to the variability in the total solar irradiance (TSI) <cit.>. Therefore, the TSI data can capture the combined effects of the evolving dark and bright features during the solar cycle <cit.>.
Cyclic activity has been observed in stars other than the Sun through long-term brightness changes associated with increased occurrence of active regions on their surfaces or in their lower stellar atmospheres <cit.>. The Mount Wilson HK program, which started in 1966 and lasted until the end of the 20th century, was the first to conduct a systematic search for activity cycles in main sequence stars <cit.>. By analysing
chromospheric emission in the spectral lines of Ca II H&K, as the magnetic field connected to active regions on the surfaces of stars plays an important role in transporting energy into the chromosphere. This increased energy input into the chromosphere leads to enhanced chromospheric emission, which can be observed prominently in the cores of the Ca II H&K spectral lines <cit.>. The measure of the chromospheric emission strength is described by the Mount Wilson S-index <cit.> or by the quantity R^'_ HK <cit.>. <cit.> investigated the chromospheric activity levels in main-sequence F-G-K-M stars by measuring the chromospheric CaII H&K emission fluxes. They noted that these stars display varying degrees of chromospheric activity and observed a noticeable lack in the number of F-G stars displaying intermediate activity compared to both highly active and less active stars. They suggested that the absence of such stars could be attributed to a decline in chromospheric activity as the stars age. <cit.> examined the relationship between chromospheric activity, specifically the R^'_ HK activity index, and the Rossby number Ro = P_ rot/τ_ c for a sample of main-sequence stars of spectral type F or later. Where P_ rot is the rotational period of the star and τ_ c is a theoretically derived convective turnover time. They found a strong correlation between the R^'_ HK activity index and the Rossby number. However, in contrast to the findings of <cit.>, <cit.> did not find any signs of the "Vaughan-Preston gap". <cit.> investigated the empirical relation between rotation period P_ rot, spectral type, and activity cycle period P_ cyc for 13 slowly rotating main-sequence stars. They found that the cycle period is related to the rotation period by a power law: P_ cyc∝ P_ rot^ 1.25. This relationship can alternatively be expressed as
P_ cyc≈ Ro^1.25≈ (P_ rot/τ_ c)^1.25 <cit.>. For stars of spectral type G0-K5, <cit.> observed a pattern of variation in the rotation period and the measure of chromospheric activity (S-index). Their research revealed that the chromospheric activity levels were high in young stars with fast rotation periods. Chromospheric activity and rotation rates of stars in the intermediate age range were average. Alternatively, the chromospheric activity levels were low in old stars with slow rotation periods. This observation supports the existence of the Vaughan-Preston gap <cit.>, indicating that chromospheric activity and rotation change over time as the stars age. The relation between rotation periods and activity cycles of a sample of stars was investigated by <cit.>, who discovered a correlation between the two variables. In particular, they observed that stars with slower rotation periods exhibit longer activity cycles, while stars with faster rotation periods tend to have shorter activity cycles. According to <cit.>, the relation between rotation periods and cycle lengths is more evident for stars with shorter activity cycles. However, the association becomes less clear for longer cycle lengths when considering more recent findings on the time variability of solar cycles.
<cit.> investigated the behaviour and activity cycles of four fast-rotating late-type stars with (P_ rot≤ 0.5 days), highlighting the presence of 1-year cycles and the correlation between rotation rate and cycle length. <cit.> used the short-term Fourier transform, a time-frequency analysis method, to examine the light curves of 39 fast-rotating late-type active stars with rotation periods of less than one day. Nine of the selected stars showed indications of activity cycles with periods between 300 and 900 days. These cycles were inferred from the changing typical latitude of the starspots on the stellar surface and due to the differential rotation of the stellar surface, the observed rotation period of the stars varied over the activity cycle. This variation in the rotation period was attributed to the movement and evolution of starspots at different latitudes of the star. <cit.> used four years of Kepler data to determine the cyclic variations in the amplitude of the light curve and the rotation period of stars by analysing a sample of active stars and calculating the rotation period and variability amplitude for each star in each Kepler quarter. Then they searched for periodic variations in these time series using Lomb-Scargle periodograms and employed a false alarm probability (FAP) criterion for selection. The study's findings indicate that amplitude periodicities, associated with underlying activity cycles, are detected in 3203 stars with cycle periods ranging from 0.5 to 6 years and rotation periods ranging from 1 to 40 days. According to <cit.> analysis of new observations and previous data, the longer and shorter cycle periods closely match expectations based on the average activity levels and rotation periods, which indicates a connection between stellar activity and stellar rotation. <cit.> reported an activity cycle of 11.6 years in the F-type star τ Boo (HD 120136). However, the authors assigned a FAP "poor" grade to this finding. <cit.> detected an activity cycle with a duration of 122 days in their analysis of the S-index data of τ Boo. This short activity cycle periods suggest that τ Boo may exhibit variations on a relatively short timescale. <cit.> focused on exploring the presence of short-term activity cycles in F-type stars, specifically using S-index time series data obtained with the TIGRE telescope. They utilized the generalized Lomb-Scargle periodogram method to analyze the data and search for periodic variations with a maximum length of 2 years. Their sample of F-type stars identified four stars that exhibited cyclic variations with periods of less than a year. However, compared to solar-type stars with well-developed cyclic activity, the amplitude of these short-term cyclic variations in F-type stars was smaller. Based on their findings, <cit.> concluded that the activity behaviour among F-type stars differs from that of the Sun and cooler main sequence stars. By studying 44 main-sequence stars with confirmed activity cycles, and rotation periods, <cit.> examined the relation between the length of the activity cycle and the Rossby number (Ro). They used empirical turnover periods based on the B-V colour index to calculate Rossby numbers, from which they deduced an empirical relationship between the Rossby number and the cycle duration. The study showed linear behaviour in the double-logarithmic relationship between the Rossby number and cycle period. 
In addition, the relative convection zone depth was found to be correlated with cycle length and convective turnover time.
In paper I <cit.>, we looked for super-flares on different types of stars and focused on G-type dwarfs using entire Kepler data to study
various aspects of statistical properties of the occurrence rate of super-flares.
In paper II <cit.>, as a by-product, we found thirteen peculiar Kepler IDs that are Sun-like, slowly rotating with rotation periods of 24.5 to 44
days, and yet can produce a super-flare and six G-type and four M-type Kepler IDs with exceptionally large amplitude super-flares. As noted previously,
these detections defy our current understanding of stars and hence deserve a further investigation.
In this paper III, the last in this series, we use an empirical connection between a star's activity cycle and rotation periods for a sample of F and G main sequence stars with rotation periods of less than one day.
Here our aim is to provide predictions for very short activity cycle cases in a tabular form and to investigate in the future whether these short activity cycles are a common phenomenon in these stars or not. Section <ref> provides the target selection method. Section <ref> presents the method used in this work which includes the empirical connection relation between P_ cyc and P_ rot. The main findings of the study are presented in Section <ref>, and section <ref> concludes this work with our main conclusions.
§ RELATION BETWEEN ACTIVITY CYCLE AND ROTATION PERIOD
<cit.> model of the α–Ω dynamo introduced the concept of migratory dynamo waves, which play a crucial role in generating the observed solar cycle <cit.>. The α–effect, arising from the twisting of rising magnetic field tubes due to Coriolis forces, creates the poloidal magnetic field required for the next sunspot cycle. This effect is responsible for the reversal of magnetic polarities between successive cycles <cit.>. On the other hand, the Ω–effect, resulting from the differential rotation of the star, generates a toroidal magnetic field by stretching the magnetic field lines in a longitudinal direction. The combination of the α–effect and the Ω–effect leads to the formation of migratory dynamo waves, where the toroidal field is periodically regenerated and transformed into the poloidal field through the action of the α–effect. These migratory dynamo waves propagate and interact within the star's convective zone, causing the cyclic variations in the magnetic field <cit.>.
According to <cit.>,
the magnetic cycle period for G and K dwarfs with convective turnover times (τ_ c) between 11 and 26 days, is found to be proportional to the rotation period as follows:
1/P_ cyc∝(τ_ c / P_ rot)^n,
where n is 1.25.
We quote theoretical prediction of the relation between
star's activity cycle and its rotation periods, which is
equation (6) in <cit.>:
P_ mag_cyc=2 P_ cyc≈√(R_⋆l)P_ rot.
According to the simple theoretical arguments quoted by <cit.>,
the magnetic cycle period P_ mag_cyc is proportional to the rotation period P_ rot. However, there is a modifying factor, l/R_⋆ the relative depth of turbulence, which depends on the stellar structure, which itself may depend on the effective temperature or B-V colour index of the star. Also l here is the length scale of turbulence and R_⋆ is the stellar radius.
§ METHODS
In our study, we adopt the terminology used by <cit.> to categorize branches into two types: the "inactive" branch, referred to as the short-cycle branch P_ cyc^S and the "active" branch, referred to as the long-cycle branch P_ cyc^L. These terms were introduced first time in <cit.>. According to <cit.> this notation is more accurate and aligned with the actual characteristics of the branches. Therefore, they suggested that these terms should be used in future studies to refer to the two branches.
§.§ Reproduction of <cit.> P_ cyc^S vs. P_ rot Fit
In this subsection, we reproduced the fit between P_ cyc^S and P_ rot data from <cit.> to derive the fit parameters. First, we collected the data in Table<ref>, the first 32 rows, from <cit.>, where we obtained the 32 activity cycles on the short-cycle branch P_ cyc^S calculated by <cit.> along with the 32 corresponding rotation periods P_ rot. These cycle lengths and rotation periods can be found in Table 1. Then we plotted in logarithmic scale the rotation periods on the x-axis versus the calculated cycle period on the y-axis as shown in Figure <ref>, using the empirical relation in <cit.> between the cycle periods and rotation periods in logarithmic terms that is given by:
log P_ cyc≈ a+n log P_ rot.
Since the theoretical relation, equation <ref>
implies a linear connection between P_ cyc and P_ rot, we fitted the data using Python least-square fit, a common technique for determining the best-fitting parameters for a given model, for two different slope adjustments as in <cit.>. Also, we computed the R^2 coefficient of determination to measure how well the model fits the data. A R^2 value of 1 means that the predictions from the regression fit the data perfectly. First, we set the slope n to be 1 and deduced the value of a parameter as a = 1.923 ± 0.025 and the value of R^2= 0.89. The red line in Figure <ref> illustrates this trend. Then we repeated the fit by treating slope n as an independent variable to derive a and n values as equation now <ref> becomes:
log P_ cyc≈ (1.458 ± 0.074)+(1.348 ± 0.054) log P_ rot.
and the value of R^2= 0.95. The blue line in Figure <ref> represents this fit. It is obvious that the n = 1 relation does not fit the short periods data, as <cit.> pointed out.
By comparing the value of a and n parameters here with <cit.>, we find slight differences between these values. As in <cit.> a = 1.918 ± 0.027 for the fit of n=1 , while for the fit where n is treated as a free parameter, a= 1.488 ± 0.092 and n= 1.324 ± 0.067. We noticed two additional points in Figure 1 of <cit.>, which belong to stars HD 100563 and HD 201092. These stars have rotation periods of 7.73 ± 0.04 and 37.8 ± 7.4, respectively, corresponding to cycle lengths of 0.609 ± 0.009 and 11.7 ± 0.4, respectively. Their P_ cyc were taken from <cit.> and <cit.>, respectively, and have not been calculated by <cit.>. We do not have these two points because our plot include only data computed by <cit.>. We also noticed that the locations of some points in our plot differ from those in <cit.> plot, despite using the same data set. We believe these reasons led to the slight difference in the fit parameters between this work and <cit.>.
§.§ Data representation and fit
In this subsection, we repeat the fit between P_ rot and P_ cyc^S using a larger data sample taken from other previous studies. This sample, shown in Table<ref>, contains 94 P_ rot and their 94 corresponding P_ cyc^S. The star ID, spectral type (Sp), color index (B-V), effective temperature (T_ eff), P_ rot and P_ cyc are shown in Table<ref>. Unavailable data is left blank in the table. 32 P_ cyc^S were calculated by <cit.>, the first 32 lines in Table<ref>. The other P_ cyc^S were taken from <cit.>. It should be noted that the 32 stars IDs for which their P_ cyc^S were calculated by <cit.> were used again in the fit but with the P_ cyc^S calculated by others. For illustration, we used two P_ cyc^S values for 32 stars IDs, one was calculated by <cit.> and the other was calculated by another work, except for KIC 10644253, for which we collected three P_ cyc^S calculated by <cit.>. Also, HD 16673 has multiple entries due to the multiple sources, as shown in Table <ref>. References for each P_ rot and P_ cyc^S are shown in Table <ref>.
In the same way as in subsection <ref>, we used the empirical relation between P_ rot and P_ cyc in logarithmic scale given by equation <ref> using the new data set in Table<ref> to produce the fit parameters a and n. We performed a least-square fit in Python to fit the data using two different slope adjustments again, one with a fixed slope n of 1 and another with the n treated as a free variable. This fit is shown in Figure <ref>. For the fit with a fixed slope of 1, we determined the value for the parameter a= 1.889 ± 0.023 and R^2= 0.83. This trend is shown by the red line in Figure <ref>. While for the fit with the slope n treated as a free variable, we deduced values for the parameters a and n as a=1.583 ± 0.064, n=1.257 ± 0.051 and R^2= 0.87. This fit is represented by the blue line in Figure <ref>. So that equation<ref> becomes now
log P_ cyc≈ (1.583 ± 0.064)+(1.257 ± 0.051) log P_ rot.
We note that our value of n=1.257 ± 0.051 with the extended dataset is
closer to <cit.>'s n=1.25 than <cit.>'s n= 1.324 ± 0.067.
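For concreteness, the least-squares fit of equation <ref> can be sketched as follows; the arrays are placeholders for the rotation periods and cycle lengths collected in Table 1 (with both periods expressed in days, which is the unit convention implied by the coefficients quoted above), and the routine is illustrative rather than the exact script used for the figures.

import numpy as np
from scipy.optimize import curve_fit

def fit_cycle_relation(p_rot_days, p_cyc_days, fixed_slope=None):
    # Least-squares fit of log P_cyc = a + n log P_rot (both periods in days).
    x, y = np.log10(p_rot_days), np.log10(p_cyc_days)
    if fixed_slope is not None:                      # slope n held fixed (e.g. n = 1)
        a_hat = np.mean(y - fixed_slope * x)
        params = (a_hat, fixed_slope)
    else:                                            # both a and n treated as free parameters
        popt, _ = curve_fit(lambda lx, a, n: a + n * lx, x, y)
        params = tuple(popt)
    residuals = y - (params[0] + params[1] * x)
    r_squared = 1.0 - np.sum(residuals**2) / np.sum((y - np.mean(y))**2)
    return params, r_squared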
list of star IDs with their parameters, used in previous studies.
Columns: HD/KIC, T_eff, B-V, τ_c, P_rot [d], Ref, P_cyc^S [yr], Ref
Sun 5777 0.642 33.94 25.4±1 1 10.3 15
HD 3651 5211 0.850 61.18 44 1 11.7 15
HD 4628 5120 0.890 65.19 38.5±2.1 1 9.9 15
HD 10476 5244 0.836 59.83 35.2±1.6 1 9.2 15
HD 10780 5321 0.804 56.87 22.14±0.55 2 5.6 15
HD 16160 5060 0.918 68.16 48±4.7 1 12.4 15
HD 16673 6183 0.524 18.02 5.7 3 0.9 15
HD 17051 6045 0.561 21.98 8.5±0.1 1 1.4 15
HD 22049 5140 0.881 64.27 11.1±0.1 1 2.6 15
HD 26965 5282 0.820 58.33 43 1 11.5 15
HD 30495 5804 0.632 32.16 11.4±0.2 1 1.6 15
HD 32147 4801 1.049 83.93 48 1 11.7 15
HD 43587 5876 0.610 28.58 22.6±1.9 4 10.4 15
HD 75332 6089 0.549 20.60 4.8 5 0.5 15
HD 75732 5167 0.869 63.05 37.4±0.5 6 9.7 15
HD 76151 5714 0.661 37.58 15 1 2.4 15
HD 100180 6013 0.570 23.06 14 1 3.4 15
HD 103095 5449 0.754 52.52 31 1 9.6 15
HD 120136 6245 0.508 16.54 3.05±0.01 7 0.3 15
HD 128621 5098 0.900 66.24 36.2±1.4 1 9.2 15
HD 140538 5645 0.684 42.51 20.71±0.32 8 4.5 15
HD 146233 5741 0.652 35.81 22.7±0.5 1 7.2 15
HD 149661 5265 0.827 58.98 21.1±1.4 1 5.3 15
HD 160346 4975 0.959 72.75 36.4±1.2 1 9 15
HD 165341 A 5188 0.860 62.16 19.9 1 4.9 15
HD 166620 5151 0.876 63.76 42.4±3.7 1 11.1 15
HD 185144 5366 0.786 55.26 27.7±0.77 2 7.3 15
HD 190406 5910 0.600 27.09 13.9±1.5 1 2.6 15
HD 201091 4764 1.069 86.64 35.4±9.2 1 8.3 15
HD 219834 B 5055 0.920 68.38 43 1 11 15
KIC 8006161 5234 0.840 60.21 29.8±3.1 1 7.7 15
KIC 10644253 5943 0.590 25.67 10.9±0.9 1 1.8 15
HD 16673 6183 0.524 18.02 7.4±0.07 5 0.85 5
HD 49933 3.45 5 0.58 5
HD 75332 6089 0.549 20.60 4.8 5 0.49 5
HD 100563 7.73 5 0.61 5
τ Boo 0.480 14.23 3.5 5 0.33 5
Kepler 87 12.59±0.03 9 3.5 16
KIC 10644253 6030 0.590 25.67 10.91±0.87 10 1.5 17
solar analog HD 30495 5826 0.632 32.16 11.36±0.17 11 1.67±0.35 11
solar analog HD 45184 5871 0.620 30.16 19.98±0.02 12 5.14 12
61 Cyg A HD 201091 4545 1.069 86.64 35.7±1.9 13 7.2±1.3 13
102712791 0.277 4.79 0.96±0.03 14 0.09±0.008 14
102720703 0.514 17.08 10.2±0.6 14 0.512±0.055 14
102721955 0.431 10.94 2.17±0.06 14 1.118±0.071 14
102723038 1.404 147.52 8.6±0.5 14 1.682±0.151 14
102726103 0.767 53.62 3.7±0.1 14 0.321±0.022 14
102738457 0.592 25.95 12.9±0.6 14 1.781±0.356 14
102749950 0.657 36.78 5.4±0.2 14 0.655±0.06 14
102750723 1.143 97.45 1.44±0.02 14 0.277±0.022 14
102754736 0.480 14.23 6.9±0.3 14 0.29±0.019 14
102758108 0.641 33.75 6.1±0.2 14 0.301±0.022 14
102770332 2.055 415.00 4.2±0.1 14 1.162±0.112 14
102770893 0.874 63.56 4.3±0.2 14 0.759±0.058 14
102777006 1.177 102.86 1.33±0.02 14 1.17±0.123 14
102778595 1.157 99.64 11.8±0.7 14 0.575±0.019 14
102780281 1.304 125.85 3±0.1 14 0.551±0.041 14
Sun 5778 0.660 37.38 25.4±1 1 11±2 1
HD 3651 5128 0.840 60.21 44 1 13.8±0.4 1
HD 4628 5035 0.890 65.19 38.5±2.1 1 8.6±0.1 1
HD 10476 5188 0.840 60.21 35.2±1.6 1 9.6±0.1 1
HD 16160 4819 0.980 75.21 48±4.7 1 13.2±0.2 1
HD 17051 6053 0.570 23.06 8.5±0.1 1 1.6 1
HD 22049 5152 0.880 64.17 11.1±0.1 1 2.9±0.1 1
HD 26965 5284 0.820 58.33 43 1 10.1±0.1 1
HD 30495 5780 0.630 31.82 11.4±0.2 1 1.7±0.3 1
HD 32147 4745 1.060 85.41 48 1 11.1±0.2 1
HD 76151 5675 0.670 39.44 15 1 2.5±0.1 1
HD 78366 5915 0.630 31.82 9.7±0.6 1 5.9±0.1 1
HD 81809 5623 0.800 56.51 40.2±3 1 8.2±0.1 1
HD 100180 5942 0.570 23.06 14 1 3.6±0.1 1
HD 103095 5035 0.750 52.19 31 1 7.3±0.1 1
HD 114710 5970 0.580 24.33 12.3±1.1 1 9.6±0.3 1
HD 128620 5809 0.710 48.98 22.5±5.9 1 19.2±0.7 1
HD 128621 5230 0.880 64.17 36.2±1.4 1 8.1±0.2 1
HD 146233 5767 0.650 35.42 22.7±0.5 1 7.1 1
HD 149661 5199 0.800 56.51 21.1±1.4 1 4±0.1 1
HD 160346 4797 0.960 72.86 36.4±1.2 1 7±0.1 1
HD 166620 5000 0.900 66.24 42.4±3.7 1 15.8±0.3 1
HD 190406 5847 0.610 28.58 13.9±1.5 1 2.6±0.1 1
HD 201091 4400 1.180 103.35 35.4±9.2 1 7.3±0.1 1
HD 201092 4040 1.370 139.77 37.8±7.4 1 11.7±0.4 1
KIC 8006161 5488 0.840 60.21 29.8±3.1 1 7.4±1.2 1
KIC 10644253 6045 0.590 25.67 10.9±0.9 1 1.5±0.1 1
HD 165341 A 5023 0.780 54.74 19.9 1 5.1±0.1 1
HD 219834 A 5461 0.800 56.51 42 1 21±1 1
HD 219834 B 5136 0.910 67.30 43 1 10±0.2 1
HD 10780 5321 0.804 56.87 22.14±0.55 2 7.53±0.16 2
HD 16673 6183 0.524 18.02 5.7 3 0.847±0.006 5
HD 43587 5876 0.610 28.58 22.6±1.9 4 10.44±3.03 4
HD 75732 5167 0.869 63.05 37.4±0.5 6 10.9 18
HD 185144 5366 0.786 55.26 27.7±0.77 2 6.66±0.05 2
HD 120136 6245 0.508 16.54 3.05±0.01 7 0.333±0.002 7
HD 140538 5645 0.684 42.51 20.71±0.32 8 3.88±0.02 8
Notes: The table illustrates a list of stars ID with their corresponding B– V values, effective temperature T_ eff, the convective turnover time τ_ c which was calculated by the relation in <cit.>, the rotation period P_ rot with the reference number and the short branch cycle period P_ cyc^S with the reference number.
References: (1) <cit.>, (2) <cit.>, (3) <cit.>, (4) <cit.>, (5) <cit.>, (6) <cit.>, (7) <cit.>, (8) <cit.>, (9) <cit.>,
(10) <cit.>, (11) <cit.>, (12) <cit.>, (13) <cit.>, (14) <cit.>, (15) <cit.>, (16) <cit.>, (17) <cit.>, (18) <cit.>.
§.§ Data Samples
One of the main challenges in studying the relation between cycle length and rotation period is the lack number of well-known and accurately measured activity cycles. This limitation introduces uncertainties in the derived empirical relations <cit.>. To overcome these challenges, it is crucial to obtain more reliable cycle periods, particularly for long-period cycles. Achieving this requires long-term time series observations of stars to gather comprehensive and accurate data on their activity cycles <cit.>. Therefore, when looking for activity cycles, it is more efficient to monitor fast-rotating objects, as cycles can be discovered within a few years of observation, as opposed to stars with longer rotation periods <cit.>. For this reason, we chose our sample for this study to include fast-rotating main-sequence stars of type F and G from Kepler data with well-known rotation periods of less than one day. First, we collected all Kepler IDs which has well-known rotation periods. We then selected targets with rotation periods of less than a day. Using Gaia Data Release 2 (Gaia-DR2), we identified F- and G-type main sequence stars by their effective temperatures and radius based on the Harvard Spectral classification. The ranges of the effective temperature are 6000-7500 K and 5200-6000 K for F and G types, respectively. We thus obtained a total of 811 Kepler IDs of F- and G- type stars with less than one day rotation period. By using the radius restriction of the main-sequence stars as 1.15-1.4 R_⊙ and 0.96-1.15 R_⊙ for F and G types, respectively, the final data sample reduced to 138 Kepler targets with a number of 83 F-type and 55 G-type main-sequence stars. 71.74% of the rotation periods for these stars were taken from <cit.>. 15.94% from <cit.>, 5.07% from <cit.>, 4.35% from <cit.> and 2.90% from <cit.>. These 138 Kepler targets are listed in Table <ref> with their effective temperature, radius, rotation period and the references for these rotation periods.
§ RESULTS
Using a data set of 138 Kepler IDs with P_ rot ranging from 0.202 d to 0.997 d, we provide a
prediction for the corresponding value of their P_ cyc^S, by applying the empirical relation between P_ cyc and P_ rot with the derived parameters in Equation <ref>. Hence we
obtained the predicted values of P_ cyc from
P_ cyc≈ 10^[(1.583 ± 0.064)+(1.257 ± 0.051) log P_ rot].
From equation <ref>, we calculated 138 P_ cyc for 83 F-type and 55 G-type main-sequence stars whose rotation period is less than a day. The shortest P_ cyc is equal to 5.13 d while the longest P_ cyc is equal to 38.14 d. All the 138 predicted P_ cyc are listed in Table <ref>.
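For illustration, the predicted cycle lengths can be reproduced directly from the fitted relation; the sketch below uses only the central values of the coefficients, so the quoted uncertainties are not propagated.

import numpy as np

def predicted_p_cyc_days(p_rot_days, a=1.583, n=1.257):
    # Central values of the fit in equation <ref> only.
    return 10.0 ** (a + n * np.log10(p_rot_days))

# The extremes of the sample reproduce the quoted range:
# predicted_p_cyc_days(0.202) ~ 5.1 d and predicted_p_cyc_days(0.997) ~ 38.1 d.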
lists of the 138 Kepler IDs with their parameters and predicted P_ cyc.
Columns (two stars per row): KIC, T_eff, R_⊙, P_rot [d], Ref, P_cyc [d], KIC, T_eff, R_⊙, P_rot [d], Ref, P_cyc [d]
757099 5521 1.05 0.36 1 10.60 6877871 6508 1.40 0.54 2 17.73
1028018 5544 1.14 0.62 2 21.03 6948098 6095 1.29 0.57 3 18.76
1721795 6534 1.31 0.89 2 32.93 6961285 5802 0.98 0.45 2 13.99
1872192 5316 0.98 0.67 2 23.31 6962901 5601 0.97 0.98 2 37.37
2557335 5568 1.01 0.24 2 6.20 7199002 6381 1.24 0.57 2 18.89
2558273 6673 1.35 0.99 2 37.85 7199013 5286 0.96 0.57 2 18.89
2715228 6374 1.30 0.99 1 37.80 7199037 6024 1.36 0.57 2 18.89
2715410 5997 1.11 0.90 1 33.53 7354297 5481 1.05 0.95 2 35.99
2849645 5424 1.06 1.00 2 38.14 7461022 6168 1.28 0.59 2 19.76
2985825 6783 1.23 0.94 3 35.18 7678509 6644 1.22 0.96 2 36.51
3124412 6302 1.21 0.93 1 34.94 7707736 5644 1.09 0.76 2 27.11
3241517 6283 1.34 0.78 3 28.19 7816211 6050 1.32 0.29 2 8.08
3352959 6476 1.37 0.76 2 27.07 7909399 6574 1.40 0.82 2 30.01
3356577 6746 1.39 0.63 4 21.58 7915824 6231 1.39 0.74 2 26.22
3448722 5872 1.13 0.41 2 12.60 7973882 5512 1.06 0.35 2 10.27
3448817 6792 1.33 0.95 4 35.78 8016369 6734 1.34 0.77 1 27.56
3459311 5789 1.05 0.98 2 37.37 8043256 6680 1.27 0.93 2 34.71
3550386 6006 1.30 0.32 2 9.10 8144578 6639 1.32 0.59 2 19.85
3836772 6210 1.32 0.69 2 23.88 8197275 5604 1.14 0.44 2 13.52
3869099 5607 1.01 0.29 2 7.94 8264155 6738 1.33 0.91 4 34.08
4175618 5369 1.05 0.41 2 12.60 8264659 5417 1.12 0.97 1 36.84
4283120 6202 1.25 0.52 2 16.71 8285970 5639 1.14 0.57 2 18.72
4374659 5824 1.03 0.23 2 5.87 8313378 6624 1.31 0.54 2 17.73
4386947 5681 1.14 0.65 2 22.10 8382253 5695 1.01 0.63 3 21.37
4464528 6392 1.38 0.22 2 5.81 8393626 5893 1.15 0.43 2 13.06
4464530 6545 1.30 0.22 2 5.77 8420730 5770 1.08 0.25 2 6.53
4570231 5661 0.99 0.54 1 17.64 8651921 6473 1.29 0.95 2 35.65
4660562 5677 0.96 0.77 1 27.56 8687209 5650 1.00 0.77 1 27.56
4762130 6202 1.35 0.80 2 28.78 8804962 6586 1.23 0.90 2 33.53
4774370 6546 1.36 0.93 2 34.85 8892124 5263 1.01 0.72 2 25.38
4816098 6239 1.29 0.95 1 35.89 8916436 6566 1.35 0.87 1 32.13
4850965 5503 1.04 0.61 2 20.40 9146690 5387 1.11 0.72 2 25.20
4949214 6511 1.36 0.92 2 34.52 9206726 6876 1.31 0.46 4 14.61
4949350 6587 1.40 0.88 2 32.37 9306290 5571 1.04 0.82 2 29.97
4949766 6587 1.39 0.81 2 29.19 9393015 5877 1.01 0.24 2 6.40
5038288 5785 0.99 0.88 2 32.51 9456932 5875 0.97 0.53 2 17.24
5107198 6077 1.36 0.36 2 10.67 9474101 5945 1.10 0.21 2 5.32
5273178 6774 1.32 0.88 2 32.65 9594038 6694 1.31 0.94 4 35.56
5397765 6251 1.34 0.94 2 35.47 9640204 6620 1.33 0.53 2 17.32
5426665 6323 1.38 0.39 2 11.80 9640472 6076 1.34 0.34 2 9.68
5444276 6475 1.31 0.71 2 24.71 9710612 5867 1.08 0.39 2 11.80
5450307 6398 1.24 0.99 3 37.85 9730249 6479 1.34 0.91 2 33.77
5480545 6535 1.31 0.93 2 35.09 9896552 6279 1.26 0.87 1 32.13
5514866 5487 0.97 0.28 2 7.66 9897710 5840 1.08 0.43 2 13.21
5514871 5220 1.06 0.28 2 7.66 9965888 5589 1.13 0.31 2 8.82
5543840 6518 1.20 0.82 2 29.69 9970838 6429 1.25 0.96 2 36.42
5623538 6729 1.32 0.99 1 37.80 10023062 6469 1.38 0.89 2 33.11
5623852 5886 1.10 0.57 2 18.89 10134084 5926 1.00 0.55 5 18.06
5629449 6897 1.31 0.71 1 24.89 10490282 5504 1.05 0.79 2 28.42
5646176 6302 1.20 0.99 1 37.80 10614890 5283 1.06 1.00 2 38.14
5795235 6517 1.36 0.91 2 34.00 10809099 6051 1.31 0.91 2 33.91
5898014 6697 1.35 0.83 2 30.20 11017401 5648 1.09 0.80 2 28.96
5988566 6299 1.20 0.44 2 13.52 11018874 6454 1.30 0.99 2 37.99
6114118 6234 1.24 0.94 2 35.32 11247377 6184 1.38 0.40 2 12.02
6114140 6384 1.16 0.93 3 35.13 11349677 6076 1.23 0.84 1 30.75
6145032 6315 1.28 0.81 1 29.37 11400413 6781 1.34 0.76 4 27.27
6149358 6660 1.28 0.89 2 32.93 11498689 5464 1.10 0.31 2 8.78
6219870 5663 1.05 0.81 1 29.37 11653059 6160 1.26 0.29 2 8.08
6224148 6230 1.18 0.20 2 5.13 11924842 5494 1.13 0.84 5 30.75
6385867 5306 1.06 0.58 1 19.30 11969131 6444 1.23 0.63 1 21.42
6386598 6658 1.37 0.76 2 27.20 12067121 6211 1.33 0.43 5 13.25
6391602 5782 0.99 0.42 2 12.83 12108612 5695 1.09 0.71 2 24.76
6421219 6191 1.36 0.79 2 28.51 12119534 5296 0.98 0.64 2 21.97
6449077 6366 1.31 0.94 2 35.51 12121738 6134 1.31 0.73 2 25.73
6529902 6604 1.38 0.29 2 8.08 12157161 6513 1.26 0.78 2 27.79
6693864 6846 1.35 0.86 1 31.67 12157799 6117 1.17 0.89 5 33.07
6836589 5628 1.15 0.73 2 25.91 12354328 5251 0.97 0.81 2 29.33
6846595 6718 1.26 0.99 1 37.80 12356839 5605 1.14 0.35 2 10.05
6854461 6547 1.39 0.95 3 36.03 12418959 6427 1.36 0.78 2 28.10
Notes: Effective temperature T_ eff and radius R_⊙ was taken from (Gaia-DR2).
References: (1) <cit.>, (2) <cit.>, (3) <cit.>, (4) <cit.>, (5) <cit.>.
After predicting the values of the activity cycles for our extended, compared to <cit.>, data sample, we wish to examine the theoretical prediction given by Equation 2 on short P_ cyc < 1 yr.
This is because the latter equation is a theoretical prediction, based on first physical principles,
as opposed to empirical fit, which lacks any theoretical or conceptual justification.
Therefore, we focused on the activity cycles derived from previous studies, as presented in Table 1. We chose 20 stars whose P_ cyc is less than a year and plot the fit between P_ rot and P_ cyc as shown in Figure <ref> using a simple linear regression without an intercept given by
P_ cyc [ yr]= n P_ rot [ d].
We obtained the slope n= 0.081 ± 0.009 and an R^2 value of 0.997, which indicates a good fit, despite the large scatter.
Note that P_ cyc here is in years, as in Figure 14 from <cit.>.
Therefore, for the lower and upper bounds of our
138 Kepler IDs, with P_ rot ranging from 0.202 d to 0.997 d,
this simple, theoretically justified equation predicts
P_ cyc=0.081×0.202×365.25=5.98 d and P_ cyc=0.081×0.997×365.25=29.50 d,
which are not very different from the values of 5.13 d and 38.14 d, respectively,
obtained by applying the more accurate power-law fit of equation <ref>.
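To make the comparison concrete, the following short Python sketch (not the authors' code; the coefficient values are copied from the text, and the power-law relation is assumed to take both periods in days, which reproduces the quoted 5.13 d and 38.14 d) evaluates both sets of P_ cyc estimates:
```python
import math

def pcyc_linear_days(p_rot_days, n=0.081):
    # intercept-free linear relation P_cyc [yr] = n * P_rot [d], converted to days
    return n * p_rot_days * 365.25

def pcyc_powerlaw_days(p_rot_days, a=1.583, n=1.257):
    # power-law relation log10(P_cyc) = a + n * log10(P_rot), both periods in days
    return 10.0 ** (a + n * math.log10(p_rot_days))

for p_rot in (0.202, 0.997):  # lower and upper P_rot bounds of the 138-star sample
    print(f"P_rot = {p_rot} d -> linear: {pcyc_linear_days(p_rot):.2f} d, "
          f"power law: {pcyc_powerlaw_days(p_rot):.2f} d")
```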
Finally, we examine the convective turnover time, τ_c, vs.
B-V colour index, as in Figure 3 from <cit.>.
In general, direct measurements of the convective turnover time are not possible; however, it can be estimated by analysing stars' rotation and activity data.
As pointed out by <cit.>, scaling the rotation periods with a colour- or mass-dependent τ_c can reduce scatter in the relation between rotation and activity, leading to a broken power-law fit between activity and the Rossby number, as e.g. in <cit.>.
<cit.> present a comprehensive study of the convective turnover time, τ_c, and its dependence on the stellar metallicity and age of main-sequence stars with masses between 0.6 and 1.6 M_⊙, and they also
remark that there is substantial variation between the different models:
e.g. <cit.>, using chromospheric and coronal data, obtained a significantly flatter curve for B-V > 0.8 than the widely-used <cit.>; see figure 4 from <cit.>.
We plot the convective turnover time, τ_c, vs.
B-V colour index in Figure <ref>.
Figure <ref> uses the following expressions
for the dependence of the convective turnover time τ_c on the B-V colour index, as derived from <cit.>:
logτ_c = (1.06±0.07) + (2.33±0.37) ((B-V) - 0.44)
for 0.44 ≤ B - V ≤ 0.71. In the case when B - V > 0.71 then
logτ_c = (1.69±0.12) + (0.69±0.13) ((B-V) - 0.71).
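As an illustration, this piecewise relation can be evaluated with the short sketch below (central coefficient values only; the quoted uncertainties are omitted, and the function name is our own):
```python
def log_tau_c(b_minus_v):
    """Return log10(tau_c) for a given B-V colour (relation quoted for B-V >= 0.44)."""
    if b_minus_v < 0.44:
        raise ValueError("relation above is given for B-V >= 0.44")
    if b_minus_v <= 0.71:
        return 1.06 + 2.33 * (b_minus_v - 0.44)
    return 1.69 + 0.69 * (b_minus_v - 0.71)

for bv in (0.5, 0.71, 1.0):
    print(bv, round(10 ** log_tau_c(bv), 1))  # tau_c in the same units as the fit
```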
As can be seen in Figure <ref>, our range of B-V colour is larger compared to the data from <cit.>.
§ CONCLUSIONS
In this work, we studied the empirical relation between
stellar activity cycle and rotation period.
First, we reproduced the fit between P_ rot and P_ cyc using the data of <cit.>
and obtained the following fit parameters,
log P_ cyc≈ (1.458 ± 0.074) + (1.348 ± 0.054) log P_ rot,
which differ slightly from the values of <cit.>,
a= 1.488 ± 0.092 and n= 1.324 ± 0.067, for
reasons unknown to us.
Then, using a larger data set made up of 94 P_ rot values and their 94 associated P_ cyc values taken from prior studies, we re-examined the fit between P_ rot and P_ cyc and obtained the following fit parameters,
log P_ cyc≈ (1.583 ± 0.064)+(1.257 ± 0.051) log P_ rot.
Using these new parameters, we applied this relation to a sample of 83 F-type and 55 G-type main-sequence stars with rotation periods of less than one day, in order to provide tabular predictions for cases with very short activity cycles and
to enable future work to determine whether or not such short activity cycles are a common occurrence in these stars.
As a result, we derived 138 predicted P_ cyc values ranging from 5.13 d to 38.14 d, which are listed in Table <ref>.
The usefulness of measuring short stellar activity cycles
hinges on two main general difficulties:
(i) If a monitoring program of stellar activity (e.g. the activity-related chromospheric emission S-index or similar) is used,
as in references such as <cit.> or <cit.>, then the cadence time of the observations is too long:
e.g. according to table 2 from the latter reference, the cadence could be 87 observations per year, i.e. 365/87 ≈ 4 days. Resolving activity cycles with 5.13≤ P_ cyc≤ 38.14 d with such a cadence would be nearly impossible.
(ii) If Kepler light curves are used, e.g. for plotting the number of flares per day vs. time, then a large number of flare detections is necessary for reliable statistics. However, the problem is the long cadence, 30 minutes, of the mainstream Kepler data. The photometer used by Kepler is sensitive to wavelengths ranging from 400 to 865 nm, covering the entire visible spectrum and a fraction of the infrared. The accuracy of the Kepler photometer is approximately 0.01%, or 0.1 mmag, when 30-minute integration times are used for stars with a magnitude of 12. Kepler's 30-minute integrations detected flare amplitudes of less than 0.1% of the stellar value and energies of 2×10^33 ergs. The duration of the flares ranged from one to three hours, with a rapid increase followed by a slow, exponential decline <cit.>. When Kepler data are taken at the higher cadence or sampling rate of one minute, the accuracy of the measurements decreases. However, this higher cadence enables Kepler to detect flares that are too brief to be detected reliably using the main 30-minute integrations. With the one-minute cadence, Kepler can detect flares with energies as low as 10^32 ergs <cit.>.
It is worth noting that earlier studies exist, using different observations, where the energy involved in the observed transient brightenings is estimated to range from 10^25 to 10^29 erg <cit.>. Also, as far as the Sun is concerned, studies exist <cit.> which consider flare frequency as a function of flare energy in the range 10^27 to 10^31 erg, but this is applicable to the Sun only.
In order to have good statistics for the Kepler IDs considered, we need to detect flares with energies of 10^27-32 ergs in order to see the variation of the number of flares per day on a time scale of 5.13≤ P_ cyc≤ 38.14 d.
To achieve this goal, a new space mission is necessary, with a short time cadence (< 1 minute) and photometric accuracy < 0.01%.
A typical example of such proposed sample data from the space mission is shown in figure <ref>.
An alternative option could be a shorter-cadence, ground-based S-index monitoring program of stellar activity with a cadence of ≈ 1 d or less. However, it is unclear
whether this is technically feasible.
In any case, the present study provides predictions for 5.13≤ P_ cyc≤ 38.14 d, and
we hope that future space- or ground-based observational missions will put our predictions to the test.
Until such time, the jury is still out.
§ ACKNOWLEDGEMENTS
Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts.
The authors would like to thank Deborah Kenny of STScI for kind assistance in obtaining the data, and Cozmin Timis and Alex Owen of Queen Mary University of London for assistance with data handling at the Astronomy Unit.
A. K. Althukair wishes to thank Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia and
Royal Embassy of Saudi Arabia Cultural Bureau in London, UK for the financial support of her PhD scholarship, held at Queen Mary University of London.
§ DATA AVAILABILITY
Some of the data underlying this article were accessed from Mikulski Archive for Space Telescopes (MAST) <https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html>. This paper also has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. The derived data generated in this research will be shared on reasonable request to the corresponding author.
|
http://arxiv.org/abs/2307.04338v1 | 20230710043023 | Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey | [
"Dongqi Fu",
"Wenxuan Bao",
"Ross Maciejewski",
"Hanghang Tong",
"Jingrui He"
] | cs.LG | [
"cs.LG",
"cs.CR"
] |
Privacy-Preserving Graph Machine Learning
from Data to Computation: A Survey
Dongqi Fu^†, Wenxuan Bao^†, Ross Maciejewski^, Hanghang Tong^†, Jingrui He^† (the first two authors contributed equally to this research)
^†University of Illinois Urbana-Champaign
^Arizona State University
[email protected], [email protected], [email protected], [email protected], [email protected]
August 12, 2023
=============================================================================================================================================================================================================================================================================================================================
In graph machine learning, data collection, sharing, and analysis often involve multiple parties, each of which may require varying levels of data security and privacy. To this end, preserving privacy is of great importance in protecting sensitive information.
In the era of big data, the relationships among data entities have become unprecedentedly complex, and more applications utilize advanced data structures (i.e., graphs) that can support network structures and relevant attribute information. To date, many graph-based AI models have been proposed (e.g., graph neural networks) for various domain tasks, like computer vision and natural language processing.
In this paper, we focus on reviewing privacy-preserving techniques of graph machine learning. We systematically review related works from the data to the computational aspects.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information (e.g., graph model parameters) to realize the optimization-based computation when data sharing among multiple parties is risky or impossible.
In addition to discussing relevant theoretical methodology and software tools, we also discuss current challenges and highlight several possible future research opportunities for privacy-preserving graph machine learning. Finally, we envision a unified and comprehensive secure graph machine learning system.
§ INTRODUCTION
According to the recent report from the United Nations [<https://press.un.org/en/2022/sc15140.doc.htm>], strengthening multilateralism is indispensable to solve the unprecedented challenges in critical areas, such as hunger crisis, misinformation, personal identity disclosure, hate speech, targeted violence, human trafficking, etc.
Addressing these problems requires collaborative efforts from governments, industry, academia, and individuals. In particular, effective and efficient data collection, sharing, and analysis are at the core of many decision-making processes, during which preserving privacy is an important topic.
Due to the distributed, sensitive, and private nature of the large volume of involved data (e.g., personally identifiable information, images, and video from surveillance cameras or body cameras), it is thus of great importance to make use of the data while avoiding the sharing and use of sensitive information.
On the other side, in the era of big data, the relationships among entities have become remarkably complicated. The graph, as a relational data structure, attracts much industrial and research interest because it carries complex structural and attribute information. For example, with the development of graph neural networks, many application domains have obtained non-trivial improvements, such as computer vision <cit.>, natural language processing <cit.>, recommender systems <cit.>, drug discovery <cit.>, fraud detection <cit.>, etc.
Within the trend of applying graph machine learning methods to systematically address problems in various application domains, protecting privacy at the same time is indispensable <cit.>. To this end, we consider two complementary strategies in this survey, namely, (1) sharing faithfully generated graph data instead of the actual sensitive graph data, and (2) enabling multi-party computation without graph data sharing. Inspired by the above discussion, we focus on introducing two fundamental aspects of privacy-preserving techniques on graphs, i.e., privacy-preserving graph data and graph data privacy-preserving computation.
For the data aspect, privacy-preserving graph data, as shown in Figure <ref>, we focus on the following scenario: when publishing or sharing the graph data is inevitable, how can we protect (e.g., mask, hide, or perturb) sensitive information in the original data to make sure that the published or shared data can withstand external attacks (e.g., node identity disclosure and link re-identification)? Hence, in Section 2, we first systematically introduce various attackers [Throughout the paper, we use “attackers” to denote the attacks on graphs. There are also attacks designed not for graphs but for Euclidean data, for example; those are not in the scope of this paper.] (Subsection 2.1) and the background knowledge they need to execute attacks (Subsection 2.2). Then, we introduce the corresponding protection mechanisms and explain why they can address the challenges posed by attackers (Subsection 2.3). We also cover privacy protection mechanisms for graph statistical properties (other than the graph data itself) (Subsection 2.4). After that, we list several possible challenges for privacy-preserving graph data generation when facing complex structures and attributes, e.g., time-evolving graphs and heterogeneous information graphs (Subsection 2.5).
For the computation aspect, graph data privacy-preserving computation, we focus on the multi-party computation scenario where the input data is structured, distributed over clients, and exclusively stored (i.e., not shareable among others). Here, federated learning can be a quick-win solution. However, relational data structures (i.e., graphs) bring a significant challenge (i.e., non-IIDness) to the traditional federated learning setting. This means that the data from intra-clients and/or inter-clients can violate the independent and identically distributed assumption (i.e., the i.i.d. assumption) due to the presence of the complex graph features, whose data complexity hinders many existing federated learning frameworks from getting the optimal performance. Motivated by this observation, in Section 3, we first discuss the adaption of federated learning on graphs and the corresponding challenge from non-IIDness brought by graphs (Subsection 3.1), then we introduce how nascent graph federated learning research works to address the non-IIDness issues from three levels, i.e., graph-level federated learning (Subsection 3.2), subgraph-level (Subsection 3.3), and node-level (Subsection 3.4). Then, we list several challenges and promising research directions, including model heterogeneity and avoiding cross-client transmission (Subsection 3.5).
After we introduce privacy-preserving graph data and graph data privacy-preserving computation with their own methodologies, advances, software tools, limitations, and future directions. In Section 4, we envision the necessity of combing these two directions into privacy-preserving graph data privacy-preserving computation to meet any possibility of leaking sensitive information, to further achieve a comprehensive, well-defined, and end-to-end graph machine learning system. Finally, the paper is concluded in Section 5.
Relation with Previous Studies.
For the privacy-preserving graph data, we systematically review the privacy attackers and the corresponding privacy protection techniques, striking a balance between classic methods <cit.> and emerging solutions <cit.>, such as topology perturbation methods, deep generation methods, etc. Beyond that, we extend the privacy-preserving techniques review from the data level to the computation level, i.e., graph data privacy-preserving computation within the federated learning framework. Most of the existing federated learning reviews do not primarily concentrate on graph federated learning <cit.>. Recently, two survey papers <cit.> introduce two problem settings in graph federated learning and their corresponding techniques. They exclusively focus on graph federated learning solutions and ignore the connections to traditional federated learning. Thus, we start from various application scenarios and provide a comprehensive classification and exposition of graph federated learning. While our focus primarily revolves around graph federated learning, we also highlight its connections to and distinctions from traditional federated learning, aiming to present the big picture of this field. In addition to reviewing the two aspects (i.e., privacy-preserving graph data and graph data privacy-preserving computation), we also discuss the necessity and possibility of combining these two directions and propose several promising future research directions.
§ PRIVACY-PRESERVING GRAPH DATA
When making privacy-preserving graph data to publish or share, the ultimate goal is to protect the published graph data from various attacks by adversaries. To this end, we first introduce the different kinds of attackers, such as those aiming at node identity disclosure or sensitive link re-identification, in Subsection 2.1, and the necessary background knowledge in Subsection 2.2. Then, in Subsection 2.3, we introduce the corresponding privacy-preserving mechanisms, several of which are deliberately designed to defend against certain attackers, while others are general protections not aimed at specific attacks. The taxonomy is shown in Figure <ref>.
§.§ Privacy Attackers on Graphs
According to <cit.>, what the attackers aim to attack is that they (1) want to learn whether edges exist or not between specific target pairs of nodes and also (2) want to reveal the true identities of targeted users, even from just a single anonymized copy of the graph, with a surprisingly small investment of effort.
§.§.§ Category of Attackers
Attackers can be classified into the active attackers and passive attackers <cit.>.
The first category is active attackers, where the core idea is that the attackers actively plant certain structures into the graph before it is being published. Then, the attackers can identify victims in the published graph by locating the planted structures. For example <cit.>,
the attackers create a subgraph H containing k nodes and then use H to connect b target nodes in the original graph G (subgraph H should ideally be unique and recoverable in the published graph). After the original graph G is privacy-preserved (e.g., by masking or disturbing connections) and published as G', the attackers try to find H in G' and then determine those b nodes.
Active attackers usually need to access the original graph beforehand and then make corresponding active actions like creating new nodes, linking new edges, and planting subgraphs. The planting and recovery operations are usually computationally costly <cit.>. Therefore, another direction points to passive attacks and defense.
Passive attackers are based on the fact or the assumption that most entities (e.g., nodes and edges) in graphs usually belong to a unique, small identifiable graph. Then, different from active attackers, passive ones do not need to create new nodes and edges in the original but mostly rely on the observation of the published graph to identify victims. In the initial proposal of passive attacks <cit.>, a passive attacker (e.g., a node in a social network) needs to collude with other (k-1) nodes on the original graph, and the coalition needs to know the external information (e.g., their 1-hop neighbors' name in the social network), such that they can reconnect on the published graph to identify the victims. Here, we expand the scope of passive attacks to include the attackers whose core is observation plus little external information. For example, in <cit.>, an attacker knows the external background information like “Greg is connected to at least two nodes, each with degree 2” and tries to observe the candidate of plausible Greg in the published social network.
§.§.§ Goal of Attackers
The ultimate goals of most graph privacy attackers can be roughly divided into disclosing the node identity (e.g., name, DOB, and SSN in the social network) and the link existence (e.g., sensitive connections in the social network) <cit.>. Next, we formally introduce the general definition of these two goals.
Node Identity Disclosure.
The node identity disclosure problem arises in the scenario where attackers aim to identify a target node's identity in the published graph (which has usually been anonymized already). For example, in a published social network with usernames masked, node identity disclosure aims to identify which node is Greg <cit.>. More specifically, identity disclosure can be further divided into node existence disclosure (i.e., whether a target node exists in a published graph) and node property disclosure (i.e., partial features of a target node are disclosed, like its degree, distance to the center, or even sensitive labels) <cit.>.
Link Re-Identification.
In a given graph, edges may be of different types and can be classified as either sensitive or not. Some links (i.e., edges) are safe to release to the public, such as classmate or friendship relations, while some links are sensitive and should remain private and unpublished, like personal disease records with hospitals. The problem of link re-identification is defined as inferring or predicting sensitive relationships from anonymized graphs <cit.>. Briefly speaking, the adversary (or attacker) achieves the goal when it is able to correctly predict a sensitive link between two nodes, for example, if the attacker can figure out whether there is a transaction between two users, given the properties of the released financial graph. Also, there are more detailed categorizations of link re-identification beyond link existence, such as link weight and link type or labels <cit.>.
Compared with active attacks, passive attacks are typically efficient for adversaries to execute and do not require much prior interaction with the original graph. Thus, within the scope of passive attackers, achieving these attack goals (node identity disclosure or link re-identification) relies on observation of the published graph plus certain external background knowledge to further identify victims.[Node identity disclosure and link re-identification can also be achieved in active ways <cit.>, but in this paper, we focus on introducing the passive manners that achieve those goals.] Next, we focus on introducing what requirements passive attackers need to execute attacks passively.
§.§ Background Knowledge for Passive Attacks
Here, we first discuss some background knowledge that could contribute to the goal of node identity disclosure. Then, we list some background knowledge that could contribute to sensitive link re-identification attacks.
§.§.§ Background Knowledge for Node Identity Disclosure
In general, the background knowledge for achieving node identity disclosure is to help them to detect the uniqueness of victims (i.e., nodes in the published graph) and thus narrow down the scope of candidate sets to increase the successful attack probability. For example, assume that the attackers know some background knowledge ℋ about a target node, after that, the attackers observe the published graph and find 2 candidates satisfying the condition (i.e., ℋ), then the attackers have 50% confidence to reveal the identity of that target node in the published graph. Next, we introduce some methods to acquire background knowledge.
Vertex Refinement Queries <cit.>. These are interactive queries, which describe the local structure of the graph around a target node x. The initial query in vertex refinement queries is denoted as ℋ_0(x) that simply returns the label of node x in the labeled graph (or a constant ϵ in the unlabeled graph). And ℋ_1(x) returns the degree of node x. Then, iteratively, ℋ_i(x) is defined as the multiset of ℋ_i-1(·) queries on 1-hop neighbors of node x, which can be expressed as follows.
ℋ_i(x) = {ℋ_i-1(z_1), ℋ_i-1(z_2), …, ℋ_i-1(z_d_x)}
where d_x is the degree of node x. For example, in a social network, ℋ_2(Bob)={1,1,4,4} means that Bob has four neighbors their degrees are 1, 1, 4, and 4, respectively.
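For illustration, a minimal sketch of these iterative queries on a simple, unlabeled, undirected graph (stored as an adjacency dictionary; the representation and the function name are our own choices, not from the original paper) could look as follows:
```python
def vertex_refinement(adj, x, i):
    """Return H_i(x); multisets are encoded as sorted tuples so they are hashable."""
    if i == 0:
        return "eps"                    # constant label for an unlabeled graph
    if i == 1:
        return len(adj[x])              # H_1(x) is the degree of x
    # H_i(x) is the multiset of H_{i-1} values over the 1-hop neighbours of x
    return tuple(sorted(vertex_refinement(adj, z, i - 1) for z in adj[x]))

# Toy example: if Bob has four neighbours of degrees 1, 1, 4, 4,
# then vertex_refinement(adj, "Bob", 2) == (1, 1, 4, 4).
```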
Subgraph Queries <cit.>. These queries assert the existence of a subgraph around a target node. Compared with the above vertex refinement queries, subgraph queries are more general (i.e., the information is not exclusively occupied to a certain graph structure) and flexible (i.e., informativeness is not limited by the degree of a target node). In brief, the adversary is assumed capable of gathering some fixed number of edges around a target node x and figuring out what subgraph structure those collected edges can form. For example, still targeting Bob in a social network, when collecting 3 edges, attackers can find 3 distinct neighbors. And collecting 4 edges can find a tree rooted by Bob. Those existences of structures form H such that attackers can use them to reveal the identity of Bob. Also, different searching strategies can result in different subgraph structures. For example, based on collecting 3 edges from Bob, breadth-first exploration may result in a star subgraph, and depth-first exploration may end up with a three-node-line. We refer to <cit.>, where a range of searching strategies are tested to empirically illustrate the descriptive power of background knowledge.
Hub Fingerprint Queries <cit.>. First of all, a hub is a node that has a high degree and a high betweenness centrality (i.e., the proportion of shortest paths in the graph that include that node). A hub fingerprint is then the description of a node's connections to hubs. To be more specific, for a target node x, the corresponding hub fingerprint query ℋ_i(x) records the shortest distance to each hub in the graph, where i is the limit of measurable distance. For example, ℋ_1(Bob) = (1,0) means Bob is at distance 1 from the first hub and not connected to (i.e., not reachable within distance 1 from) the second hub, and ℋ_2(Bob) = (1,2) means that Bob is at distance 1 from the first hub and at distance 2 from the second hub.
Neighborhood Relationships Queries <cit.>. Targeting a node, if an adversary has background knowledge about its neighbors and the relationship among the neighbors, then the victim can be identified in the anonymized graph. To be specific, the neighborhood relationship query rely more on the isomorphism of the ego-graph (i.e., 1-hop neighbors) of a target node to reveal its identity, compared with iterative vertex refinement query <cit.> and general subgraph query <cit.>. For example, in a social network, if Bob has two close friends who know each other (i.e., are connected) and two close friends who do not know each other (i.e., are not connected), then this unique information obtained by the adversary can be used to find Bob in the published anonymized graph.
§.§.§ Background Knowledge for Link Re-Identification
Link Prediction Probabilistic Model <cit.>. This probabilistic model is proposed to determine whether a relationship between two target nodes. And different kinds of background information (i.e., observation) can be leveraged to formalize the probabilistic model, such as (1) node attributes, e.g., two social network users who share the same interest are more likely to be friends; (2) existing relationships, e.g., two social network users in the same community are more likely to be friends; (3) structural properties, e.g., the high degree nodes are more likely to connect in a graph; and (4) inferred relationships (i.e., a complex observation that is more likely based on the inference of the invisible relationship), e.g., two social network users are more likely to be friends if they both are close friends of a third user.
Mathematically, those above observations can be expressed for predicting the existence of a sensitive relation between node i and node j as P(e^s_ij|O), where e^s_ij stands for the sensitive relationship and O consists of several observations {o_1, …, o_n}. For example, if we use the second kind of information (i.e., existing relationships), then {o_1, …, o_n} is a set of edges between node i and node j with the edge type other than s, denoted as e^l_ij and l ∈{1, …, n} is the index of other edge relationships. To solve out P(e^s_ij|O), the noisy-or model <cit.> can be used as suggested by <cit.>, where each observation o_l∈{o_1, … , o_n} is considered as independent with each other and parameterised as λ_l∈{λ_1, … , λ_n}. Moreover, there is a leak parameter λ_0 to capture the probability that the sensitive edge is there due to other unmodeled reasons. Hence, the probability of a sensitive edge is expressed as follows.
P(e^s_ij = 1 | o_1, …, o_n) = 1 - ∏_l=0^n(1- λ_l)
where s in e^s_ij is the indicator of sensitive relationship, and the details of fitting the values of λ_l can be found in <cit.>.
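A small sketch of this noisy-or score is given below (assuming the observations have already been mapped to their fitted λ parameters; the numeric values used here are purely illustrative, not fitted to any real data):
```python
def noisy_or(lambdas, leak=0.01):
    """P(sensitive edge | observations) = 1 - (1 - leak) * prod_l (1 - lambda_l)."""
    p_no_edge = 1.0 - leak              # leak parameter lambda_0
    for lam in lambdas:
        p_no_edge *= (1.0 - lam)        # each observation independently "fails" to reveal the edge
    return 1.0 - p_no_edge

# e.g. two observed non-sensitive relations between i and j with fitted strengths 0.3 and 0.5
print(noisy_or([0.3, 0.5]))             # ~0.6535
```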
Randomization-based Posterior Probability <cit.>. To identify a link, this observation is based on randomizing the published graph G' and counting the possible connections over a target pair of nodes i and j. And those countings are utilized for the posterior probability to determine whether there is a link between nodes i and j in the original graph G. Formally, the posterior probability for identifying the link e_ij in the original graph G is expressed as follows.
P(e_ij = 1 | G'_s) = 1/N∑^N_s=11 (G'_s(i,j) == 1)
where the attacker applies a certain randomization mechanism on the published graph G' N times to get a sequence of G'_s, and s ∈{1, …, N}. In each G'_s, if there is an edge connects the target nodes i and j, then the indicator function 1 (G'_s(i,j) == 1) will count one.
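This Monte Carlo estimate can be sketched as follows (assumptions: the published graph G' is given as a set of sorted node pairs, and the attacker's randomization mechanism is supplied as a function mapping an edge set to a re-randomized edge set; both names are our own):
```python
def link_posterior(published_edges, randomize, i, j, n_samples=1000):
    """Fraction of randomized copies of G' that contain the edge (i, j)."""
    target = (min(i, j), max(i, j))
    hits = 0
    for _ in range(n_samples):
        sample = randomize(published_edges)   # one randomized copy G'_s
        hits += int(target in sample)
    return hits / n_samples
```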
§.§ Privacy-Preserving Mechanisms
Here, we discuss some privacy-preserving techniques that are deliberately designed for specific attackers and also some general protection techniques that are not targeting attackers but can be widely applied.
§.§.§ Protection Mechanism Designed for Node Identity Dislosure
In general, the protection mechanisms are proposed to enlarge the scope of candidates of victims, i.e., reduce the uniqueness of victims in the anonymized graphs.
k-degree Anonymization <cit.>. The motivation for k-degree anonymization is that degree distribution is highly skewed in real-world graphs, such that it is usually effective to collect the degree information (as the background knowledge) to identify a target node. Therefore, this protection mechanism aims to ensure that there at least exist k-1 nodes in the published graph G', in which k-1 nodes share the same degree with any possible target node x. In this way, it can largely prevent the node identity disclosure even if the adversary has some background knowledge about degree distribution. To obtain such anonymized graph G', the method is two-step. First, for the original graph G with n nodes, the degree distribution is encoded into a n-dimensional vector 𝐝, where each entry records the degree of an individual node; And then, based on 𝐝, the authors proposed to create a new degree distribution 𝐝', which is k-anonymous with a tolerated utility loss (e.g., isomorphism cost) instanced by the L_1 distance between two vectors 𝐝 and 𝐝'. Second, based on the k-anonymous degree vector 𝐝', the authors proposed to construct a graph G' whose degree distribution is identical to 𝐝'.
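A much-simplified sketch of the first step is shown below (greedy grouping of a descending degree sequence rather than the dynamic program of the original paper, and without the realizability check or the graph-construction step that builds G' from 𝐝'):
```python
def greedy_k_anonymize_degrees(degrees, k):
    """Return a k-anonymous degree sequence d' (assumes len(degrees) >= k)."""
    order = sorted(range(len(degrees)), key=lambda v: -degrees[v])
    anon = list(degrees)
    for start in range(0, len(order), k):
        block = order[start:start + k]
        if len(block) < k and start > 0:          # merge a short tail into the previous block
            block = order[start - k:]
        top = max(degrees[v] for v in block)      # raise every degree in the block to the block max
        for v in block:
            anon[v] = top
    return anon

print(greedy_k_anonymize_degrees([5, 4, 4, 3, 2, 2, 1], k=2))  # -> [5, 5, 4, 4, 2, 2, 2]
```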
k-degree Anonymization in Temporal Graphs <cit.>. For temporal graphs (i.e., graphs whose structures and attributes depend on time <cit.>), this method aims to ensure that the temporal degree sequence of each node is indistinguishable from that of at least k-1 other nodes, while also trying to preserve the utility of the published graph as much as possible. To achieve k-anonymity, the proposed method first partitions the n nodes of the original temporal graph G into m groups using k-means, based on the distance between the temporal degree vectors 𝐝 of the nodes, where 𝐝 is a T-dimensional vector that records the degree of a node at each timestamp t. To realize the utility, constrained by the cluster assignment, the method refines 𝐝 of each node into 𝐝' while minimizing the L_1 distance between the matrices 𝐃 and 𝐃' (which are stacks of 𝐝 and 𝐝'). After that, the anonymized temporal graph G' is constructed from 𝐃' and released for each timestamp individually.
k-degree Anonymization in Knowledge Graphs <cit.>. Different from the ordinary graph, the knowledge graph has rich attributes on nodes and edges <cit.>. Therefore, the k-degree is upgraded with the k-attributed degree that aims to ensure a target node in the anonymized knowledge graph has k-1 other nodes who share the same attributes (i.e., node level) and degree (i.e., edge level) <cit.>. Then the k-degree anonymization solution gets upgraded in <cit.>, which aims to solve the challenge when the data provider wants to continually publish a sequence of anonymized knowledge graphs (e.g., the original graph needs to update and so the anonymized does). Then, in <cit.>, the k-ad (short for k-attributed degree) is extended to k^ω-ad, which targets to defend the node identity disclosure in the ω continuous anonymized versions of a knowledge graph. The basic idea is to partition nodes into clusters based on the similarity of node features and degree; Then, for the knowledge graph updates (like newly inserted nodes or deleted nodes), manual intervention is applied (e.g., adding fake nodes) to ensure the k^ω anonymity; Finally, the anonymized knowledge graph gets recovered from the clusters. This initial idea <cit.> gets further formalized and materialized in <cit.>.
k-neighborhood Anonymization <cit.>. This protection is proposed to defend the node identity disclosure when the adversary comprehends the background knowledge about neighborhood relationships of a target node (i.e., Neighborhood Relationship Queries discussed in Subsection 2.2.1). The core idea is to insert nodes and edges in the original graph G to get an anonymized graph G', such that a target node x can have multiple nodes whose neighborhood structure is isomorphic in G'. Given a pair node v and u in graph G (suppose node v is the target), the authors first propose the neighborhood component and use DFS search to encode the ego-net Neighbor_G(v) and Neighbor_G(u) into vectors. Then, by comparing the difference between Neighbor_G(v) and Neighbor_G(u), the authors then greedy insert missing (labeled) nodes and edges (into Neighbor_G(v) or Neighbor_G(u)) to make Neighbor_G(v) and Neighbor_G(u) isomorphic. Those inserted nodes and edges make G into G'.
k-automorphism Anonymization <cit.>. This method is proposed for the structural queries by attackers, especially for the subgraph queries (as discussed in Subsection 2.2.1). Basically, given an original graph G, this method produces an anonymization graph G' to publish, where G is the sub-graph of G' and G' is k-automorphic. To do this, the authors propose the KM algorithm, which partitions the original graph G and adds the crossing edge copies into G, to further convert G into G'. Hence, the G' can satisfy the k-different match principle to defend the subgraph query attacks, which means that there are at least k-different matches in G' for a subgraph query, but those matches do not share any nodes.
§.§.§ Protection Mechanism Designed for Link Re-Identification
The general idea of solutions here is proposed to reduce the confidence of attackers (which usually can be realized by a probabilistic model) for inferring or predicting links based on observing the published anonymized graphs.
Intact Edges <cit.>. This solution is straightforward and trivial. Given the link re-identification attacker aims to predict a target link between two nodes, and the corresponding link type (i.e., edge type) is denoted as s, then the intact edges strategy is to remove all s type edges in the original graph G and publish the rest as the anonymized graph G'. Those remaining edges are so-called intact.
Partial-edge Removal <cit.>. This approach is also based on removing edges in the original graph G to publish the anonymized graph G'. Partial-edge removal does not exhaustively remove all sensitive (indexed by s type) edges in G, but it removes part of existing edges. Those removed existing edges are selected based on the criteria of whether their existence contributes to the exposure of sensitive links, e.g., they are sensitive edges, they connect high-degree nodes, etc. Even those removals can be selected randomly.
Cluster-edge Anonymization <cit.>. This method requires that the original graph G can be partitioned into clusters (or so-called equivalence classes) to publish the anonymized graph G'. The intra-cluster edges are removed to aggregate each cluster into a supernode (i.e., the number of clusters in G becomes the number of nodes in G'), while the inter-cluster edges are preserved in G'. To be more specific, for each edge whose edge type is not sensitive (i.e., not of type s), if it connects two different clusters, it will be preserved in G'; otherwise, it will be removed. It can be observed that this method needs a clustering pre-processing step, which also means that it can cooperate with node anonymization methods. For example, k-anonymization <cit.> can be applied to the original graph G first to identify the equivalence classes, i.e., which nodes are equivalent in terms of k-anonymization (for example, nodes that have the same degree).
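Following the description above, a compact sketch of this scheme could be as follows (assuming a node-to-cluster assignment is already available, e.g., from a preceding k-anonymization step; the function and variable names are our own):
```python
def cluster_edge_anonymize(edges, cluster_of, sensitive_type):
    """edges: iterable of (u, v, edge_type); returns supernode edges to publish."""
    published = set()
    for u, v, etype in edges:
        cu, cv = cluster_of[u], cluster_of[v]
        # keep only non-sensitive edges that cross two different clusters
        if cu != cv and etype != sensitive_type:
            published.add((min(cu, cv), max(cu, cv), etype))
    return sorted(published)
```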
Cluster-edge Anonymization with Constraints <cit.>. This method is the upgraded version of the previous cluster-edge anonymization, and it is proposed to strengthen the utility of the anonymized graph G' by adjusting the edges between clusters (i.e., equivalence classes). The core idea is to require the equivalence class nodes (i.e., cluster nodes or supernodes in G') to have the same constraints as any two nodes in the original graph G. For example, if there can be at most two edges of a certain type between nodes in G, there can be at most two edges of a certain type between the cluster nodes in G'.
§.§.§ General Privacy Protection Mechanisms
Besides the protections that are designed deliberately for the node identity disclosure and link re-identification risks, there are also other protection mechanisms that are not designed for a specific kind of attacker but for the general and comprehensive scenario, such as randomized mechanisms with constraints and differential privacy schema. Next, we will discuss these research works.
Graph Summarization <cit.>. This method aims to publish a set of anonymized graphs G' given an original graph G, through the graph summarization manner. To be specific, this method relies on a pre-defined partitioning method to partition the original graph G into several clusters, then each cluster will just serve as a node in the anonymized graph G'. The selection of connecting nodes in G' results in the variety of G', which means that a sequence of G' will appear with a different edge connecting strategy. The detailed connection strategy can be referred to <cit.>.
Switching-based Graph Generation <cit.>. Here, the authors aim to publish the anonymized graph G' that should also preserve the utility of the original graph G. Therefore, they propose the graph generation method based on the switching operations that can preserve the graph features. Moreover, the switching is realized in an iterative Monte Carlo manner, each time two edges (a, b) and (c, d) are selected. Then they will switch into (a, d) and (b, c) or (a, c) and (b, d). The authors constrain that two selected edges are switchable if and only if the switching generates no more edges or self-edges, such that the overall degree distribution will not change. After sufficient Monte Carlo switching operations, the authors show that the original graph features (e.g., eigenvalues of adjacency matrix, eigenvectors of Laplacian matrix, harmonic mean of geodesic path, and graph transitivity) can be largely preserved in the anonymized graph G'.
Spectral Add/Del and Spectral Switch <cit.>. The idea of this method starts from Rand Add/Del and Rand Switch. Rand Add/Del means that the protection mechanism randomly adds an edge after deleting another edge and repeats multiple times, such that the total number of edges in the anonymized graph will not change. Rand Switch is the method that randomly switches a pair of existing edges (t,w) and (u, v) into (t,v) and (u,w) (if (t,v) and (u,w) do not exist in the original graph), such that the overall degree distribution will not change. In <cit.>, the authors develop the spectrum-preserving randomization methods Spectral Add/Del and Spectral Switch, which preserve the largest eigenvalue λ_1 of the adjacency matrix 𝐀 and the second smallest eigenvalue μ_2 of the Laplacian matrix 𝐋 = 𝐃 - 𝐀. To be specific, the authors first investigate which edges will cause the λ_1 and μ_2 increase or decrease in the anonymized graph and then select the edges from different categories to do Rand Add/Del and Rand Switch to control the values of λ_1 and μ_2 not change too much in the anonymized graph.
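An illustrative sketch of the plain Rand Switch operation is given below (without the spectral constraints on which edge pairs may be switched); every node degree is preserved by construction, since each switch only rewires endpoints:
```python
import random

def rand_switch(edges, n_switches, seed=0):
    """Return a randomized edge list with the same degree sequence as `edges`."""
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    edge_list = list(edge_set)
    done, attempts = 0, 0
    while done < n_switches and attempts < 100 * n_switches:
        attempts += 1
        (a, b), (c, d) = rng.sample(edge_list, 2)
        if len({a, b, c, d}) < 4:
            continue                                   # shared endpoint -> would create a self-edge
        new1, new2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        if new1 in edge_set or new2 in edge_set:
            continue                                   # would duplicate an existing edge
        edge_set.difference_update({(a, b), (c, d)})
        edge_set.update({new1, new2})
        edge_list = list(edge_set)
        done += 1
    return edge_list
```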
RandWalk-Mod <cit.>. This method aims to inject connection uncertainty by iteratively copying each existing edge from the original graph G to an initially empty graph G' with a certain probability, guaranteeing that the degree distribution of G' is unchanged compared with G. Starting from each node u in the original graph G, this method first obtains the neighbors of node u in G, denoted as 𝒩_u. Then, for each node in 𝒩_u, the method runs multiple random walks and denotes the terminal node of each walk as z. Finally, RandWalk-Mod adds the edge (u,z) to G' with certain probabilities under different conditions (e.g., 0.5, a predefined probability α, or (0.5 d_u - α)/(d_u-1), where d_u is the degree of node u in G).
Next, we introduce an important component of graph privacy-preserving techniques, i.e., differential privacy <cit.>. The general idea of differential privacy is that two adjacent graphs (e.g., graphs differing in one node/edge) are indistinguishable through the output of a randomized perturbation algorithm ℳ; such an algorithm ℳ is then said to satisfy differential privacy. The underlying intuition is that the randomness of ℳ will not let the small divergence between adjacent inputs produce considerably different output distributions, i.e., the output of ℳ does not reveal whether a particular node or edge is present. If the indistinguishability is measured by ϵ, then the algorithm is usually called an ϵ-differential privacy algorithm. The basic idea can be expressed as follows.
Pr[ℳ(G) ∈ S]/Pr[ℳ(G') ∈ S]≤ e^ϵ
where G and G' are adjacent graphs, ℳ is the differentially private algorithm, S is any subset of possible outputs, and ϵ is the privacy budget. The above inequality states that the probabilities of producing any output set are almost equivalent for adjacent inputs.
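As a toy illustration of an edge-level ϵ-differentially private mechanism (not one of the specific mechanisms surveyed below, which instead add calibrated noise to graph statistics), randomized response can be applied independently to every potential edge:
```python
import math, random

def randomized_response_graph(adj_matrix, eps, seed=0):
    """Report each edge indicator truthfully with prob e^eps/(1+e^eps), flip otherwise."""
    rng = random.Random(seed)
    p_keep = math.exp(eps) / (1.0 + math.exp(eps))
    n = len(adj_matrix)
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            bit = adj_matrix[i][j]
            noisy = bit if rng.random() < p_keep else 1 - bit
            out[i][j] = out[j][i] = noisy
    return out
# Two graphs differing in a single edge yield output distributions within a factor e^eps,
# i.e., edge-level epsilon-DP; utility degrades quickly for small eps.
```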
Within the context of graph privacy, the differential privacy algorithm can be roughly categorized as edge-level differential privacy and node-level differential privacy. Given the input original graph G, the output graph of the differential algorithm ℳ(G) can be used as the anonymized graph G' to publish.
Edge-level Differential Privacy Graph Generation. We first introduce edge-level differential privacy algorithms, for which the output distributions of the mechanism on two adjacent graphs (i.e., graphs differing in one edge) are required to be nearly indistinguishable.
* DP-1K and DP-2K Graph Model <cit.>. This edge-level differential privacy algorithm is proposed with the concern of preserving the utility of complex degree distributions. Here, the 1K-distribution, denoted by P_1(G), is the ordinary node degree distribution of graph G, e.g., if 10 nodes have degree 1 then P_1(1) = 10, and if 5 nodes have degree 2 then P_1(2) = 5. The 2K-distribution, denoted by P_2(G), is the joint degree distribution of G, i.e., the number of edges connecting an i-degree node and a j-degree node, iterating over i and j; e.g., P_2(2,3) = 6 means that the number of edges in G connecting a 2-degree node and a 3-degree node is 6. Hence, the DP-1K (or DP-2K) graph model first computes the 1K- (or 2K-) degree distribution P_1(G) (or P_2(G)) and then perturbs it under edge-level DP to obtain P_1(G)' (or P_2(G)'). Finally, an off-the-shelf graph generator (e.g., <cit.>) is called to build the anonymized graph G' based on P_1(G)' (or P_2(G)').
* Local Differential Privacy Graph Generation (LDPGEN) <cit.> is motivated by permuting the connection distribution, i.e., proportionally flipping the existing edge to non-existing and vice versa. To make the generated graph preserve the original utility, LDPGEN <cit.> first partitions the original graph G into the disjoint clusters and adds Laplacian noise on the node's degree vector in each cluster, which guarantees the local edge-level differential privacy. After that, the estimator is used to estimate the connection probability of intra-cluster edges and inter-cluster edges based on the noisy degree vectors, such that the anonymized graph G' is generated.
* Differentially Private Graph Sparsification <cit.>. On the one hand, this method constrains the number of edges in the anonymized graph G' is less than the original graph G to a certain extent. On the other hand, the method requires that the Laplacian of the anonymized graph G' is approximated to the original graph G (i.e., see Eq.1 in <cit.>). The two above objectives are unified into an edge-level differential privacy framework. The new graph G' is then obtained by solving an SDP (i.e., semi-definite program) problem.
* Temporal Edge-level Differential Privacy. In <cit.>, two temporal graphs are adjacent if they only differ in one update (i.e., the existence and non-existence of a temporal edge, different weights of an existing temporal edge). Based on the Priv-Graph algorithm (i.e., adding noise to graph Laplacian matrix), Sliding-Priv-Graph <cit.> is proposed to (1) take recent updates and ensure the temporal edge-level differential privacy and (2) meet the smooth Laplacian property (i.e., the positive semi-definite of consecutive Laplacian matrices). Moreover, in <cit.>, the authors distinguish the edge-adjacency and node-adjacency in the temporal graphs. Two temporal graphs are node-adjacent (or edge-adjacent) if they only differ in one node (or edge) insertion or deletion.
* Deep Graph Models with Differential Privacy. Following the synergy of deep learning and differential privacy <cit.>, another way to preserve privacy is targeting the gradient of deep graph learning models. In <cit.>, a deep graph generative model called DPGG_AN is proposed under the edge-level differential privacy constraints, where the privacy protection mechanism is executed during the gradient descent phase of the generation learning process, by adding Gaussian noise to the gradient of deep learning models.
Node-level Differential Privacy Graph Generation. Compared with edge-level differential privacy, node-level differential privacy is relatively difficult to formalize and satisfy. In <cit.>, the authors contribute several theoretical node-level differential privacy solutions, such as flow-based and LP-based Lipschitz extensions. However, they all focus on releasing certain graph properties rather than the graph data itself, such as anonymized degree distributions or subgraph counts. The same research flavor also appears in relevant node-level differential privacy works like <cit.>. Again, differential privacy mechanisms on graphs are a large and comprehensive topic; a more detailed introduction and extensive literature review can be found in <cit.>.
§.§ Other Aspects of Graph Anonymization
Here, we would also like to review several graph anonymization techniques, but the difference from the majority mentioned above is that: they are not publishing the anonymized graph G' but anonymize some non-trivial and graph statistics of the original graph G and release them to the public <cit.>. The central requirement for protecting the graph statistics is that some scalar graph parameters are essential to describe the graph topology (e.g., degree distributions) or even reconstruct the graph topology (e.g., the number of nodes and edge connection probability in the Erdos-Renyi graph). To this end, some methods focus on protecting the important graph parameters and their statistics before releasing them. For example, the spectrum of a graph (i.e., eigen-decomposition of the graph Laplacian matrix) can preserve many important graph properties such as topological connections, low-pass or high-pass graph single filters, etc. Therefore, in <cit.>, the authors proposed to permute the eigen-decomposition under the differential privacy and then release the permuted parameters. To be specific, given the original eigenvalues and eigenvectors, certain calibrated random noises are sampled and added to them under the differential privacy constraint. Under the same protection mechanism, i.e., differential privacy, the protection goal is set to be the number of occurrences of subgraphs in <cit.>, the sequence of degree distribution in directed graphs and undirected graphs in <cit.>, and the edge connection probability of random graphs in <cit.>.
§.§ Challenges and Future Opportunities
After introducing different graph anonymization techniques, we would like to share some open questions and corresponding challenges.
§.§.§ Preserving Privacy for Temporal Graphs
As discussed above, most privacy-preserving graph anonymization methods still consider the input graphs as static. However, in complex real-world scenarios, the graphs are usually evolving over time <cit.>, which brings critical challenges to the current privacy-preserving static graph generation process. In other words, the time domain enriches the node attribute dimension and may also dictate the attribute distribution, which leads to increased exposure risk. For example, some graphs contain multiple dynamics and accurately representing them could contribute to graph tasks like classification <cit.>. But, the existence of various dynamics increases the probability of being unique and enlarges the leaking risk.
§.§.§ Preserving Privacy for Heterogeneous Graphs
During node identity disclosure and link re-identification, it can be observed that the majority of background knowledge comes solely from structural queries, which is already forceful enough. In heterogeneous graphs <cit.>, the abundant node and edge features increase the risk of leaking sensitive information and bring challenges to protection mechanisms, especially when heterogeneous graphs start to evolve <cit.>.
To the best of our knowledge, how to generate privacy-preserving heterogeneous or temporal graphs remains open.
* What kind of feature information is sensitive in heterogeneous or time-evolving graphs and should be hidden in the generated graph?
* If the corresponding sensitive information is determined, what techniques are effective for protecting structures and features in the heterogeneous or time-evolving environment?
* Last but not least, if the corresponding protection mechanism is designed, how to maintain the generation utility simultaneously with privacy constraints?
§ GRAPH DATA PRIVACY-PRESERVING COMPUTATION
In recent years, graph machine learning has become increasingly popular due to the abundance of graph-structured data in various domains, such as social networks, recommendation systems, and bioinformatics. However, graph data is usually distributed over multiple data sources, and each data owner does not have enough data to train satisfactory machine learning models, which require a massive amount of graph data. For example, biochemical industries may wish to collaboratively train a graph neural network model to predict the properties of molecules. While we introduced one solution, privacy-preserving graph data generation, in the last section, another solution is to enable multi-party computation without exchanging raw data. In this section, we introduce federated learning (FL) <cit.>, a machine learning system where multiple clients (i.e., data owners) collaboratively train machine learning models without exchanging their raw data. In particular, we first introduce the framework of federated learning and its applications with graph data in Subsection <ref>. Then we introduce important FL algorithms under three representative graph federated learning scenarios: graph-level FL (Subsection <ref>), subgraph-level FL (Subsection <ref>), and node-level FL (Subsection <ref>). Finally, we summarize the challenges and future opportunities of graph FL in Section <ref>.
§.§ Framework and Applications of Federated Learning
Federated learning (FL) <cit.> is a distributed learning system where multiple clients (i.e., data sources) collaborate to train a machine learning model under the orchestration of the central server (i.e., the service provider), while keeping their data decentralized and private <cit.>. This subsection provides an exposition on the FL framework, followed by an overview of the application of federated learning on graph data.
§.§.§ Federated Learning Framework
A typical FL framework has one central server and N clients, each with its own dataset 𝒟_i. The main steps can be summarized as follows:
* Parameter broadcasting. The server broadcasts the current global model to (selected) clients.
* Local update. Each client locally trains its local model.
* Parameter uploading. Each client sends upload the model update back to the server.
* Model aggregation. The server aggregates the model updates collected from clients and updates the global model.
* Repeat: Steps 1-4 are repeated for multiple communication rounds until the global model converges to satisfactory performance.
One of the most popular FL algorithms is FedAvg <cit.>. In each communication round, the server randomly selects a subset of clients and broadcasts the global model to them. Each client locally updates the model with multiple iterations of stochastic gradient descent and uploads its local model back to the server. Finally, the server computes a weighted average of the local model parameters and updates the global model parameters. Algorithm <ref> gives the pseudo-code of FedAvg. Notice that in FedAvg, local data never leaves the client side. Besides FedAvg, most FL algorithms strictly follow the aforementioned training protocol <cit.>, or roughly follow it with a few modifications <cit.>.
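A compact sketch of the FedAvg loop is shown below (assumptions: each client exposes a `local_update` routine returning its locally trained parameters and local sample count; models are flattened into 1-D parameter vectors; client sampling is uniform):
```python
import numpy as np

def fedavg(global_params, clients, rounds, clients_per_round, rng=np.random):
    for _ in range(rounds):
        # Step 1: select clients and broadcast the current global model
        selected = rng.choice(len(clients), size=clients_per_round, replace=False)
        updates, weights = [], []
        for idx in selected:
            # Steps 2-3: local training on private data, then upload parameters only
            local_params, n_samples = clients[idx].local_update(global_params.copy())
            updates.append(local_params)
            weights.append(n_samples)
        # Step 4: aggregate by data-size-weighted averaging of local models
        weights = np.asarray(weights, dtype=float)
        weights /= weights.sum()
        global_params = sum(w * p for w, p in zip(weights, updates))
    return global_params
```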
FL protects client privacy in two main ways. Firstly, instead of transmitting the raw data, FL transmits only the model parameters, which are updated based on the local data of each client. By doing so, FL ensures that sensitive data remains on the client's device and is not transmitted to the central server and other clients. Secondly, the model parameters uploaded to the server only reveal the distribution of local data, rather than individual data points. This approach helps to maintain privacy by obscuring the specific data points used to train the model.
FL can be equipped with differential privacy mechanisms <cit.> to enhance privacy protection. As described in the last section, differential privacy is a technique that involves adding noise to data in order to obscure individual contributions while still maintaining overall data patterns. However, different from graph generation, where the noise is added to the data (e.g., node feature, edges, etc), in the context of FL, the noise is added to the uploaded and downloaded model parameters. This ensures that even if an attacker were to obtain the model parameters, they would not be able to accurately infer the raw data from the model parameter. By adding moderate noise to the parameters, the model's accuracy may be slightly reduced, but the overall performance remains comparable to non-private models. In summary, by using differential privacy mechanisms, FL can achieve even better privacy protection by making it harder for attackers to identify the sensitive data contributed by individual clients.
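A common recipe for such protection, sketched below, is to clip each client's model update and add Gaussian noise on the client side before uploading; calibrating the noise multiplier to a target (ϵ, δ) guarantee requires a privacy accountant, which is omitted here:
```python
import numpy as np

def privatize_update(update, clip_norm, noise_multiplier, rng=np.random):
    """Clip the update to an L2 norm bound and add Gaussian noise before upload."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```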
§.§.§ Application of Graph Federated Learning
In this part, we introduce important applications of federated learning on graph data. Roughly, we survey three representative application scenarios: graph-level FL, subgraph-level FL, and node-level FL.
* Graph-level FL: Each client has one or several graphs, while different graphs are isolated and independent. One typical application of graph-level FL is for drug discovery <cit.>, where biochemical industries collaborate to train a graph neural network model predicting the property of molecules. Each molecule is a graph with basic atoms as nodes and chemical bonds as edges.
* Subgraph-level FL: Each client has one graph, while each graph is a subgraph of an underlying global graph. One representative application of subgraph-level FL is for financial transaction data <cit.>. Each FL client is a bank that keeps a graph encoding the information of its customers, where nodes are individual customers and edges are financial transaction records. While each bank holds its own graph, customers in one bank may have connections to customers in another bank, introducing cross-client edges. Thus, each bank's own graph is a subgraph of an underlying global graph.
* Node-level FL: Each client is a node of a graph, and edges are the pairwise relationships between clients, e.g., their distribution similarity or data dependency. One example is the smart city, where clients are traffic sensors deployed on the road and linked to geographically adjacent sensors. While the clients form a graph, each client can make intelligent decisions based on locally collected road conditions and information from nearby devices.
Figure <ref> illustrates the three application scenarios above. Next, we investigate each application scenario in the following three subsections individually.
§.§ Graph-level FL
In this subsection, we investigate graph-level FL. Graph-level FL is a natural extension of traditional FL: while each client has one or several graphs, different graphs are isolated and independent. The goal of each client is to train a graph neural network (GNN) model for a variety of local tasks, e.g., node-level (e.g., node classification), link-level (e.g., edge prediction), or graph-level (e.g., graph classification).
One of the most representative applications of graph-level FL is drug discovery, where graphs are molecules with atoms as nodes and chemical bonds as edges. Each FL client can be a pharmaceutical corporation that owns molecule data. Multiple corporations collaborate to train a better model for molecular property prediction.
The biggest challenge of graph-level FL is the non-identical distribution among different clients' data. Since each client in FL collects their local data individually, their local datasets usually have a different distribution. For example, different pharmaceutical corporations may focus on different types of molecules. Such heterogeneity among clients' data distributions introduces optimization challenges to FL. Moreover, when clients' distribution is largely different, it might be harmful or even impossible to train one universal global model across all clients. More sophisticated techniques are required to achieve beneficial collaboration.
Next, we will introduce algorithms for graph-level FL in two parts: global federated learning and personalized federated learning. Since graph-level FL is a natural extension of traditional FL, we will cover both general FL algorithms and graph FL algorithms.
§.§.§ Global Federated Learning
Global federated learning (GFL) aims to train a shared global model for all clients. FedAvg <cit.> provides an initial solution for training GNNs with isolated graphs from multiple clients. However, when clients have significantly different underlying distributions, FedAvg requires many more communication rounds to converge to a satisfactory model, and may converge to a sub-optimal solution <cit.>. This phenomenon of worse convergence is usually explained by weight divergence <cit.>, i.e., even with the same parameter initialization, the model parameters of different clients are substantially different after the first local stochastic gradient descent (SGD) step. With different model parameters, the mean of client gradients can differ from the gradient in centralized SGD and introduce error to the model loss <cit.>.
Data-sharing.
To tackle the non-IID challenge in FL optimization, a simple but effective method is to share a small amount of data among clients. <cit.> first explore the association between weight divergence and the non-IIDness of the data, and propose sharing a small amount of data between the server and all clients. As a result, accuracy can be increased by 30% on the CIFAR-10 dataset <cit.> with only 5% globally shared data. <cit.> further improves the privacy of this approach by sharing averages of local data points instead of raw data. Specifically, each client uploads averaged data, receives averaged data from other clients, and performs Mixup <cit.> data augmentation locally to alleviate weight divergence. However, both methods require modification of the standard FL protocol and the transmission of data. Another way to improve privacy is to share synthetic data generated by generative adversarial networks (GANs) <cit.> instead of raw data. The synthetic data can be a collection of each client's synthetic data generated with local GANs, or data generated with one global GAN trained in FL <cit.>. However, it is unclear whether GANs provide enough privacy, since they may memorize the training data <cit.>.
Modifying local update.
Another line of research modifies the local update procedure to alleviate weight divergence without changing the communication protocol of FL. FedProx <cit.> adds a proximal term to the local objective to stabilize the training procedure. The proximal term is the squared L2 distance between the current global model and the local model, which prevents the local model from drifting too far from the global model. SCAFFOLD <cit.> estimates how local updates deviate from the global update and then corrects the local updates via variance reduction. Based on the intuition that the global model can learn better representations than local models, MOON <cit.> conducts contrastive learning at the model level, encouraging agreement between the representations learned by the local and global models.
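For intuition, a minimal sketch of a FedProx-style local update is shown below; `grad_fn`, the dictionary parameter representation, and the hyperparameter values are assumptions made for illustration, not the reference implementation.

```python
def fedprox_local_update(w_global, grad_fn, mu=0.01, lr=0.1, steps=10):
    """Local update with a proximal term, in the spirit of FedProx.

    grad_fn(w) is assumed to return the gradient of the client's empirical
    loss at w. The extra mu * (w - w_global) term pulls the local iterate
    back towards the broadcast global model, limiting client drift.
    """
    w = {k: v.copy() for k, v in w_global.items()}
    for _ in range(steps):
        grads = grad_fn(w)
        for k in w:
            w[k] -= lr * (grads[k] + mu * (w[k] - w_global[k]))
    return w
```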
§.§.§ Personalized Federated Learning
While the aforementioned algorithms can accelerate the model optimization for GFL, one model may not always be ideal for all participating clients <cit.>. Recently, personalized federated learning (PFL) has been proposed to tackle this challenge. PFL allows FL clients to collaboratively train machine learning models while each client can have different model parameters.
Clustered FL.
In clustered FL, clients are partitioned into non-overlapping groups. Clients in the same group share the same model, while clients from different groups can have different model parameters. In IFCA <cit.>, k models are initialized and transmitted to all clients in each communication round, and each client picks the model with the smallest loss value to optimize. FedCluster <cit.> iteratively bipartitions the clients based on the cosine similarity of their gradients. GCFL <cit.> generalizes this idea to graph data, enabling collaborative training with graphs from different domains. Observing that the gradients of GNNs can fluctuate, GCFL+ <cit.> uses a gradient sequence-based clustering mechanism to form more robust clusters.
Personalized Modules.
Another prevalent approach to PFL is personalized modules. In these works, the machine learning model is divided into two parts: a shared part and a personalized part. The key is to design a model structure suitable for personalization. For example, when a model is split into a feature extractor and a classifier, FedPer <cit.> shares the feature extractor and personalizes the classifier, while LG-FedAvg <cit.> personalizes the feature extractor and shares the classifier. Similar techniques are used in FMTGL <cit.> and NGL-FedRep <cit.>. Moreover, PartialFed <cit.> can automatically select which layers to personalize and which layers to share. On graph data, <cit.> observe that while feature information can be very different across domains, some structural properties are shared, revealing the great potential of sharing structural information in FL. Inspired by this, they propose FedStar, which trains a feature-structure decoupled GNN: the structural encoder is globally shared across all clients, while the feature-based knowledge is personalized.
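The following sketch illustrates the parameter-splitting idea behind methods such as FedPer and LG-FedAvg: only parameters under an assumed shared prefix are averaged across clients, while the rest remain local. The key names and prefixes are hypothetical placeholders.

```python
def aggregate_shared_only(local_models, shared_prefixes=("encoder.",)):
    """Average only the globally shared parameters (e.g., a feature extractor),
    leaving the personalized parameters (e.g., a classifier head) untouched."""
    shared_keys = [k for k in local_models[0]
                   if any(k.startswith(p) for p in shared_prefixes)]
    averaged = {k: sum(m[k] for m in local_models) / len(local_models)
                for k in shared_keys}
    # Each client overwrites its shared parameters with the averaged ones
    # and keeps its own personalized parameters unchanged.
    return [{**m, **averaged} for m in local_models]
```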
Local Finetuning and Meta-Learning.
Finetuning is widely used for PFL. In these works, a global model is first trained with all clients. The global model encodes information about the population but may not adapt to each client's own distribution. Therefore, each client locally finetunes the global model with a few steps of gradient descent. Besides vanilla finetuning, Per-FedAvg <cit.> combines FL with MAML <cit.>, a meta-learning algorithm, to improve the performance of finetuning. Similarly, pFedMe <cit.> utilizes Moreau envelopes for personalization. It adds a proximal term to the local finetuning objective and aims to find a local model near the global model with just a few steps of gradient descent. GraphFL <cit.> applies a similar meta-learning framework to graph data, addressing the heterogeneity among graph data and handling new label domains with a few newly labeled nodes.
Multi-task Learning.
PFL is also studied within the framework of multi-task learning. MOCHA <cit.> uses a matrix to model the similarity between each pair of clients; clients with similar distributions are encouraged to have similar model parameters. FedGMTL <cit.> generalizes this idea to graph data. Similarly, SemiGraphFL <cit.> computes pairwise cosine similarity among clients' hidden representations, so that clients with more similar data have greater mutual influence; however, this requires the transmission of hidden representations. FedEM <cit.> assumes that each client's distribution is a mixture of unknown underlying distributions and proposes an EM-like algorithm for multi-task FL. Finally, FedFOMO <cit.> allows each client to use different mixture weights over local models during the aggregation step, providing a flexible way for model aggregation.
Graph Structure Augmentation.
In the previous works, graph structures are considered as ground truth. However, graphs can be noisy or incomplete, which can hurt the performance of GNNs. To tackle incomplete graph structures, FedGSL <cit.> optimizes the local client's graph and GNN parameters simultaneously.
§.§ Subgraph-level FL
Similar to graph-level FL, each client in subgraph-level FL holds one graph. However, each client's graph is a subgraph of a latent, larger global graph. In other words, the global graph contains cross-client edges whose two endpoints belong to different clients. The task is usually node-level, and the cross-client edges can contribute to the task.
One application of subgraph-level FL is financial fraud detection. Each FL client is a bank aiming to detect potential fraud with transaction data. Each bank keeps a graph of the information of its customers, where nodes are individual customers and edges are transaction records. While each bank holds its own graph, customers in one bank may have connections to customers in another bank, introducing edges across clients. These cross-client edges help to train better ML models.
The biggest challenge for subgraph-level FL is to handle cross-client edges. In GNNs, each node iteratively aggregates information from its neighboring nodes, which may be from other clients. However, during local updates in traditional FL, clients cannot get access to the data from other clients. Directly exchanging raw data among clients is prohibited due to privacy concerns. It is challenging to enable cross-client information exchange while preserving privacy. Moreover, when nodes' identities are not shared across clients, the cross-client edges can be missing and stored in none of the clients. Even if we collect clients' local subgraphs, we cannot reconstruct the global graph.
In this subsection, we will mainly focus on two scenarios. In the first part, we introduce algorithms when the hidden entire graph is given but stored separately in different clients. In the second part, we consider a more challenging setting: the cross-client edges are missing, and we cannot simply concatenate local graphs to reconstruct the entire graph losslessly. We focus on how to generate these missing edges or missing neighbors for each node.
§.§.§ Cross-client Propagation
When the cross-client edges are available, the major challenge is to enable cross-client information propagation without leaking raw data. FedGraph <cit.> designs a novel cross-client convolution operation to avoid sharing raw data across clients; it avoids exchanging representations in the first GCN layer. Similarly, FedPNS <cit.> controls the number of sampled neighbors to reduce communication costs.
FedCog <cit.> proposes a graph decoupling operation, splitting the local graph into an internal graph and a border graph. The graph convolution is accordingly divided into two sequential steps: internal propagation and border propagation. In this process, each client sends the intermediate representations of internal nodes to other clients.
Since directly exchanging feature representations between clients can leak private information, FedPerGNN <cit.> designs a privacy-preserving user-item graph expansion protocol for user-item graphs. Clients upload encrypted item IDs to a trusted server, and the server matches the ciphertexts of item IDs to find clients with overlapping items. DP-FedRec <cit.> uses private set intersection to exchange edge information between clients and applies differential privacy techniques to further protect privacy.
Different from the above methods, FedGCN <cit.> does not rely on communication between clients. Instead, it transmits all the information needed to train a GCN between the server and each client, only once before the training. Moreover, each node at a given client only needs to know the accumulated information about the node's neighbors, which reduces possible privacy leakage.
§.§.§ Missing Neighbors
For some applications, the cross-client edges can be missing or not stored by any client. Notice that although each client also holds a disjoint graph in graph-level FL, graph-level FL and subgraph-level FL with missing neighbors are substantially different. For graph-level FL, there are essentially no cross-client edges. For example, there are no chemical bonds between two molecules from different corporations' datasets. However, for subgraph-level FL, the cross-client edges exist but are simply not recorded in certain applications. We may obtain suboptimal GNN models if we ignore the existence of cross-client edges. Therefore, the major challenge is to reconstruct these missing edges, or to reconstruct missing neighbors for each node.
FedSAGE <cit.> first defines the missing neighbors challenge and proposes a method to generate pseudo neighbors for each node. It uses existing subgraphs to train a neighbor generator that produces one-hop neighbors for each client to mend the graph. Since missing neighbors are generated locally, no feature exchange is required between clients after the local subgraphs are mended. However, training the neighbor generator requires cross-client exchanges of hidden representations. Similarly, FedNI <cit.> uses a graph GAN model to generate missing nodes and edges.
§.§ Node-level FL
The final application scenario of graph federated learning is node-level FL. Different from the two aforementioned scenarios, each client in node-level FL can hold any type of data, not necessarily graphs. Instead, the clients themselves are nodes in a graph, and the edges encode pairwise relationships such as communication availability or distribution similarity.
One typical application of node-level FL is Internet of Things (IoT) devices in a smart building <cit.>. Due to bandwidth constraints, it can be costly for each IoT device to communicate with the central server, whereas IoT devices in the same local area network can communicate very efficiently. As a result, IoT devices form a graph with pairwise communication availability as edges. Another application is the smart city <cit.>, where clients are traffic sensors deployed on the road and linked to geographically adjacent sensors. Each device collects data and makes real-time decisions without waiting for a response from cloud servers, based on the collected road conditions and information from nearby devices.
In this subsection, we first introduce algorithms where the graph models communication constraints among clients. In these works, there is no central server, and clients can only exchange information along edges. Then, we introduce algorithms where the graph models the relationship between clients' distributions. In these works, although a central server is available, the graph among clients models distributional similarity or dependency, which can potentially contribute to model performance.
§.§.§ Graph as Communication Network
Traditional FL relies on a central server to enable communication among clients. Each client trusts the central server and uploads its model update to the server. However, in many scenarios, a trusted central server may not exist. Even when a central server exists, it may be expensive for clients to communicate with it. Therefore, serverless FL (a.k.a. peer-to-peer FL) has been studied to relieve communication constraints.
The standard solution for serverless FL is fully decentralized FL <cit.>, where each client averages its model parameters only with those of its neighbors. D-FedGNN <cit.> uses these techniques to train GNN models. SpreadGNN <cit.> generalizes this framework to personalized FL, where each client has non-IID data and a different label space.
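A minimal sketch of such neighbour-wise mixing is shown below. It is a generic gossip-averaging step under assumed data structures, not the D-FedGNN or SpreadGNN implementation, and local training between mixing steps is omitted.

```python
def decentralized_round(params, adjacency, mix_weight=0.5):
    """One round of serverless FL: each client averages its parameters with
    those of its graph neighbours (a simple gossip/mixing step).

    params: list of per-client parameter dicts.
    adjacency: list of neighbour index lists, one per client.
    """
    new_params = []
    for i, neighbours in enumerate(adjacency):
        if not neighbours:
            new_params.append(params[i])
            continue
        neigh_avg = {k: sum(params[j][k] for j in neighbours) / len(neighbours)
                     for k in params[i]}
        new_params.append({k: (1 - mix_weight) * params[i][k]
                              + mix_weight * neigh_avg[k]
                           for k in params[i]})
    return new_params
```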
§.§.§ Graph as Distribution Similarities
When the central server is available, a graph of clients may still be beneficial when it models distributional relationships among clients. When edges link clients with highly similar distributions, parameter sharing along edges can potentially improve the model performance for both clients. When edges link clients with data dependency, information exchange along edges can even provide additional features for inference.
FedGS <cit.> models the data correlations of clients with a data-distribution-dependency graph and improves the unbiasedness of the client sampling process. Meanwhile, SFL <cit.> assumes a pre-defined client relation graph stored on the server, and client-centric model aggregation is conducted along the relation graph's structure. GraphFL <cit.> considers client-side information to encourage similar clients to have similar models. BiG-Fed <cit.> applies graph convolution on the client graph, so each client's prediction can benefit from its neighbors with highly correlated data. Finally, <cit.> designs a client sampling technique that considers both communication cost and distribution similarity.
Finally, we summarize the official implementation of FL algorithms and useful repositories in Table <ref>.
§.§ Challenges and Future Opportunities
In this part, we present several limitations in current works and provide open problems for future research.
§.§.§ Model Heterogeneity for Graph-Level FL
In previous works on graph-level FL, although each FL client usually has a different data distribution, it is generally assumed that the model architecture is shared across all clients. However, the optimal architecture for different clients can differ. For example, a well-known issue in GNNs is the over-smoothing problem: when the number of graph convolutional layers is larger than the diameter of the graph, GNN models may learn similar representations for all nodes in the graph, which harms model performance. When FL clients hold graphs of substantially different sizes, it is highly likely that the optimal depth of the GNN model differs between them.
§.§.§ Avoiding Cross-Client Transmission for Sub-graph-Level FL
Most previous subgraph-level FL algorithms rely heavily on direct information exchange along cross-client edges. While such operations are natural variants of graph convolution, they also raise privacy concerns. Moreover, unlike traditional FL, where each client downloads aggregated model parameters that reveal only the population, feature exchange along edges can expose information about individuals. It would be beneficial if cross-client transmission could be avoided without greatly degrading model performance.
§ ENVISIONING
In this section, we analyze the current developments and limitations of privacy-preserving graph machine learning, and explain the necessity of combining privacy-preserving graph data generation and computation. In addition, we identify a number of unsolved research directions that could be addressed to improve the privacy of graph machine learning systems.
§.§ Limitation of Current Techniques
In the previous two sections, we introduced privacy-preserving graph data generation and computation, respectively. However, both techniques have their own limitations.
* For privacy-preserving graph generation, while it can provide good privacy protection for graph data, it also has a significant drawback in terms of model utility. The privacy-preserving techniques applied during data generation are not designed for specific machine learning tasks and may hurt the utility of the resulting model. For example, consider a graph with four nodes a, b, c, and d. The nodes a and b have a positive label, while c and d have a negative label. Switching the edges from (a, b), (c, d) to (a, c), (b, d) does not change the degree distribution of the graph, but it changes the graph from a homophilous graph to a heterophilous graph, i.e., one in which edges are more likely to link two nodes with different labels. This change can harm the performance of many GNN models, which are designed to work well with homophilous graphs <cit.>. It is therefore important to consider the downstream machine learning tasks when designing privacy-preserving techniques for graph data.
* For privacy-preserving graph computation, while FL can avoid the transmission of raw data, it has been shown that transmitting raw model parameters or gradients may not provide enough privacy, as attackers can use the gradient or model update to reconstruct private data <cit.>. Moreover, many subgraph-level and node-level federated learning algorithms require the transmission of hidden representations, which can also leak private information. Therefore, protecting the raw data from being reconstructed is essential to federated learning systems.
§.§ Combination of Privacy-Preserving Graph Data Generation and Computation
To address the limitations of current privacy-preserving techniques, it is essential to combine privacy graph data generation with the graph federated learning frameworks, as shown in Figure <ref>. This approach can provide an effective solution to the privacy preservation issues of graph machine learning models.
Specifically, the generated synthetic data is used instead of the real data during the training process. This means that even if the transmitted information is decrypted, it is just from the generated synthetic data and not the real data. The synthetic data can be generated in such a way that it preserves the statistical properties of the original data while ensuring privacy preservation. This can be achieved using various techniques, including differential privacy, homomorphic encryption, and secure multi-party computation.
The combination of privacy graph data generation and graph federated learning frameworks has several benefits. First, it ensures privacy preservation during the training process by using synthetic data. Second, it enables the transfer of graph machine learning model parameters rather than embedding vectors or other information. This can improve the accuracy and efficiency of the model. Finally, it provides a robust defense against privacy attacks and reverse-engineering, as the transmitted information is just from the generated synthetic data and not the real data.
§.§ Future Directions
Combining privacy-preserving data generation and computation is a promising approach to protect individual privacy while maintaining model utility in machine learning. However, it also poses several challenges and possible future directions.
§.§.§ Distribution of Privacy Budget
When combining privacy-preserving data generation with computation, noise is added to both the raw data and the model parameters. However, it is still unclear how to distribute the privacy budget between data generation and computation in a way that optimizes the privacy-utility trade-off. In this approach, noise is added to the graph data during data generation and to the model parameters during data computation (i.e., federated learning), which results in an overall reduction in accuracy. However, while the privacy analysis for data generation is defined directly on the data space, the privacy analysis for federated learning requires transforming the change in parameter space back to data space. Such a transformation requires estimating the sensitivity of a machine learning algorithm (i.e., how the change of a data point affects the learned parameters), which is only loosely bounded in current works <cit.>. A more precise analysis of privacy is required to better understand the impact of privacy budget allocation on the overall privacy-utility trade-off.
§.§.§ Parameter Information Disentanglement
Another future challenge when combining privacy-preserving data generation and computation is the disentanglement of task-relevant and task-irrelevant information. Currently, the noise added to the model parameters is isotropic, meaning that task-relevant and task-irrelevant information are equally protected. However, not all information is equally important for model utility. If we can identify which information has a significant influence on model performance, we can distribute more privacy budget to this information while allocating less privacy budget to task-irrelevant information. This can result in a better privacy-utility trade-off. Disentangling task-relevant and task-irrelevant information would require a more sophisticated analysis of model architecture and data characteristics to determine which features contribute most to model performance.
§ CONCLUSION
In this paper, we review privacy-preserving techniques for graph machine learning, from data to computation, considering both the situation where data needs to be shared and the situation where data is prohibited from being transmitted. Specifically, for privacy-preserving graph data generation techniques, we first analyze powerful attackers and then introduce how corresponding protection methods have been proposed to defend against them. For privacy-preserving graph data computation, we focus on the federated learning setting and discuss how the general federated learning framework is applied to graph data, the potential challenges originating from non-IIDness, and how nascent research works address them. Finally, we analyze current limitations and propose several promising research directions.
§ ACKNOWLEDGEMENTS
This work is supported by the National Science Foundation (1947203, 2117902, 2137468, 1947135, 2134079, and 1939725), the U.S. Department of Homeland Security (2017-ST-061-QA0001, 17STQAC00001-06-00, and 17STQAC00001-03-03), DARPA (HR001121C0165), NIFA (2020-67021-32799), and ARO (W911NF2110088). The views and conclusions are those of the authors and should not be interpreted as representing the official policies of the funding agencies or the government.
|
http://arxiv.org/abs/2307.10195v1 | 20230710200730 | ChatGPT for Digital Forensic Investigation: The Good, The Bad, and The Unknown | [
"Mark Scanlon",
"Frank Breitinger",
"Christopher Hargreaves",
"Jan-Niclas Hilgert",
"John Sheppard"
] | cs.CR | [
"cs.CR",
"cs.AI",
"cs.CL"
] |
Mark Scanlon (corresponding author, [email protected]), Forensics and Security Research Group, School of Computer Science, University College Dublin, Ireland
Frank Breitinger ([email protected]), School of Criminal Justice, University of Lausanne, Lausanne, Switzerland
Christopher Hargreaves ([email protected]), Department of Computer Science, University of Oxford, United Kingdom
Jan-Niclas Hilgert ([email protected]), Fraunhofer FKIE, Bonn, Germany
John Sheppard ([email protected]), Department of Computing and Mathematics, South East Technological University, Waterford, Ireland
The disruptive application of ChatGPT (GPT-3.5, GPT-4) to a variety of domains has become a topic of much discussion in the scientific community and society at large. Large Language Models (LLMs), e.g., BERT, Bard, Generative Pre-trained Transformers (GPTs), LLaMA, etc., have the ability to take instructions, or prompts, from users and generate answers and solutions based on very large volumes of text-based training data. This paper assesses the impact and potential impact of ChatGPT on the field of digital forensics, specifically looking at its latest pre-trained LLM, GPT-4. A series of experiments are conducted to assess its capability across several digital forensic use cases including artefact understanding, evidence searching, code generation, anomaly detection, incident response, and education. Across these topics, its strengths and risks are outlined and a number of general conclusions are drawn. Overall this paper concludes that while there are some potential low-risk applications of ChatGPT within digital forensics, many are either unsuitable at present, since the evidence would need to be uploaded to the service, or they require sufficient knowledge of the topic being asked of the tool to identify incorrect assumptions, inaccuracies, and mistakes. However, to an appropriately knowledgeable user, it could act as a useful supporting tool in some circumstances.
ChatGPT Digital Forensics Artificial Intelligence Generative Pre-trained Transformers (GPT) Large Language Models (LLM)
§ INTRODUCTION
The emergence of Generative Artificial Intelligence (GAI) has sparked significant interest and scrutiny across various disciplines, including its potential impact on scientific research and writing <cit.>. In particular, Large Language Models (LLMs), such as ChatGPT – released in November 2022 (<openai.com/blog/ChatGPT>), have been identified as having numerous beneficial use cases and risks in various fields including digital forensic (DF) investigation <cit.>.
These encompass automated script generation, gaining technical or procedural knowledge, multilingual analysis, automated sentiment analysis, etc.
However, as LLMs are language models in the first instance, they are focused on generating an answer and do not always prioritise generating the correct answer. OpenAI state that ChatGPT's latest LLM from March 2023, GPT-4, “is not fully reliable (it hallucinates facts and makes reasoning errors)” and that “care should be taken when using the outputs of GPT-4, particularly in contexts where reliability is important” <cit.>.
Consequently, despite its potential, the use of AI models involves various risks.
For instance, some risks of using LLMs in digital forensics include: training data biases/errors, hallucinations, legal and ethical concerns, explainability/investigator over-reliance, and technical limitations.
At the time of submission, there are no original research publications focused on the application of LLMs to the domain of digital forensics. This paper aims to assess the impact that ChatGPT could have, positive and negative, – specifically focusing on GPT-4. The contributions of this work can be summarised as follows:
* GPT-4 is evaluated in various contexts, including learning about digital forensics topics, identifying relevant artefacts, assisting in searching for artefacts, generating code for forensic activities, detecting anomalies in log files, incident response, and creating storyboards for teaching scenarios.
* For each of these areas, it showcases both the risks and occasional benefits of the technology in its current state.
* Based on the results in these specific areas, the study draws general conclusions and proposes future directions for the utilisation of LLM-based AI in digital forensics.
The remainder of the paper is structured as follows: Section <ref> provides the background to the technology and an overview of the related work. The methodology is discussed in Section <ref>, followed by Sections <ref> to <ref> which provide a discussion of the focus areas for the included experimentation.
A discussion of the good, the bad, and the unknown can be found in Section <ref>. Limitations of the work are highlighted in Section <ref>. The last section concludes the paper and points out future directions.
§ BACKGROUND
AI applications in digital forensics have predominantly centred around data classification and identification tasks, including network forensics, malware investigation, child sexual exploitation material investigation, facial recognition and biometric trait estimation, device triage, timeline reconstruction, and device fingerprinting <cit.>.
With the advancements of LLMs, new applications are possible.
§.§ Large Language Models
LLMs are built using neural networks with typically billions of parameters and corresponding weights, and are trained on very large quantities of unlabelled text. Generative pre-training is a long-established technique used in machine learning <cit.> whereby a Natural Language Processing (NLP) neural network based model is trained (unsupervised) to predict the next word in a sentence from a large corpus of text leveraging the statistical properties of the language itself and subsequently fine-tuned for a specific task. In 2017, Google released the Transformer architecture <cit.>, which uses an attention mechanism to weigh the importance of different words in understanding a piece of text. The Transformer architecture has proven successful in NLP tasks and was foundational for the first iterations of LLMs, including BERT in 2018 <cit.> and XLNet in 2019 <cit.> (both non-generative pre-trained transformers).
§.§ Generative Pre-trained Transformers
GPTs are one family of LLMs created by OpenAI in 2019, and are used as a framework for creating GAI applications. ChatGPT is a chatbot application built on top of OpenAI's GPT-3.5 and GPT-4. At the time of launch, ChatGPT exclusively used GPT-3.5, and continues to do so for the freely-accessible tier. Paid subscribers, or Plus members, have access to the GPT-4 model. Figure <ref> summarises the different performance characteristics between the two versions of GPT according to OpenAI.
In addition, GPT-4 also facilitates several additional plugins, including web browsing (live up-to-date data retrieval), code optimisation, etc., – made available through a limited alpha program. OpenAI does not declare much detail about GPT-4's architecture, model size, hardware, training compute, dataset construction, or training methods for commercial competitiveness reasons <cit.>.
§ METHODOLOGY
To assess the applicability of ChatGPT for digital forensic investigations, a selection of areas within this domain was identified. Although these domains do not provide full coverage of all possible uses of LLMs for digital forensics, they are representative.
They provide a variety of possible uses and are derived by considering existing uses of ChatGPT that have been discussed, e.g., code generation and creative writing <cit.>, and applying them in the context of digital forensics. In total, six representative areas have been identified.
For each topic area, a brief explanation is given, followed by a series of specific illustrative examples of conducted experiments. An experiment is defined as a conversation on a particular thread and consists of one or more prompts that were given to ChatGPT attempting to achieve a specific aim. All experiments were chosen to highlight the strengths, limitations, and dangers of the technology.
Example subsets of the experiments performed as part of this work are provided in the text of this paper. Since ChatGPT responses are non-deterministic given identical prompts, a static repository of the prompts used and corresponding responses can be found in a GitHub repository associated with this paper <https://github.com/markscanlonucd/ChatGPT-for-Digital-Forensics>. This repository has a folder structure corresponding to each of the experimentation sections of this paper, i.e., Sec. <ref> to <ref>.
The given answers were evaluated and validated to draw appropriate conclusions for each topic area. This was done based on fact-checking where possible, as well as the authors' experience in digital forensic processing, programming, and teaching. Each section concludes with a summary of these findings, from which general results are extrapolated and presented in Sec. <ref>.
§ CHATGPT FOR ARTEFACT IDENTIFICATION
Operating system artefacts are vital for investigators, as they provide valuable insights into the activities of a device, including communication history, data origins, and overall device usage. These artefacts not only help investigators tell a comprehensive story, but also serve as corroborating evidence.
§.§ File Downloads
ChatGPT was prompted for assistance in determining if a file had been downloaded to a Windows 10 PC by a particular user. The generated text highlighted several possible places to examine such as the associated metadata, the browser history, the user's downloads folder, the Windows Event logs, network logs as well as using the third-party tools EnCase, FTK, or X-Ways Forensics. The response also included a warning at the end, stating “Keep in mind that it's essential to follow proper forensic procedures and maintain a chain of custody to ensure that the evidence you gather is admissible in court”.
When the prompt was refined to state that the investigator suspected the file was downloaded through Skype rather than through a browser, ChatGPT refined its answer, specifying the location of the Skype conversation history database and the Skype downloaded file's location. It repeated the Windows Event logs, network logs and tools list but with more focus on Skype such as “Use Event Viewer to check for any relevant events related to Skype or file transfers during the timeframe in question” and “Look for Skype-related traffic, e.g., the IP address and ports used by Skype, and file transfer events”.
§.§ File Execution
A query was submitted about how to determine if a file had been executed on a Windows 10 machine by a particular user. The response to this focused on the Windows Security Event logs and the event ID 4688 for process creation, prefetch files, UserAssist registry keys and the NTFS filesystem metadata.
When asked “are there any other artefacts that I should consider”, the response listed further artefacts such as the Windows Task Scheduler, LNK files, Shellbags, Windows PowerShell History, Windows Search Index (WSI), System Resource Usage Monitor, Browser History and Cache, and logs created by the operating system, applications, or security software. When prompted again with “are there any other artefacts that I should consider”, it this time added Amcache, Shimcache, UserActivity cache, jumplists, and network artefacts, as well as suggesting memory forensics, external devices, and filesystem journaling. When it was again prompted with the same query, it presented more artefacts. Among these were links to tools that resolved in some instances, but in other cases produced 404 errors. Two examples of this included links to Eric Zimmerman tools called SuperFetch Parser and ShimDBExtractor, neither of which are tools available on Zimmerman's GitHub page. Tools created by Zimmerman that are available with similar names are the prefetch parser PECmd, the shim database parser named SDB, and the shim cache parser AppCompatCacheParser.
§.§ Cloud Interaction
Posing as a law enforcement agent looking for evidence on a Windows computer that had interacted with a cloud storage platform, items of evidence identified for examination included web browser history and cache, log files, prefetch, registry hives, cloud storage platform clients, WSI, email clients, RAM artefacts and deleted or encrypted files. When ChatGPT was prompted that the investigator suspected the cloud platform was Google Drive, the response had some overlap such as looking at browser artefacts and email content, Windows registry and network traffic, as well as some more specific items such as the Google Drive desktop application, to look for Google account information and the Google Drive app on the associated mobile device if it is available. When pushed further on finding and interpreting the client's settings, local cache, and any synchronised files or folders for Google Drive for Desktop, it presented paths to the locations of configuration files and databases. This was also done for Dropbox and AWS S3 buckets. In some cases, the paths given resolved correctly, while in other cases there were similar names and some did not resolve.
§.§ Summary
While it can specify some interesting and important artefacts to look at, ChatGPT seems to focus heavily on the use of Windows Event Logs as its primary location for evidence. Though Windows Event Logs are extremely important and useful to an investigator, ChatGPT does not immediately highlight other important artefacts that should be examined. If an investigator were not already aware of important artefacts, these might be missed, meaning that the full story would not be told. There is variance in the depth of response supplied for different artefacts. In some instances, it gives a brief description of the usefulness of a subset of a particular artefact, such as the Windows Event logs or the Registry, and does not comprehensively identify all aspects of that artefact that should be examined. For some artefacts, it explains what data is within them based on the fields, keys, or values that are present. In other instances, it gives detailed and thorough step-by-step guidance on how to locate and extract evidence from the operating system. There were also links to tools that do not appear to exist but resemble tools with similar names and functions. For the examination of cloud-related artefacts, the results were mixed. The areas to examine for determining cloud account information were reasonable; however, the paths given for default locations on the machine were not always consistent with what they should be.
§ CHATGPT FOR SELF-DIRECTED LEARNING OF DIGITAL FORENSICS
This section assesses how suitable ChatGPT is for self-directed learning, i.e., can it educate users in a similar way to current real-world educational offerings? While there are many different possibilities in the real world, it is appropriate to differentiate between the following offerings:
(O1) introductory level, e.g., one lesson/course within another class/degree <cit.>;
(O2) specialised courses, e.g., to obtain unique skills or proposed by vendors to showcase tools;
(O3) digital forensic degrees, i.e., a B.Sc. or M.Sc. degree consisting of many modules; and
(O4) research conferences and workshops, i.e., experts informing themselves about latest trends and developments.
To assess the performance of ChatGPT for these scenarios, objectives from real-world offerings were examined and ChatGPT was assessed to see if it can help learners reach these objectives.
Note that objectives are frequently described using Bloom's taxonomy <cit.>. Bloom's taxonomy is a hierarchical framework that classifies educational learning objectives into six levels, ranging from lower-order thinking skills such as remembering and understanding to higher-order skills like analysing, evaluating, and creating.
§.§ Introductory Level (O1)
Frequently included objectives[For instance, see here <https://study.com/academy/course/computer-science-320-digital-forensics.html#information>.] are memorising basic principles and the forensic process, naming sub-disciplines, explaining the chain-of-custody, or describing computer crime. Consequently, a series of questions were formulated to learn more about these general aspects. Example questions are:
What is digital forensics? Is there a common process model? Are there sub-disciplines, and if so, which ones?
Generally, the answers were correct and provided a short but sufficient overview. ChatGPT described a five-phase model (identification, preservation, collection, analysis, and reporting), summarised well the goals of the chain-of-custody, and identified seven sub-disciplines. Answers also included aspects that are often taken for granted, such as ethical standards needing to be maintained, or that the field is rapidly evolving and it is essential to stay up-to-date.
A downside was that it could not provide the name/author of the process model. It also provided incorrect authors and references to the literature when requested.
Nevertheless, it can be employed as a starting point to learn about the domain, if a lot of detail is not needed or desired.
§.§ Advanced/Expert Level (O2,O3)
University degrees or expert commercial courses deliver in-depth knowledge. Most offerings include sophisticated hands-on activities to apply and practice gained knowledge.
To assess ChatGPT's suitability for this level of training, it was asked questions related to gaining hands-on experience, such as “can you propose exercises/tools to become an expert”, or “can you provide step-by-step descriptions for scenarios”?
ChatGPT agreed that becoming an expert requires hands-on experience and started by proposing general exercises such as examining memory dumps using Volatility or analysing disk images using Autopsy.
It also recommended participation in online challenges such as CTFs, National Collegiate Cyber Defense Competition (CCDC), or SANS NetWars, which require significant experience and are more suitable for experts.
In contrast, the proposed scenarios (creation and solution) were basic and included only 3-4 steps.
To obtain more intermediate exercises, additional details were requested about one of the scenarios (file recovery on FAT32) using several follow-up questions. While the responses were detailed, the explanations were difficult to understand and learners may not be able to follow them.
There were also occasional errors in the answers. For example, there was an error in the Attribute byte () in one of the provided hexdumps:
was provided, which according to <cit.> is invalid.
§.§ Research and Workshops (O4)
These venues present the latest trends and developments. Workshops can vary from more general discussions to highly technical work requiring expert-level knowledge. As the model is not constantly updated (at the time of writing this paper, the knowledge cut-off of the model is September 2021), it will not be able to inform users about these latest developments.
§.§ Tool Explanation
Given that digital forensics frequently involves utilizing tools, the potential of employing ChatGPT as an alternative to a traditional user manual was examined. In this assessment, Wireshark (GUI) and (CLI) were selected as representative tools, and ChatGPT was queried for specific commands, guidance on particular settings, and explanations regarding the interpretation of the output.
The responses were useful, and exploring a tool in an interactive session was more engaging than reading a page. Especially for the CLI, it provided correct commands facilitating the filtering of certain elements and correctly explaining the output. With respect to the GUI, it was able to highlight the correct settings/locations to use the tool.
§.§ Summary
ChatGPT serves as an effective tool for acquiring a general understanding of a domain, particularly for individuals who already possess some existing knowledge. It acts as a valuable refresher, albeit one with a few limitations. Notably, it relies exclusively on text and code listings for explanations, which may be less effective in contexts where diagrams or graphics could better convey the information.
The process of acquiring in-depth knowledge, however, is hindered if the user lacks a prior understanding of the field. This limitation necessitates follow-up questions and manual validation to counter instances of AI-generated `hallucinations'. Furthermore, the lack of accompanying exercises or practical tasks inhibits the application of acquired knowledge, a crucial step in learning and in the higher-level objectives of Bloom's taxonomy.
It does not provide exercises or labs for practical application and also showed weaknesses when it came to helping create them.
§ CHATGPT-ASSISTED KEYWORD SEARCHING
The concept of searching is fundamental to digital forensics, and much of that is based on keyword searching <cit.>. Given that ChatGPT is “a large multimodal model capable of processing image and text inputs and producing text outputs” <cit.>, there does seem to be potential for it to assist in keyword searching. The section below discusses some current and future applications within the search domain.
§.§ Generating Regular Expressions
Experiments were conducted to generate regular expressions for common entities. For example, a regular expression for credit card numbers was very detailed and included an explanation of its constituent parts, the specific start digits for cards from various providers, and a disclaimer that it did not validate the checksum using the Luhn algorithm. However, the expression generated did not take into account any whitespace between number groups. Interestingly, when asked for examples that could be used for testing, despite the claim that “These numbers should match the regular expression provided in the previous answer”, the examples did not match the generated regular expression since they contained whitespace.
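For illustration, one possible way to tolerate spaces or hyphens between digit groups is shown below. This is not the expression ChatGPT produced; it covers only 16-digit numbers and still omits Luhn validation and provider-specific start digits.

```python
import re

# Illustrative pattern allowing optional spaces or hyphens between digits.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

assert CARD_PATTERN.search("4111 1111 1111 1111")
assert CARD_PATTERN.search("4111-1111-1111-1111")
assert CARD_PATTERN.search("4111111111111111")
```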
A regular expression for UK car registration plates was successfully generated, with an accurate description highlighting that it only covered the newer scheme in use since September 2001. A disclaimer was also provided describing that the specific letter combinations for the area code were not validated and there may be false positives.
Email addresses were also tested, and the regular expression generated was described as a “simple regular expression for matching most email addresses”, with a caveat of “Please note that this regular expression does not cover all possible email address formats allowed by the RFC 5322 standard. It works for most common email addresses, but may produce false negatives or false positives in some cases.” Unfortunately, it fails on simple tests such as as it only specified the upper case character set for the top-level domain. Again, the examples it provided for testing did not match.
It was however possible to request a regular expression matching a simple custom policy number format that was invented and provided to the tool. For example, the prompt was supplied “a policy number takes the format of AB, AF, or AZ, followed by between 3 and 5 numeric digits, a hyphen and then 3-5 digits. Can you generate a regex for that?”, which produced the correct regular expression. Three examples were also provided, which did match.
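One regular expression satisfying that specification could look as follows; this is an illustrative reconstruction rather than ChatGPT's verbatim output.

```python
import re

# Matches the invented policy number format described above:
# AB, AF, or AZ, then 3-5 digits, a hyphen, then 3-5 digits.
POLICY_PATTERN = re.compile(r"\bA[BFZ]\d{3,5}-\d{3,5}\b")

for example in ("AB123-4567", "AF12345-890", "AZ999-999"):
    assert POLICY_PATTERN.fullmatch(example)
```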
§.§ Generating Keyword Lists
Another interesting area is the potential for ChatGPT and similar tools to be used to generate keyword lists. This has been extensively discussed in the areas of Search Engine Optimisation (SEO), with Udemy courses already available on the topic, e.g., “ChatGPT for SEO”[<https://www.udemy.com/course/using-chat-gpt-for-seo-search-engine-optimization/>]. In the context of digital forensics, <cit.> discuss some of the challenges in keyword searching. For example, straight keyword searches fail to match variants of that word, missing typos or misspellings, or missing abbreviations. It also describes the use of wildcards to attempt to mitigate some of this, with an example of a sexual harassment case and the use of the term `sex*' to catch sex, sexual, sexuality, sexist, sexism. This is however quite limited as an approach and would not match associated words.
Testing within this area certainly provided long lists of keywords associated with a main term supplied. One example shows synonyms for cannabis generated, and with further prompting provided associated words rather than direct synonyms, and even emojis that might be related. This goes beyond simple synonym generation, which could be done using existing technology. In other examples, requesting common misspellings of a word was also possible, as were abbreviations.
As an additional example, a scenario provided for a sexual harassment investigation is used, asking ChatGPT “If I was conducting a digital investigation into sexual harassment generate a list of keywords that could be used”, it first generated a list of words that could formally describe sexual harassment e.g. `sexual comments', `hostile work environment', `catcalling'. With further prompting, e.g. “What about terms that a victim might include in a message to someone else if they were describing that someone was sexually harassing them?” provided more terms such as `creepy behaviour', `felt humiliated', `powerless', `unwanted compliments'. Also, an alternative prompt of “what about terms to search for that might be in messages from someone that was conducting the sexual harassment” generated another set of keywords that could feed into an investigation, e.g., `sexy', `dirty', `fantasize', `undress', etc. This highlights the need for careful prompt engineering to refine the output. Regarding the quality of this output, no methods were found in the literature on evaluating the effectiveness of keyword lists in a digital forensic investigation, so evaluation of the lists generated is difficult. Further work could engage in studies with investigators to see if they believe terms would result in additional hits, or running these lists over historical cases to determine if additional artefacts could be located with different keyword lists.
§.§ Other Searching Topics
Within the GPT-4 Technical Report <cit.>, one of the main goals is described as being able to “understand and generate natural language text, particularly in more complex and nuanced scenarios”. This can facilitate some other potential uses of LLMs – specifically finding relevant material without the use of keywords and instead detecting specific types of content. This already exists in some commercial products, e.g., Magnet Axiom has an AI feature that attempts to identify grooming/luring chat content <cit.>. In the context of ChatGPT, given that it has summarising capabilities, there is the potential for a more generalised solution, although at present this is a theoretical exercise since this could not be used due to the need to upload evidence to the online service.
However, there are many datasets that could be used to evaluate this, for example, a small sample from the Chat Sentiment Dataset[<https://www.kaggle.com/datasets/nursyahrina/chat-sentiment-dataset>] was supplied and ChatGPT was able to respond by describing whether it was a positive, negative or neutral statement, although it differed in some places from the tagged value, e.g., the statement “The price is a bit high” is tagged as neutral in the dataset, but ChatGPT reported that it “has a slightly negative connotation, as it suggests that the speaker finds the price to be somewhat excessive or more than expected”. An extensive review of accuracy against such datasets is not within the scope of this paper, especially since the tools could not be used in any real case, but if a local model was available or there was interest in such an evaluation, regardless of current real-world application, then future work could make use of the ChatGPT API to evaluate the sentiment analysis capabilities quantitatively, including on other datasets such as `Hate Speech and Offensive Language Dataset' [<https://www.kaggle.com/datasets/mrmorj/hate-speech-and-offensive-language-dataset>]. Aggressive content, grooming, manipulative language, or attempted fraud could all be pursued as types of content to identify and flag within a digital investigation.
Also, models which can ingest images as well as text provide additional potential capability to digital forensic tools. For example, if an image can be described in text, then that text summary could be processed using traditional keyword searching, which allows for multi-modal searches for evidence to take place. Models such as these could also be used for machine translation, where either content from the data source is translated into the search language, or the keyword terms are translated into the target language, however machine translation was not specifically evaluated as part of this paper, but could be considered as future work.
§.§ Summary
There are some potential uses of ChatGPT already within the context of searching in digital forensics. Generating regular expressions and enhancing keyword search lists, either with additional terms, or suggesting abbreviations or misspellings, have all been found to be reasonably effective, although the former requires validation and testing of those regular expressions generated. There are also clearly some potential uses for the technology in future; the ability to summarise documents and answer questions about the nature of the content in a user-friendly manner has extensive potential for digital forensic applications. Unfortunately, the inability to upload evidence to such a service prevents this from being useful in its current form.
§ CHATGPT AND PROGRAMMING IN DIGITAL FORENSICS
Digital forensic investigation often necessitates unique functionalities that may not be available in current software or must be rapidly deployed in resource-limited, live forensic scenarios. The capacity to swiftly create a script for a particular duty is essential in various digital forensic cases. This section examines GPT-4's potential to assist digital forensic investigators by generating scripts for a set of common tasks. Although numerous interactions with ChatGPT were conducted, each subsequent subsection focuses on a representative example, showcasing GPT-4's code generation performance in that area.
§.§ File Carving
The initial experiment tested GPT-4's capability to generate a script to extract files from a captured disk image (either EWF or raw images). The model was prompted to craft a Python script to retrieve PNG files. It produced a script employing the Python libraries pytsk3 (The Sleuth Kit's Python wrapper) and pyewf (for processing Expert Witness Format, or EWF, files), together with a library for dealing with image files. The generated script utilised a filesystem-traversal function from the Sleuth Kit wrapper to navigate the filesystem. Thus, it did not engage in file carving and relied purely on the file extension and filesystem metadata.
The model improved the proposed method's efficiency by adopting a more pragmatic file carving approach for PDF files, leveraging the PDF header and the end-of-file byte signature. The revised script replaces the heavyweight filesystem libraries with standard Python file handling, reading the raw disk image as a byte sequence – independent of the filesystem. The script scans the disk image for the PDF header and carves data until it finds an end-of-file marker, matching the behaviour of many file carving tools. Partially overwritten PDF files, if found post-header, would lead to the extraction of large junk files. The script does not restart file carving if it encounters a second PDF header before an EOF, nor does it handle file fragmentation or potential false positives.
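An illustrative sketch of this header-to-trailer carving strategy (not the exact script GPT-4 produced; file names are placeholders, and, like the generated script, it ignores fragmentation and false positives) is:

    # Carve PDF files from a raw disk image by scanning for the %PDF- header
    # and cutting at the next %%EOF marker (including the marker itself).
    HEADER, TRAILER = b"%PDF-", b"%%EOF"

    def carve_pdfs(image_path, out_prefix="carved"):
        with open(image_path, "rb") as f:
            data = f.read()                      # fine for small images; mmap would suit large ones
        count = 0
        start = data.find(HEADER)
        while start != -1:
            end = data.find(TRAILER, start)
            if end == -1:
                break                            # header with no trailer: stop carving
            end += len(TRAILER)
            with open(f"{out_prefix}_{count}.pdf", "wb") as out:
                out.write(data[start:end])
            count += 1
            start = data.find(HEADER, end)       # resume scanning after the carved region
        return count

    carve_pdfs("disk.raw")                        # hypothetical raw image path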
§.§ RAID Disk Acquisition
The next experiment simulated the acquisition from a series of SSD drives that were part of a RAID and mounted to the workstation using USB write blockers. In the prompt, GPT-4 was advised that the level of RAID used was unknown and that this should be determined and dealt with in the first instance. It wrote a Python solution to this problem for a Linux-based system, using the subprocess library to invoke the mdadm and dd tools.
This script had two functions: one that used the mdadm command to determine the level of RAID in question, and one that acquired a raw disk image from each connected disk.
The generated script presumed four SSDs mounted at consecutive device paths. No verification was established to ensure RAID level consistency across the disks, or to confirm that the disk count matches the number required by the detected RAID level. The RAID level, ascertained from the first disk, is taken to be the level for all four disks without further validation against the other disks. Once the RAID level is discovered, an mdadm command assembles a RAID volume from the write-blocked, mounted SSDs. The script subsequently generates a raw disk image of the assembled volume and then dismounts the RAID volume.
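The essence of this approach can be sketched as follows; the device paths and output names are illustrative assumptions, and, unlike the generated script, the sketch queries every disk rather than only the first:

    import re
    import subprocess

    DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]   # assumed write-blocked device paths

    def detect_raid_level(device):
        # mdadm --examine prints superblock metadata, including a "Raid Level" line.
        result = subprocess.run(["mdadm", "--examine", device],
                                capture_output=True, text=True)
        match = re.search(r"Raid Level\s*:\s*(\S+)", result.stdout)
        return match.group(1) if match else None

    def image_disk(device, out_path):
        # Bit-for-bit raw acquisition with dd; conv=noerror,sync continues past read errors.
        subprocess.run(["dd", f"if={device}", f"of={out_path}", "bs=4M",
                        "conv=noerror,sync"], check=True)

    levels = {disk: detect_raid_level(disk) for disk in DISKS}
    print("Detected RAID levels:", levels)        # check consistency across all disks
    for disk in DISKS:
        image_disk(disk, disk.split("/")[-1] + ".img")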
§.§ Password Cracking
Password protected and encrypted content is frequently a hindrance to lawful investigation <cit.>. This scenario involves using GPT-4 in a digital forensic investigation with an encrypted, password-protected zip file – asking it to develop a password cracking script. Its initial response was, “As an AI language model, I am not allowed to assist in any activities that could be considered illegal or unethical, including providing code or guidance for cracking passwords”. Despite assurances of legal and authorised activity, the model maintained it could not generate a script, instead suggesting alternative means of accessing file content. These included examining the device's storage or backups with forensic tools, persuading the owner to divulge the password, or undertaking cryptanalysis against the algorithm and/or key (typically computationally infeasible). It was then asked to recommend libraries and sample code: “Certainly! Here are some libraries and sample code to help you work with encrypted zip files in a legal and ethical manner”. It then provided Python code using two zip-handling libraries, both presuming pre-existing knowledge of the password.
It was then prompted that a list of plaintext passwords, named “rockyou.txt" was available for testing. The generated scripts were updated to iterate over this file until a password successfully extracted data, or the list ended. Furthermore, alternate password candidate dictionaries were requested. ChatGPT suggested four viable dictionaries[<https://github.com/danielmiessler/SecLists>, <https://crackstation.net>, <https://www.openwall.com/wordlists/>, <https://wiki.skullsecurity.org/Passwords>]. The password cracking scripts were then successfully modified to include these dictionaries, sequentially testing each until data was successfully extracted, or no password candidates were left – this completed the task that was initially resisted.
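To make the shape of such a dictionary attack concrete, the following sketch (with an illustrative archive name) iterates over candidate wordlists until one password extracts the data; note that the standard-library zipfile module only supports legacy ZipCrypto, so AES-encrypted archives would require a third-party library:

    import zipfile
    import zlib

    def dictionary_attack(archive="evidence.zip", wordlists=("rockyou.txt",)):
        zf = zipfile.ZipFile(archive)
        for wordlist in wordlists:                    # try each candidate dictionary in turn
            with open(wordlist, "rb") as wl:
                for line in wl:
                    candidate = line.strip()
                    try:
                        zf.extractall(pwd=candidate)  # raises on a wrong password
                        return candidate.decode(errors="replace")
                    except (RuntimeError, zipfile.BadZipFile, zlib.error):
                        continue
        return None

    print(dictionary_attack())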
§.§ Memory Forensics - Recovering Encryption Keys
GPT-4 was prompted to script an analysis of a memory dump, using Python, to locate potential AES and RSA encryption keys. Presuming interest in only AES keys of 16, 24, or 32 bytes and RSA keys of 128, 256, or 384 bytes, it developed two Python functions: the first scans a binary file for a specified byte pattern, while the second measures the entropy of a given byte sequence. GPT-4 arbitrarily set the entropy threshold to 7.5, indicating the value can be adjusted. It then inspected a file for any byte sequences of the stated lengths with entropy exceeding 7.5.
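The entropy check itself can be sketched as below. One caveat the generated script glossed over is that a 16- or 32-byte window can never reach 7.5 bits per byte, since its entropy is capped at log2 of the window length; the sketch therefore expresses the cut-off as a fraction of the maximum achievable entropy (the dump path, ratio and step size are illustrative):

    import math
    from collections import Counter

    def shannon_entropy(data):
        # Entropy of a byte string in bits per byte.
        counts = Counter(data)
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def find_key_candidates(dump_path, lengths=(16, 24, 32), ratio=0.9, step=4):
        with open(dump_path, "rb") as f:
            dump = f.read()
        hits = []
        for length in lengths:
            cap = math.log2(length)                   # maximum achievable entropy for this window size
            for offset in range(0, len(dump) - length + 1, step):
                window = dump[offset:offset + length]
                if shannon_entropy(window) >= ratio * cap:
                    hits.append((offset, length))
        return hits

    print(len(find_key_candidates("memory.dmp")))     # hypothetical memory dump file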
When asked to search for BitLocker encryption keys, the entropy-based search was narrowed to detect 16 or 32 byte sequences (128-bit or 256-bit) having an entropy level of 7.5 or higher. The script was modified to find the Windows-specific Full Volume Encryption Key (FVEK) or Volume Master Key (VMK) patterns in the memory dump. However, the updated script did not significantly differ from the initial one and lacked Windows-specificity. On further prompting to use a specific tool, namely Volatility[<https://www.volatilityfoundation.org/>], the script and the corresponding command line example were revised to search for a profile with Volatility. This tool was invoked from Python using the subprocess library.
§.§ Summary
In the tested scenarios, GPT-4 effectively generated scripts for various digital forensic tasks. The scripts were well-commented, had adequate error checking, and combined different technologies, e.g., integrating Linux tools into a Python script. The system could also provide detailed explanations of the code's functionality and the decisions behind its creation. However, user-level knowledge of scripting languages and digital forensics is essential for application and to spot any unreasonable assumptions, such as limiting encryption key size or assuming only text files contain sought-after regular expressions. Generated code can not be used blindly. However, any identified limitations can often be rectified by prompting the model what the concerns are.
Interestingly, GPT-4 initially refused to help create code for “gatekept” operations, e.g. potentially unethical or illegal use cases, such as password cracking. However, with further interaction and breakdown of the request into constituent parts, it provided step-by-step advice on techniques, sources, and tools for the restricted task. Ultimately, it generated and optimised the desired code – while emphasising that it should only be used in an authorised and legal context. Users can cleverly bypass some system protections through prompt engineering, while OpenAI continually works to prevent such “jailbreaking” of their built-in protections.
§ CHATGPT FOR INCIDENT RESPONSE
Crucial steps, especially during incident response, are identifying anomalies, finding suspicious activity and discovering possible attacks. It also implies a certain understanding of existing attack vectors as well as the way they have been exploited. This section considers if ChatGPT can be used to facilitate this process.
Source Identification
Before conducting the main experiments, ChatGPT's capability to identify input sources that are typically encountered during incident response investigations was assessed. Textual artefacts, such as output from commands or content of log files, were examined directly, while non-textual artefacts such as Windows Event Logs or the Registry file were converted to textual representations, since ChatGPT only processes text. Additionally, ChatGPT was prompted to identify packet capture output to test the possibility of providing network capture information. While there were occasional instances of uncertainty regarding the exact source of certain artefacts, ChatGPT consistently interpreted the data correctly, laying a strong foundation for further experiments.
§.§ Anomalies
In the initial experiments, ChatGPT's ability to detect anomalies in a system was evaluated. For the purpose of this paper, an anomaly was defined as any deviation from a predefined, ordinary, and benign system behaviour. This task may involve identifying unusual processes, log entries, or files.
One immediate challenge was the limited amount of data that could be provided to ChatGPT for processing. A default process list created by the ps command on a clean Ubuntu 22.04 release consists of roughly 200 lines, which had to be split into multiple parts for ChatGPT to accept as a prompt. A possible workaround for this issue is filtering the information, such as providing only process names or selecting specific lines of the output. However, since incident response is often performed without prior knowledge, this method could lead to the loss of potentially crucial information when parent process IDs or process arguments are excluded. In the experiments, ChatGPT was given process listings from Ubuntu 22.04 and asked to identify atypical processes. While it correctly detected most third-party applications and a custom script, it misclassified default applications such as Firefox. Additionally, its responses were non-deterministic for identical prompts.
As another example, ChatGPT was provided with the content of an SSH authorized_keys file, which attackers can use to gain persistence on a system. Without any context, not even an incident responder is able to distinguish between a legitimate and a malicious key. In the experiment, ChatGPT acknowledged this difficulty and offered helpful best practices for ensuring SSH security. However, in one instance, the comment field of a specific key was altered to include the word “hacker”. Although this field is irrelevant as it is meant only for comments, ChatGPT was triggered by the keyword and incorrectly flagged the corresponding key as suspicious, referring to a non-existent “username hacker”. When later asked for clarification, the model correctly explained that the comment field is intended solely for comments and should not impact the key's legitimacy.
§.§ Suspicious Activity
The experiments were then expanded to a level where anomalies would appear as red flags to any experienced incident responder. This involved creating a reverse shell using a common networking utility, which connects to an attacker's system and executes shell commands.
In this example, ChatGPT failed to detect the obvious reverse shell process within the process listing when prompted for suspicious activity, or even specifically for reverse shells. Only when the utility was mentioned by name did it recognize the process as a strong compromise indicator, providing advice on handling the situation, such as terminating the process and consulting a cybersecurity professional.
In another scenario, an unsuccessful SSH brute force attack was conducted and ChatGPT was provided with log entries from the system's authentication log. Due to size limitations, the entire log file could not be uploaded. Nevertheless, ChatGPT identified the failed login attempts, detected an SSH brute force attack, and extracted the IP address involved from the log extract.
§.§ Attacks
In the final series of experiments for this section, ChatGPT's capacity to identify genuine attacks was evaluated; these were classified as behaviours that are not only suspicious but also executed with malicious intent. First, its response to the Follina exploit CVE-2022-30190 was examined, which leverages the Microsoft Support Diagnostic Tool (MSDT) via a Word document <cit.>. In process-creation logs, this results in an entry for the spawned MSDT process with Microsoft Word identified as its parent, which is most likely a clear indicator of an exploit being used. The corresponding log file was provided to ChatGPT. Although ChatGPT does not recognize the Follina exploit due to its training data ending in 2021, it successfully interpreted the log file and highlighted potential indicators for further examination.
In another example, ChatGPT was prompted to analyse captured output from an ARP spoofing attack <cit.>, in which a single MAC address claims to be responsible for a multitude of IP addresses, which is usually not the case. ChatGPT was unable to identify this anomaly but offered explanations for the behaviour, including ARP spoofing, when explicitly asked.
Furthermore, ChatGPT's ability to parse and interpret data was tested. For network packets, this task is easily performed by tools such as Wireshark, which identifies protocols, analyses them, and presents the results in a way it can be interpreted by a human. ChatGPT was evaluated against the Heartbleed vulnerability (CVE-2014-0160), a bug in the TLS implementations' heartbeat protocol, which enables memory extraction from a server by sending a malformed heartbeat request <cit.>.
Since this vulnerability was discovered in 2014, ChatGPT can provide a detailed explanation and detection methods. However, when given a single malformed packet of a heartbeat request, ChatGPT only parsed and presented basic information like IP addresses and ports. Upon being prompted to interpret the packet as a TLS packet, it parsed the content as TLS fields. However, inconsistencies were observed in the TLS record type identified by ChatGPT across multiple outputs. To investigate further, this experiment was executed 100 times, asking ChatGPT to report only the identified TLS record type. The results are shown in Table <ref>.
These findings demonstrate that ChatGPT's non-deterministic nature led to varying responses. It is important to note that record type 0x14 was spelt differently in three instances. More significantly, none of the provided record types were correct. The actual record type should have been Heartbeat 0x18. Further manual analysis revealed that ChatGPT correctly extracted the field defining the type, but misinterpreted it entirely. Consequently, ChatGPT failed to detect the exploited heartbeat vulnerability in this packet.
§.§ Summary
ChatGPT demonstrates the capacity to aid in the detection of deviations from known, typical behaviours, such as the default configuration of an operating system. However, the experiments revealed inconsistent results, as well as some apparent non-default processes being overlooked in certain runs. Moreover, ChatGPT's performance suffers when contextual knowledge is necessary. Since it lacks training on specific organizational processes, users, logs, or procedures, it cannot effectively analyse information unique to a particular organization or system. In identifying suspicious activity, ChatGPT seems to perform better when provided with input that includes a textual description of an event, such as a failed password login attempt. This observation held true for both Linux and Windows logs, which typically contain additional descriptions. When such information is absent, ChatGPT may overlook critical details, like a reverse shell. A similar pattern emerges in the detection of specific attacks. Beyond the evident limitation of lacking real-time information, which hampers its ability to identify current threats, ChatGPT also struggled to deduce an attack like ARP spoofing based on the provided data. This challenge is particularly pronounced for binary representations, where incorrect and inconsistent assumptions were made during data parsing.
§ CHATGPT FOR GENERATING TEACHING SCENARIOS
When teaching digital forensics, the importance of practical exercises cannot be overstated, and the challenges are discussed in <cit.>. Specifically, <cit.> differentiates “skill specific case studies” and “holistic skill case studies”. It is the latter that requires substantially more effort to create and is described in <cit.> as “Data generation for this type of exercise usually involves construction of a scenario, a storyboard, and simulating the user’s actions over the course of several months”. There are attempts to simplify and automate the process of carrying out a series of actions over a long period of time to provide background activity <cit.>. However, the scenario specifics still require the construction of storyboards, users, and content. Given the impact that ChatGPT and related generative tools have made in the art world for both images[<https://www.theguardian.com/technology/2023/apr/17/photographer-admits-prize-winning-image-was-ai-generated>], poetry and stories[<https://towardsdatascience.com/using-ChatGPT-as-a-creative-writing-partner-part-1-prose-dc9a9994d41f>], this does seem like something that ChatGPT could assist with.
§.§ Storyboarding
It was very easy to prompt ChatGPT to generate an overall storyboard for an intellectual property theft scenario. For example “generate an outline timeline of a scenario where someone within a workplace starts a new job and slowly becomes discontent over a few months and begins to steal intellectual property” produced a 6-month summary of activity that went from the employee joining the company in month 1 and being excited about the opportunity, to month 3 where discontent starts to grow and they “realize that company values don't align with personal beliefs”, through to month 6 where the “Employee's discontent reaches a peak” and there is “Increased resentment toward the company and coworkers” and they are “considering quitting or finding a new job”
Further prompting also generated ideas for their internet history over the course of those months, ranging from company related information in month 1, through to “Techniques for bypassing security measures” and “Online forums discussing illicit activities” in month 4. Further prompting provided specific websites and Internet search terms that could be used to generate a synthetic scenario.
For different scenarios involving stalking, it was also possible to request suggestions for potential digital evidence that would be available on the iPhone of the suspect and with further prompting it was possible to produce a very rich set of scenario notes including innocuous activity, as well as actions related to the scenario. This could inform data generation, either manually, or with automated tools.
§.§ Character Profiles and Interests
During scenario synthesis, it is often necessary to build characters and identities that will either be victims or perpetrators of a crime. Inspired by the use of ChatGPT in the arts fields, prompts were constructed to generate characters for the use in digital forensic teaching scenarios. For example, “generate a persona for an adult male in his 20s that is achieving low grades and university and might turn to crime” produced a summary of a 23-year-old male with a background, education history, personality, financial situation, criminal tendencies, and goals and ambitions. Subsequent prompting was able to generate high-level topics summarising his internet history that would include “academic, entertainment, and potentially incriminating search terms” followed by five themes and example search terms within each.
§.§ Synthetic Content
Considering the need in teaching scenarios to have data in the generated disk images that includes both activities related to the crime under investigation and realistic background activity, additional content was requested. For example, it was possible to generate a chat conversation with several of the character's classmates, his brother, to generate an email from the university stating that his assignment was late and would not be marked, and a response. A list of sample sociology assignments was also generated. These could all add realism to the scenario.
Regarding the aforementioned stalking scenario, a set of anonymised messages could be generated, along with internet history suggestions for the suspect. However, asking for a list of cell towers that the suspect connected to resulted in a message that it was not possible as it required access to real-world data, but a fictionalised list could be created.
§.§ Summary
Considering the value of ChatGPT for this digital forensics use case, the results were extremely well constructed and potentially very useful. Since this is not in an investigative context and there is no incorrect answer, there is little issue with the results generated in this way. Some responses generated less convincing scenarios, e.g., another scenario with a university student turning to crime involved an art student getting involved with a criminal gang and creating counterfeit artwork or forging documents. This is not bad for a teaching scenario but is not as realistic. However, this was easily corrected by suggesting that the alternative drug dealing suggestion was better, and the scenario was updated. Other potential risks exist if this was fed into a system that auto generated content, which could result in material that educators may not want in their scenario disk images. This would need to be manually checked so that nothing inappropriate was added. Nevertheless, for creative generative applications, ChatGPT offers significant potential. Other issues in the storyboarding arose when asking to create a detailed summary of the activities that would need to be carried out on the device to generate the synthetic dataset, as some aspects were missed. However, with further prompting, this was corrected and a new list was generated. Finally, GAI tools that create images and videos, could also add to the richness of synthetic scenario data generated.
§ DISCUSSION
The experiments outlined in this work assessed the effectiveness of ChatGPT for various aspects of digital forensic investigations. The overall results are discussed below.
The Good Through this work's experiments, three major strengths were identified: creativity, reassurance, and avoidance of the blank page syndrome.
ChatGPT has proven itself useful for tasks where it cannot be wrong, which with respect to digital forensics, are creative tasks such as forensic scenario creation, as outlined in Sec. <ref>, or creating inputs, e.g., keyword lists, which may serve as input for further analysis.
Secondly, it provides reassurance, i.e. if an examiner has prior knowledge, it may be cross-compared with ChatGPT. However, it is important to note that prior knowledge is required to identify hallucinations. It was found helpful for code generation and explanation, refreshing a learner's memory on a specific topic, or doing a rudimentary analysis of evidence, e.g. finding suspicious activity log files or other listings.
Lastly, ChatGPT is excellent to obtain a starting point and to avoid the blank page syndrome. For instance, it was used to create basic code snippets which then can be used further. While the generated code was not perfect, it was documented and provided a solid starting point. In most cases, it is better to have an existing skeleton instead of starting with an empty project.
The Bad Naturally, ChatGPT also has some weaknesses requiring it to be used with caution: quality and age of training data, handling highly specialised and uncommon tasks, and interacting with ChatGPT.
As a language model, it is trained on data and thus it may be biased and outdated. This means it cannot be asked about the newest artefacts, e.g., to learn about them or where they are located. Generally, the digital forensic community, compared to some other communities, is rather small and therefore the amount of training data is relatively small too.
The more specialised a scenario was, the less reliable ChatGPT's answer became, which makes sense as these scenarios are likely not contained in the training data.
ChatGPT is text-based, whereas many challenges in digital forensics require the analysis of various kinds of data, e.g., network packets. While it is always a possibility to provide the information in hex, the experiments outlined as part of this paper demonstrate that it works less reliably. In addition, there is also a limitation in terms of input and output length, e.g., one cannot provide a complete log file but must prefilter it first. Lastly, the output is not deterministic, which is not desired in digital forensics where a principle is to be reproducible.
The Unknown Obviously, one cannot upload real evidence to ChatGPT and thus usage is still limited. However, LLMs may be included in forensic products in the future which could then open a variety of new use cases, perhaps to the extent that a basic analysis does not require comprehensive training. For instance, this may allow queries such as:
“Find all text messages that may be considered bullying or scan the hard drive and see if you find any GPS coordinates (e.g., in EXIF data) that indicate that the suspect was at location X”. In other words, interacting with forensic software may become more natural and thus could be performed to some extent by a non-technical investigator.
The experiments showed that not all outputs from ChatGPT are reliable and have to be used with caution, especially as `hallucinations' make it difficult to identify if an answer is correct. On the other hand, similar problems are encountered when relying on information found online in non-peer-reviewed sources such as blogs (which likely have been used by ChatGPT as a training basis). This means, regardless of the source, an examiner is required to understand it before making use of the knowledge. Questions that need to be looked at include: which sources are the least error-prone, and which information is easier to comprehend for an examiner?
Summary This paper's findings indicate that, while ChatGPT has significant potential in the digital forensic investigation field, human expertise remains essential. A critical question arising from this research is how to strike the right balance between leveraging the strengths of AI and maintaining the role of human expertise.
§ LIMITATIONS
While this study provides valuable insights into the potential applications of ChatGPT in digital forensic investigation, it is crucial to acknowledge the limitations that may impact the generalisability and applicability of the findings of this paper. Firstly, the experiments conducted in the study do not cover all aspects of digital forensic investigation and have been conducted in a controlled environment. There are many more examples and use cases that could be tested, but could not be considered and performed as part of this study (due to space constraints). In addition, the experiments might not fully represent the complexity and challenges faced in real-world digital forensic investigations. Results strongly depend on the prompt, i.e., a minor modification in the prompt has led to a very different result. Moreover, given the nondeterministic behaviour of ChatGPT, the results discussed in this paper are not directly reproducible, which is why the interactions analysed as part of this paper are provided statically in the associated GitHub repository[<https://github.com/markscanlonucd/ChatGPT-for-Digital-Forensics>].
§ CONCLUSIONS AND FUTURE WORK
The paper described a series of eight experiments to explore the potential applications of ChatGPT for digital forensics and provides valuable insights.
Many of the limitations identified are consistent with findings from other studies and existing system documentation. In particular, the phenomenon of `hallucination', which nicely disguises the alternative term `incorrect', is a recurring theme. This obfuscation makes the use of ChatGPT in digital forensics a precarious endeavour and underlines the importance of caution and close scrutiny.
Nonetheless, ChatGPT shows potential in certain areas. For example, it can serve as an effective assistant in the area of code generation, provided the user has sufficient knowledge to evaluate, interpret, and correct the results. This operator-dependent effectiveness mirrors that of other automated tools commonly used in digital forensics.
Other possibilities are the generation of keyword lists and the creation of storyboards for test scenarios.
In terms of further work, there are other areas in digital forensics that could be explored but are not suited to an online service model and require a locally deployable model. If such a requirement was met, it would be interesting to explore tasks such as summarising case notes created during an examination, further evaluation of machine translation, image-to-text translation, and more extensive analysis capabilities, including timelines, social network analysis, and authorship attribution.
It is important to remember that despite the hype and sometimes impressive capabilities, this technology is still rather new. This is cause for concern if it is overused, but also shows great potential for the future, and like all automation for digital forensics, it is useful and necessary, but requires caution and competent human oversight.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Mark Scanlon, Frank Breitinger, Chris Hargreaves, Jan-Niclas Hilgert, John Sheppard: Conceptualization, Methodology, Investigation, Writing - Original Draft, Writing - Review & Editing. All authors had equal contribution.
While ChatGPT was the focus of the research conducted as part of this paper, it did not contribute to the paper's content or analysis other than where directly quoted or described.
model6-num-names
|
http://arxiv.org/abs/2307.05775v1 | 20230711200612 | Weisfeiler and Lehman Go Measurement Modeling: Probing the Validity of the WL Test | [
"Arjun Subramonian",
"Adina Williams",
"Maximilian Nickel",
"Yizhou Sun",
"Levent Sagun"
] | cs.LG | [
"cs.LG",
"cs.SI"
] |
Weisfeiler and Lehman Go Measurement Modeling: Probing the Validity of the WL Test
===================================================================================
The expressive power of graph neural networks is usually measured by comparing how many pairs of graphs or nodes an architecture can possibly distinguish as non-isomorphic to those distinguishable by the k-dimensional Weisfeiler-Lehman (k-WL) test. In this paper, we uncover misalignments between practitioners' conceptualizations of expressive power and k-WL through a systematic analysis of the reliability and validity of k-WL. We further conduct a survey (n = 18) of practitioners to surface their conceptualizations of expressive power and their assumptions about k-WL. In contrast to practitioners' opinions, our analysis (which draws from graph theory and benchmark auditing) reveals that k-WL does not guarantee isometry, can be irrelevant to real-world graph tasks, and may not promote generalization or trustworthiness. We argue for extensional definitions and measurement of expressive power based on benchmarks; we further contribute guiding questions for constructing such benchmarks, which is critical for progress in graph machine learning. Our code can be found at: <https://github.com/ArjunSubramonian/wl-test-exploration>.
§ INTRODUCTION
Graph neural networks (GNNs) have been successfully applied to graph-structured data such as social networks, operations networks, and molecules <cit.> to aid in important tasks like content recommendation <cit.>, routing <cit.>, and molecular property prediction <cit.>. Towards improving GNN performance on such tasks, numerous GNN architectures have been proposed <cit.>. In recent years, graph machine learning (ML) practitioners who contribute a new GNN architecture also often theoretically characterize its expressive power <cit.>. Because expressive power is a theoretical construct that cannot be observed and measured directly, it must be inferred from measurements of observable properties. In the GNN literature, expressive power is usually measured by comparing how many pairs of graphs or nodes a GNN architecture can possibly distinguish as non-isomorphic to those distinguishable by the k-WL isomorphism test.
In particular, more distinguishable pairs of graphs or nodes indicates greater expressive power.
Expressive power has been postulated by the graph ML community to be necessary (although not sufficient) to increase model performance on graph tasks <cit.>; as such, increasing the expressive power of GNN architectures has become intimately linked with perceptions of progress in graph ML. However, GNN architectures that are proven to be more expressive often do not empirically improve performance on real-world graph tasks <cit.>. Towards understanding this phenomenon, we adopt the measurement modeling framework from the social sciences <cit.>, which disentangles how graph ML practitioners conceptualize expressive power from how we operationalize its measurement. Notably, k-WL operationalizes the measurement of a specific conceptualization of expressive power: the extent to which an architecture can possibly map non-isomorphic graphs or nodes to distinct representations <cit.>, or capture a mapping function that is injective up to isomorphism.
However, this could be misaligned with how graph ML practitioners conceptualize expressive power.
Even worse, practitioners' conceptualizations of expressive power may be unclear or inconsistent because of the unique requirements of different graph ML tasks.
In this paper, we uncover misalignments between practitioners' conceptualizations of expressive power and k-WL (and the underlying assumptions that cause it) through a systematic analysis of the construct reliability (can it be repeated?) and construct validity (is it right?) of k-WL <cit.>. We conduct a survey (n = 18) of graph ML practitioners to surface their conceptualizations of expressive power and their assumptions about k-WL, which are often not stated in relevant literature.
In particular, our survey reveals that practitioners' conceptualizations of expressive power differ (e.g., some believe that expressive GNN architectures should induce isometry, while others do not). In addition, via our graph-theoretic analysis, we find that k-WL can be misaligned with accepted measurements for a task (e.g., graph edit distance), and does not guarantee isometry. Furthermore, we examine extrinsic limits of k-WL, such as decoder capacity and learning dynamics, and how k-WL can be antithetical to generalization. Complementarily, our benchmark auditing reveals that: (1) 1-WL can distinguish effectively all the non-isomorphic graphs and nodes in many graph ML benchmarks; (2) 1-WL may not provide a useful upper bound on the accuracy of a GNN on every task <cit.>; and (3) GNNs may learn representations that are more optimal with respect to the labels for a task than WL-aligned. Moreover, we show how k-WL can have negative implications for fairness, robustness, and privacy in graph ML. We compare and contrast the findings from our analysis with our survey results. Ultimately, graph ML practitioners would benefit from recognizing that: (1) k-WL may not be aligned with our task, and we should devise other measurements of expressive power; or (2) in practice, k-WL does not limit GNN performance on our benchmarks, and we should construct more rigorous benchmarks for assessing expressive power. Finally, we argue for extensional definitions and measurement of expressive power; we contribute guiding questions for constructing benchmarks for such measurement, which is critical for progress in graph ML.
§ PRELIMINARIES AND RELATED WORK
Notation Suppose we have an undirected graph with node features G = ( V, E, X). V is the set of nodes in G. E⊆ V× V is the set of edges in G. X ∈𝔽^| V| × d_in is a | V| × d_in-dimensional feature matrix in the field 𝔽. Similarly, we have another graph G' = ( V', E', X').
Isomorphisms We first distinguish between isomorphic graphs and isomorphic nodes. An isomorphism is a permutation π : V→ V' such that: (1) π( V) = V'; (2) π( E) = { (π(i), π(j)) | (i, j) ∈ E} = E'; and (3) π(X)_π (i) = X_i' <cit.>.
G, G' are isomorphic if there exists an isomorphism between them. In contrast, an automorphism is a (non-trivial) permutation π : V→ V such that: (1) π( V) = V; (2) π( E) = { (π(i), π(j)) | (i, j) ∈ E} = E; and (3) π(X)_π (i) = X_i <cit.>. Two nodes i, j ∈ V are isomorphic if there exists an automorphism under which π(i) = j.
k-WL
Isomorphism testing, which requires determining if two graphs or nodes are isomorphic, is an NP-intermediate problem <cit.>. k-WL (k ≥ 1) is a hierarchy of deterministic, polynomial-time heuristics for isomorphism testing <cit.>. In 1-WL, each node i ∈ V begins with a color c_i^0 := Hash (X_i), where Hash is an injective hashing function that maps distinct inputs in an arbitrary field to distinct colors in ℚ, and X_i ∈𝔽^d_in <cit.>.
(If the graph does not have node features, every node starts with the same color, or equivalently, modulo a single iteration of k-WL, X_i is the degree of i.)
Each node then iteratively refines its color as:
c_i^t = Hash(c_i^{t-1}, {{ c_j^{t-1} | j ∈Γ_ G (i) }}),
where {{·}} is a multiset and Γ_ G (i) is the set of neighbors of i in G. Refinement terminates when ∀ i ∈ V, c_i^t = c_i^{t-1}; we denote the terminal color by c_i.
Thus, the final colors produced by 1-WL are 1-WL ( G) = ( c_i )_i ∈ V. Let SortedCount (·) be a function that takes as input a vector of colors and outputs a sorted vector of the counts of unique colors. G and G' are not isomorphic if SortedCount (1-WL ( G)) ≠SortedCount (1-WL ( G')); however, if SortedCount (1-WL ( G)) = SortedCount (1-WL ( G')), 1-WL is inconclusive and G and G' may or may not be isomorphic <cit.>. Similarly, two nodes i, j ∈ V are not isomorphic if c_i ≠ c_j; but, if c_i = c_j, 1-WL is inconclusive and i and j may or may not be isomorphic <cit.>.
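For concreteness, a minimal sketch of this refinement, using Python's built-in hash in place of a perfectly injective Hash (so collisions are possible in principle), is:

    from collections import Counter

    def refine(adj, colors):
        # One 1-WL round: hash each node's color together with the multiset of its neighbors' colors.
        return {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v])))) for v in adj}

    def wl_colors(adj, features, iterations):
        colors = {v: hash(features[v]) for v in adj}
        for _ in range(iterations):
            colors = refine(adj, colors)
        return colors

    def sorted_count(colors):
        return sorted(Counter(colors.values()).values())

    def maybe_isomorphic(g1, g2, iterations=3):
        # g = (adjacency dict, feature dict). Unequal histograms certify non-isomorphism;
        # equal histograms are inconclusive.
        return sorted_count(wl_colors(*g1, iterations)) == sorted_count(wl_colors(*g2, iterations))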
k-WL augments 1-WL to refine colors over k-tuples of nodes, rather than individual nodes.
The neighborhood of a k-tuple of nodes consists of other k-tuples that differ in only one node. While 1-WL is exactly as powerful as 2-WL <cit.>, k-WL (k > 2) has strictly more distinguishing power than (k - 1)-WL <cit.>; however, the exact increase in power gained by switching from (k - 1)-WL to k-WL is not known. Furthermore, while the exact non-isomorphic graphs and nodes that k-WL cannot distinguish are unknown, <cit.> shows that for k ≥ 2, there exists a pair of non-isomorphic graphs G, G' of size O(k) nodes that k-WL cannot distinguish.
Connecting k-WL to GNNs A GNN A : 𝔾→ℝ^d_out is a neural network that maps a graph G to d_out-dimensional real-valued node representations ( h_i )_i ∈ V or a single whole-graph representation h_ G (depending on whether the task is node or graph-level). Numerous GNN architectures have been proposed, including GCN, GIN, GraphSAGE, and GAT <cit.>. The architecture of a GNN comprises the operations in each layer and the types of activations between layers, but not the number of layers or parameter values. In contrast, an instantiation of an architecture includes a specific number of layers and particular parameter values.
<cit.> and <cit.> show that GCN can possibly distinguish at most as many pairs of graphs or nodes as non-isomorphic as 1-WL. (We note that <cit.> and <cit.> assume that nodes do not have informative features <cit.>.) In particular, if 1-WL is inconclusive for G, G', then h_ G = h_ G'. Similarly, if 1-WL is inconclusive for i, j ∈ V, h_i = h_j <cit.>. <cit.> proposes Graph Isomorphism Network (GIN), a GNN architecture that is provably as powerful as 1-WL. In other words, if 1-WL can distinguish G, G' as non-isomorphic within L iterations, an L-layer GIN can theoretically (i.e., with appropriate parameters) produce representations h_ G and h_ G' that are different (and similarly for node isomorphisms). <cit.> contributes k-GNNs, which are designed such that for k ≥ 2, if k-WL can distinguish G, G' as non-isomorphic, then a k-GNN can theoretically produce distinct representations h_ G and h_ G' (and similarly for node isomorphisms).
Maron et al. <cit.> find that higher-order invariant and equivariant GNNs with k-th order tensors are as powerful as k-WL.
For a detailed discussion of how expressive power has motivated new GNN architectures, refer to Section V of <cit.>. A few papers have identified shortcomings of k-WL with respect to measuring the expressive power of GNNs and proposed fixes or alternative understandings of expressive power (c.f., <ref>). For an overview of related works on the representational limits of GNNs, consult <ref>.
Graph tasks
A graph task requires a certain graph-related skill to be demonstrated in the context of a particular input-output format <cit.>. We consider graph-level and node-level tasks. (While we do not explicitly treat pairwise node or pairwise graph tasks, our definitions can be extended to these settings.) In a graph-level task τ_ G : 𝔾→ Y, we are given as input a graph G, and we must predict a value in the output space Y for τ_ G. In a node-level task τ_ G, i : 𝔾× V→ Y, we are given as input G and a node i, and we must predict a value in the output space Y for τ_ G, i. The sample space of a task is the set of all possible graph-label and node-label pairs on which a GNN may be evaluated. In contrast, the data distribution of a task is the distribution over the task's sample space that captures the probability of encountering different graph-label and node-label pairs at test time. In this paper, we consider the following common workflow for solving graph tasks with GNNs: Input→Encoder→Decoder→Output. Encoder is a GNN A.
Decoder is a function h (e.g., an MLP) that maps h_ G or h_i to a prediction in Y.
Encoder and Decoder are often learned jointly.
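As a concrete sketch of this workflow (assuming PyTorch Geometric; the depth and layer sizes are arbitrary illustrative choices), a GIN encoder can be paired with an MLP decoder for graph-level prediction as follows:

    import torch
    from torch import nn
    from torch_geometric.nn import GINConv, global_add_pool

    class EncoderDecoder(nn.Module):
        # Encoder: two GIN layers pooled into a whole-graph representation h_G.
        # Decoder: an MLP mapping h_G to class logits.
        def __init__(self, d_in, d_hidden, n_classes):
            super().__init__()
            make_mlp = lambda i, o: nn.Sequential(nn.Linear(i, o), nn.ReLU(), nn.Linear(o, o))
            self.conv1 = GINConv(make_mlp(d_in, d_hidden))
            self.conv2 = GINConv(make_mlp(d_hidden, d_hidden))
            self.decoder = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU(),
                                         nn.Linear(d_hidden, n_classes))

        def forward(self, x, edge_index, batch):
            h = self.conv1(x, edge_index).relu()
            h = self.conv2(h, edge_index).relu()
            h_G = global_add_pool(h, batch)    # whole-graph representation
            return self.decoder(h_G)           # prediction in the output space Y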
§ SURVEY METHODOLOGY
We conduct a survey of graph ML practitioners who are familiar with expressive power to investigate: (1) how clearly and consistently graph ML practitioners conceptualize expressive power; (2) how practitioners perceive the validity of k-WL as a measurement of expressive power; and (3) how practitioners’ conceptualizations and measurements of expressive power are shaped by real-world graph tasks. We begin by asking participants about their background (e.g., occupation, experience with graph ML) and real-world graph tasks with which they are familiar (Q6). We do this to understand the demographics of our sample.
Participants indicated diverse graph ML experiences, including domains like molecules, knowledge graphs, and physics, and tasks like explainability, graph generation, and structured reasoning (Q5). Participants were also familiar with numerous real-world graph tasks, such as content recommendation, drug discovery, material design, knowledge graph reasoning, and fraud detection (Q4). 14 participants had experience proving expressive power, and 17 had experience evaluating GNNs (Q2, Q3).
We then inquire in an open-ended manner into how participants conceptualize expressive power; we do this prior to asking participants to engage more critically with expressive power, so as not to sway their responses. Subsequently, we ask participants about their perceptions of the: (a) clarity and consistency with which expressive power is conceptualized; (b) relevance of expressive power to real-world graph tasks; (c) extent to which expressive power encompasses isometry, GNN architecture vs. instantiation, and the data distribution of tasks; and (d) ethical implications of expressive power. For all survey questions that ask participants to rate their perception, we provide them with a scale that ranges from 1 to 6, with articulations of what 1 and 6 mean in the context of the question. We provide additional details on survey methodology in <ref>, and include the entirety of our survey and aggregate survey responses in <ref>.
§ RELIABILITY AND VALIDITY ANALYSIS
Following measurement modeling, we systematically analyze the reliability and validity of measuring the expressive power of a GNN architecture via comparison to k-WL. We present some of our analysis below and the remainder in <ref>.
§.§ Content validity (contestedness)
To what extent does expressive power have “multiple context-dependent, and sometimes even conflicting, theoretical understandings”? <cit.> This is an important question because the validity of comparing to k-WL is influenced by the clarity and consistency with which expressive power is conceptualized. The graph ML community has primarily focused on k-WL (Q16, Q17). <cit.>, one of the first papers to connect GNNs to the WL test, has been cited 3246 times (as of the writing of this paper); however, only a few researchers have proposed alternative understandings of expressive power (c.f., <ref>). This would suggest that expressive power is minimally contested. This hypothesis would appear to be confirmed by our survey results as well. On a scale of 1–6, participants rated the clarity with which expressive power is defined as 4.111 ± 1.323 (Q8).
In addition, participants rated the consistency with which expressive power is defined as 3.722 ± 1.274 (Q9).
Furthermore, many participants, when prompted in an open-ended manner, defined expressive power as the ability of a GNN to discriminate non-isomorphic graphs (n = 13) or approximate any function on graphs (n = 6) (Q7). Here, ability is distinct from capacity (e.g., model capacity).
However, when prompted more specifically, participants revealed conflicting definitions of expressive power (Q11).
While 8 participants said that expressive power does not involve isometry (i.e., mapping similar graphs to proportionately similar representations), 10 participants said that it does. Furthermore, while most participants agreed that architecture choice influences expressive power, there was disagreement about whether instantiation (e.g., learned parameters) and the sample space and data distribution of graph tasks have any bearing on expressive power.
§.§ Convergent validity
To what extent does comparing to k-WL correlate with accepted measurements of expressive power? The acceptability of other measurements of expressive power is task-dependent (e.g., graph edit distance, random walk kernel). Consider the task of predicting the edit distance between pairs of graphs. For this task, expressive power can be conceptualized as the ability of an architecture to map pairs of graphs with a large edit distance to proportionately distinct representations, and pairs with a small edit distance to proportionately similar representations. Suppose we have an architecture that is exactly as powerful as 1-WL (i.e., an instantiation of this architecture can theoretically distinguish exactly the same pairs of non-isomorphic graphs as 1-WL). Further, consider the graphs in Figure <ref>.
* G_1 and G_2, which have an edit distance of 1, may have more distinct representations than G_1 and G_4, which have a larger edit distance. This is unintuitive (with respect to metric space axioms) and demonstrates that k-WL is misaligned with how expressive power may be conceptualized for edit distance prediction.
* G_3 and G_1, which have an edit distance of 1, may have distinct representations. Furthermore, G_3 and G_4, which have a larger edit distance, may have distinct representations. However, G_1 and G_4 will have the same representation. This is once again unintuitive and misaligned.
These observations threaten the convergent validity of comparing to k-WL for edit distance prediction. However, consider a task that requires counting specific substructures of at most size k in graphs. This is possible with an architecture that is at least as expressive as k-WL <cit.>. Hence, for substructure counting, comparing to k-WL has convergent validity. At the same time, <cit.> and <cit.> show impossibility results for subgraph detection and verification by proving that there exists a graph on which a GNN would require a width or depth dependent on the number of nodes to perform correctly. This highlights a tension between the content validity (c.f., <ref>) and convergent validity of comparing to k-WL for substructure counting. Ultimately, defining expressive power and operationalizing its measurement in the context of a particular task, as well as grappling with the tensions in the validity of this measurement, is important.
§.§ Hypothesis validity
To what extent do measurements of expressive power obtained by comparing to k-WL support hypotheses about expressive power? Graph ML practitioners often posit expressive power as necessary (although not sufficient) to increase GNN performance on real-world graph tasks. This is confirmed by our survey. Participants rated the relevance of expressive power to real-world graph tasks as 3.611 ± 1.461 (Q10). Furthermore, participants rated the informativeness of expressive power for real-world graph task performance as 3.389 ± 1.195 (Q18).
However, there are intrinsic limits to k-WL (c.f., <ref>). There are also two other regimes in which comparing to k-WL is suboptimal:
Extrinsic limits of k-WL (decoder capacity) Suppose that for some k, we have a GNN that is exactly as powerful as k-WL, and can produce a unique representation of each non-isomorphic graph in the sample space of a task. This would suggest that the accuracy of a GNN on the task is upper-bounded by 100%. However, this bound only becomes an equality if: (1) every graph in the task's sample space has a different label, or (2) Decoder has sufficient complexity and capacity to translate representations to correct predictions. In particular, for an arbitrary task, Decoder would need to have been trained on the task's entire sample space and have as high a complexity and capacity as a majority-vote lookup table (or an infinitely wide MLP), which is usually intractable.
For example, consider a task whose sample space is G_1, G_2, G_3 in Figure <ref>, with labels: y_ G_1 = 0, y_ G_2 = 1, y_ G_3 = 0. G_1, G_2, G_3 would have distinct representations, but because their labels are not distinct, Decoder will need to have a complex decision boundary. This occurs in part because k-WL does not preserve isometry, and hence Decoder cannot exploit the geometry of the GNN's representation space to generalize to graphs in the task's sample space that may not be seen during training or to out-of-distribution graphs. In this way, k-WL can be antithetical to generalization, even if it is sample efficient. This observation is important because our survey participants generally indicated that they prioritize generalization (5.0 ± 1.372) over expressive power (4.167 ± 1.098) (Q27).
Extrinsic limits of k-WL (learning dynamics) In practice, GNNs may not learn representations that align with the colorings produced by k-WL, or even be able to (e.g., due to inductive biases of SGD, data sampling). However, survey participants indicated that they believe that GNNs can almost always (3.714 ± 0.825) learn such representations (Q25). Furthermore, depending on the task, it may be more optimal for GNNs to learn representations that are not aligned with k-WL colorings. In other words, k-WL-aligned representations may not be optimal for a specific task, even if they are optimal for arbitrary tasks. For example, consider a task whose sample space is G_1, G_2, G_4 in Figure <ref>, with labels: y_ G_1 = 0, y_ G_2 = 0, y_ G_4 = 1. The 1-WL colorings for G_1, G_2 would be distinct, but since these graphs share the same label, it would be more optimal for a GNN to learn identical or similar representations for both, so that Decoder requires a less complex decision boundary.
§.§.§ Benchmarking expressive power
We investigate the extent to which k-WL colorings are relevant to and empirically align with GNN representations on real-world graph tasks, and compare our findings with survey responses. We run experiments with 1-WL and GIN (which is exactly as powerful as 1-WL in theory) on popular graph-level and node-level benchmarks (Table <ref>). We audit these benchmarks because they have been used by seminal papers that connect k-WL to GNNs to validate the effectiveness of more expressive architectures <cit.>. The benchmarks are from common graph domains (e.g., bioinformatics, social networks, citation networks), and some have node features while others do not.
Relevance of 1-WL to benchmarks We partition benchmark instances (i.e., graphs for graph-level benchmarks, and nodes for node-level benchmarks) into three sets of equivalence classes: (1) instances with the same label E_y; (2) instances that are isomorphic E_π; and (3) instances with the same 1-WL colorings after t iterations of refinement E_WL^t. We denote the cardinality of a set of equivalence classes using | · |, and the number of singleton equivalence classes in a set using #_1 (·). For example, #_1 ( E_π) signifies the number of unique instances in a benchmark, and #_1 ( E_WL^3)/#_1 ( E_π) captures the proportion of 1-WL distinguishable instances to unique instances after three iterations of refinement.
As shown in Table <ref>, | E_π | is often lower than the number of instances, indicating that the benchmarks (e.g., IMDB-BINARY, IMDB-MULTI, PTC_MR) contain numerous isomorphic instances; upon further inspection, the isomorphic nodes are all duplicates. This finding would suggest that it is advantageous for a GNN to be able to distinguish non-isomorphic graphs. However, for IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY[The REDDIT-BINARY dataset was previously compiled and made publicly available by a third party.], PROTEINS, CiteSeer, and PubMed, | E_π | = | E_WL^1|, which evidences that 1-WL is able to distinguish all the non-isomorphic instances in these benchmarks after only one iteration. For the remaining benchmarks, 1-WL can distinguish all or almost all non-isomorphic instances within three iterations. We provide example 1-WL non-distinguishable graph pairs from MUTAG in Figure <ref>. Our findings suggest that, in theory, a GNN that is as expressive as 1-WL would be able to perform close to perfectly on these benchmarks.
To confirm that E_π and E_WL^3 are indeed the same (or similar) partitions, we compute the symmetric adjusted mutual information (AMI) between these sets.
We see in Figure <ref> that, for all graph-level benchmarks, E_π and E_WL^3 have an AMI of 1 or close to 1, indicating these partitions are identical or nearly identical.
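Concretely, given one integer class id per instance for each partition, these agreements can be computed with scikit-learn; the helper below is ours and not part of the released benchmark code:

    from sklearn.metrics import adjusted_mutual_info_score

    def partition_agreement(iso_ids, wl_ids, label_ids):
        # Each argument is a list with one class id per benchmark instance.
        return {
            "AMI(E_pi, E_WL)": adjusted_mutual_info_score(iso_ids, wl_ids),
            "AMI(E_pi, E_y)": adjusted_mutual_info_score(iso_ids, label_ids),
            "AMI(E_WL, E_y)": adjusted_mutual_info_score(wl_ids, label_ids),
        }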
Predictive power of isomorphisms and 1-WL Figure <ref> reveals that E_π and E_WL^3 share an AMI of close to 0 with E_y, which suggests that these partitions are close to independent. That is, distinguishing non-isomorphic graphs provides effectively no information about the labels of these graphs. As discussed previously, this necessitates that Decoder has a high capacity and complex decision boundary. Interestingly, the AMI of E_y with E_WL^3 can be higher than that with E_π because non-isomorphic graphs often have the same label. To further investigate the label predictive power that E_π and E_WL^3 grant, we evaluate the accuracy of the best possible deterministic graph classifier h <cit.>: a lookup table based on majority vote.
Accuracy of majority-vote classifier on predicting labels from isomorphic graphs and 1-WL distinguishable graphs.
Dataset h( E_π→ E_y) (%) h( E_WL^3 → E_y) (%)
IMDB-BINARY 88.60 88.60
IMDB-MULTI 63.27 63.27
REDDIT-BINARY 100.00 100.00
PROTEINS 99.73 99.73
PTC_MR 99.13 99.13
MUTAG 100.00 100.00
Cora 100.00 100.00
CiteSeer 99.97 99.97
PubMed 100.00 100.00
In particular, for an input graph G, let e_ G∈ E_π be the isomorphism equivalence class to which G belongs; then, the prediction ŷ_ G = mode{ y_ G' | G'∈ e_ G} (and similarly for E_WL^3). The accuracy of h (displayed in Table <ref>) provides an upper bound on the performance of any deterministic GNN. We see that the accuracy is or is close to 100% for almost all the benchmarks. This demonstrates that the 1-WL test can produce colorings that are sufficient to solve common graph ML tasks; however, survey participants opined that 1-WL does so only sometimes (3.067 ± 0.594) (Q23). Furthermore, since GNNs do not achieve near-perfect performance on these benchmarks in practice, our results suggest that it is not isomorphism testing or 1-WL expressive power, but rather decoder complexity & capacity and generalization, that limit GNN performance in practice. Moreover, 1-WL does not help provide a useful upper bound on the accuracy of a GNN on every task. Along this vein, survey participants indicated that 1-WL admits a useful upper bound only sometimes (3.133 ± 0.640) (Q22).
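The upper bounds in Table <ref> correspond to the following simple computation (the helper is ours; class_ids holds one equivalence-class id per instance):

    from collections import Counter, defaultdict

    def majority_vote_accuracy(class_ids, labels):
        # Accuracy of the best deterministic classifier that assigns one label
        # per equivalence class (isomorphism class or 1-WL class).
        by_class = defaultdict(list)
        for c, y in zip(class_ids, labels):
            by_class[c].append(y)
        correct = sum(max(Counter(ys).values()) for ys in by_class.values())
        return correct / len(labels)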
GIN alignment with 1-WL
We investigate how well GIN representations align with 1-WL colorings in practice. We choose GIN because, by construction, it is theoretically as powerful as 1-WL <cit.>. We follow the default experimental settings of <cit.> (c.f., <ref> for more details). For each graph-level benchmark, we jointly train a GIN encoder and MLP decoder <cit.>. For graph pairs G, G', we compute the cosine similarity (∈ [-1, 1]) of their GIN representations h_ G, h_ G' (i.e., the output of the encoder). We also compute the WL subtree kernel cosine similarity (∈ [0, 1]) of G, G' <cit.>, after four iterations of color refinement (to match the number of GIN layers). The WL kernel measures similarities in node colorings after 1-WL refinement, and upon convergence, is equivalent to comparing the number of shared subtrees between graphs <cit.>.
Figures <ref>, <ref>, and <ref> show the distributions D_WL^same and D_WL^different of WL kernel similarities of graph pairs with different vs. the same labels, and similarly, the distributions D_GIN^same and D_GIN^different of GIN encoder representation similarities. For almost all the benchmarks (except PTC_MR), D_WL^same and D_WL^different are much less divergent than D_GIN^same and D_GIN^different, which means that the mutual information (MI) of the GIN representation similarities with label matches is higher than the MI of the WL kernel similarities with the label matches. This is confirmed by the MI calculations (discretized over 20 bins) displayed in the figures. Ultimately, the figures show that GIN learns representations that are more optimal with respect to the labels than WL-aligned. This could be because node features are significantly more informative for label prediction than graph structure for many tasks <cit.>. Furthermore, this ensures that Decoder need not be overly complex to predict the labels. Complementing this empirical finding, survey participants indicated that they believe that GNNs only sometimes (3.200 ± 0.676) learn 1-WL aligned representations ([Q24]Q24).
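The mutual-information comparison can be sketched as follows, assuming precomputed embeddings (GIN representations or WL subtree kernel feature vectors), scikit-learn's mutual_info_score, and the 20-bin discretization mentioned above; the binning and pairing choices shown here are assumptions, not the exact analysis code.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import mutual_info_score

def similarity_label_mi(embeddings, y, n_bins=20):
    """MI between binned pairwise cosine similarities and a same-label /
    different-label indicator for the corresponding graph pairs."""
    h = np.asarray(embeddings, dtype=float)
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    sims, same = [], []
    for i, j in combinations(range(len(y)), 2):
        sims.append(float(h[i] @ h[j]))   # cosine similarity of the pair
        same.append(int(y[i] == y[j]))    # do the labels match?
    bins = np.histogram_bin_edges(sims, bins=n_bins)
    return mutual_info_score(same, np.digitize(sims, bins))
```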
Takeaway Based on our analysis, graph ML practitioners would benefit from recognizing that: (1) k-WL may not be aligned with our task, and we should devise other measurements of expressive power; or (2) in practice, k-WL does not limit GNN performance on our benchmarks, and we should construct more rigorous benchmarks for assessing k-WL expressive power.
§.§ Consequential validity
What are the social consequences of measuring expressive power by comparing to k-WL? Such measurements have likely contributed to a snowball effect of graph ML research universalizing the specific conceptualization of expressive power encoded by k-WL.
This snowball effect has helped develop mathematical thinking, guided research into poorly-understood properties of GNNs, and aided in data modeling <cit.>. However, it has also likely led graph ML practitioners to: (1) neglect task-driven formulations of expressive power (3.444 ± 1.042 ([Q17]Q17)); and (2) prioritize expressive power (4.167 ±
1.098) over ethical aspects of functionality like efficiency (4.556 ±
1.149), robustness (4.556 ± 1.097), fairness (3.167 ±
1.543), and privacy (2.667 ± 1.815) ([Q27]Q27). In fact, survey participants rated the ethical implications of expressive power as 2.444 ±
1.423 ([Q12]Q12).
Benchmarks with the fraction of 1-WL-identifiable nodes.
# nodes #_1 ( E_WL^3)
Credit 30000 29367 (97.89%)
Credit (Age ≤ 25) 27315 26720 (97.82%)
Credit (Age > 25) 2685 2649 (98.66%)
German 1000 1000 (100%)
However, k-WL can have implications for individual fairness and adversarial robustness (c.f., <ref>). k-WL can also have implications for privacy. Consider the graphs from Figure <ref>. ( G_1, G_2) and ( G_1, G_3) are pairs of neighboring graphs because they differ in one edge; in contrast, ( G_1, G_4) are not neighboring graphs. Let N ( S) be the set of all neighboring graphs in a task's sample space S. Then, for graph-level tasks, we define the sensitivity Δ ( A, S) of A over S as max_( G, G') ∈ N ( S)‖ h_ G - h_ G'‖_1. Often, to obtain ϵ-differential privacy with respect to this neighboring relation, practitioners add Lap(Δ( A, S) / ϵ)^d_out noise to h_ G <cit.>. As discussed in <ref>, Δ ( A, S) can be large: considering ( G_1, G_2) and ( G_1, G_3), adding or removing an edge can greatly change the representations produced by a 1-WL-powerful GNN; intuitively, this is problematic if these graphs were ego social networks, as changes in graph representations could undesirably reveal the existence of high-degree nodes or symmetries in users' social circles to an adversary. As such, practitioners may need to add significant noise to achieve private representations.
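A sketch of the noise-addition step described above, assuming numpy and that an upper bound on the L1 sensitivity has already been estimated; the function and argument names are illustrative.

```python
import numpy as np

def privatize_representation(h_G, sensitivity, epsilon, rng=None):
    """Release a graph representation under the Laplace mechanism, with the
    noise scale calibrated to an upper bound on max ||h_G - h_G'||_1 over
    neighboring graphs. The larger that bound is for a 1-WL-powerful encoder,
    the more noise is required for a given epsilon."""
    rng = np.random.default_rng() if rng is None else rng
    return h_G + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=h_G.shape)
```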
To empirically examine the ethical implications of 1-WL, we use the Credit and German networks from <cit.>. (For more information on these benchmarks, consult <ref>.) Table <ref> shows that all or almost all nodes in Credit and German are uniquely identifiable after just three iterations of 1-WL, which poses a privacy risk. This also presents individual fairness and robustness risks because it suggests that 1-WL colorings are sensitive to small feature and structural differences between nodes. Furthermore, Table <ref> suggests group fairness concerns with respect to age for Credit: (1) nodes with age > 25 are uniquely identifiable at a higher rate, which means that a majority-vote lookup table classifier will perform better on them; (2) however, they also have greater privacy risks.
§ TOWARDS BETTER DEFINITIONS AND MEASUREMENTS OF EXPRESSIVE POWER
Addressing the validity issues in <ref> and <ref>, we advocate for extensional, rather than intensional, definitions and measurement of expressive power <cit.>. An intensional definition (e.g., k-WL) comprises theoretical properties of GNNs, and is often stated in terms of logic. In contrast, an extensional definition of expressive power is grounded in dataset and metric-based evaluation: a GNN is considered “expressive” in the context of a task if it performs well on a benchmark for the task. We argue that extensional expressive power is:
* Task-driven: Extensional definitions reduce the contestedness of expressive power, as expressive power will be defined in the context of a specific task, thus drawing from domain knowledge. Furthermore, expressive power will be measured based on task-specific (i.e., accepted) metrics, so extensional measurement will improve convergent validity.
* Practical: Extensional measurement encompasses the complex relationships between the innumerable elements of graph ML (e.g., architecture, training, data structure), thereby improving content validity. Additionally, extensional measurement ensures that higher measurements of expressive power correspond to improved task performance, and promotes generalization.
* Trustworthy: Extensional measurement encourages empirically examining potential tradeoffs between expressive power and ethical aspects of graph ML (e.g., robustness, fairness, privacy). However, there may be concerns about its reliability and face validity (c.f., <ref>).
Importantly, with extensional expressive power, conceptualizations of expressive power and operationalizations of its measurement are more closely linked. We draw from <cit.> to provide guiding questions to facilitate the creation of GNN expressive power benchmarks (Figure <ref>). We also critically expand upon the distinction between graph tasks and benchmarks in <ref>.
In our survey, participants rated the use of expressive power to inform architecture design as 4.333 ± 1.085, but benchmark design as 3.833 ± 1.249; we hope extensional measurement shifts these findings. Extensional measurement is already being practiced; for example, <cit.> carefully designs the CLRS benchmark to assess GNN expressive power in the context of algorithmic reasoning tasks (e.g., shortest path).
§ CONCLUSION
We uncover misalignments between practitioners' conceptualizations of expressive power and k-WL through a systematic analysis of the reliability and validity of k-WL. Our survey (n = 18) reveals that practitioners' conceptualizations of expressive power differ. In addition, our graph-theoretic analysis shows that k-WL can be misaligned with accepted measurements for a task, does not guarantee isometry, and can be antithetical to generalization.
Complementarily, our benchmark auditing reveals that k-WL can be irrelevant to solving many real-world graph tasks, and detrimental to trustworthiness.
Finally, we argue for extensional measurements of expressive power and propose guiding questions for constructing benchmarks for such measurements. We detail limitations and future work in <ref>.
We thank Shichang Zhang and Soledad Villar for their insightful and valuable feedback on this paper.
§ CHECKLIST
* For all authors...
* Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
* Did you describe the limitations of your work?
See Section <ref>.
* Did you discuss any potential negative societal impacts of your work?
See Section <ref>.
* Have you read the ethics review guidelines and ensured that your paper conforms to them?
However, we were unable to compensate survey participants due to internal bureaucratic hurdles.
* If you are including theoretical results...
* Did you state the full set of assumptions of all theoretical results?
* Did you include complete proofs of all theoretical results?
* If you ran experiments (e.g. for benchmarks)...
* Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
In the supplementary material.
* Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
See Section <ref>.
* Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
We report error bars over 10 GIN training runs in Figures <ref>, <ref>, and <ref>.
* Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
We mention the compute used in Section <ref>.
* If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
* If your work uses existing assets, did you cite the creators?
We cite the original creators of all the benchmarks we use in Table <ref> and Section <ref>.
* Did you mention the license of the assets?
We mention the license in Section <ref>.
* Did you include any new assets either in the supplemental material or as a URL?
We include the code for our experiments (with dependencies and instructions) in the supplementary material.
* Did you discuss whether and how consent was obtained from people whose data you're using/curating?
We discuss consent in Section <ref> (survey participants) and Section <ref> (benchmarks).
* Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
We discuss this in Section <ref>.
* If you used crowdsourcing or conducted research with human subjects...
* Did you include the full text of instructions given to participants and screenshots, if applicable?
We provide the full text of instructions in Section <ref>.
* Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
* Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
We were unable to compensate survey participants due to internal bureaucratic hurdles.
Appendix
Contents in Appendices:
* In Appendix <ref>, we overview additional related works.
* In Appendix <ref>, we provide additional details on our survey methodology.
* In Appendix <ref>, we provide all our survey questions and aggregate responses.
* In Appendix <ref>, we provide the remainder of our reliability and validity analysis of the WL test.
* In Appendix <ref>, we critically expand upon the distinction between graph ML tasks and benchmarks.
* In Appendix <ref>, we overview the benchmarks we audit.
* In Appendix <ref>, we include pairs of graphs from MUTAG that 1-WL cannot distinguish.
* In Appendix <ref>, we provide the adjusted mutual information (AMI) scores for all the benchmarks we audit.
* In Appendix <ref>, we discuss the settings we used to train and evaluate GIN.
* In Appendix <ref>, we provide our GIN and 1-WL alignment results for all the benchmarks we audit.
* In Appendix <ref>, we detail the limitations of our work and future work.
§ ADDITIONAL RELATED WORKS
§.§ Beyond k-WL
<cit.> shows that, while k-WL would suggest that GNNs have poor discriminative power, GNNs can in fact distinguish between any graphs whose spectra differ in at least one eigenvalue. <cit.> augments k-WL to be invariant to physical symmetries (e.g., rotation, reflection). <cit.> argues that k-WL suffers from a “lack of a natural interpretation and high computational costs,” and proposes a novel hierarchy of isomorphism tests. <cit.> contends that k-WL neglects to capture the expressive power of GNNs in the spectral domain, which provides a complementary perspective to GNN capabilities in the spatial domain. <cit.> introduces new expressive power metrics based on graph biconnectivity, and finds that existing GNNs are not expressive with respect to these metrics.
§.§ Representational limits
Numerous prior works have sought to characterize the complexity, hypothesis space, and representational limits of GNNs.
<cit.> studies the ability of certain message-passing GNNs to capture FOC_2 logical queries over nodes. <cit.> shows that spectral GNNs can learn arbitrary spectral filters under certain conditions. <cit.> argues that every permutation equivariant GNN can be expressed with pairwise message passing. <cit.> comments on connections between permutation-invariant universal function approximators on graphs and sets.
<cit.> logically characterizes the queries that polynomial-size bounded-depth GNNs can theoretically answer, proving that they are in the circuit complexity class TC^0. <cit.> studies the representation and extrapolation capacities of GNNs as their input size increases.
§.§ Auditing graph benchmarks
We build on prior and concurrent work that explores the extent to which k-WL expressive power is required to perform well on benchmarks (<ref>). For instance, our experiments confirm prior findings that large fractions of graph ML benchmarks comprise isomorphic graphs <cit.>. <cit.>, which is concurrent work, counts the fraction of 1-WL distinguishable graphs in benchmarks (which helps provide an upper bound on GNN performance), and finds this upper bound is often close to 100%. This suggests that GNNs are not limited by expressive power in practice <cit.>. Notably, <cit.> only considers graph isomorphisms, while our work also encompasses node isomorphisms and further studies GIN-WL alignment and the ethical implications of 1-WL.
§ ADDITIONAL DETAILS ON SURVEY METHODOLOGY
Participant guidance We ask participants to list real-world applications of graph ML with which they are familiar to ground their responses to questions about the relevance of expressive power to real-world graph tasks. Furthermore, we use a scale of 1–6 (with articulations of what 1 and 6 mean) to: a) capture the distribution of participants' responses with sufficient granularity, b) impel participants to lean towards one side of the scale, and c) improve the consistency of how participants interpret answer choices.
Quality control We piloted our survey with a couple of graph ML practitioners in order to identify potential problems with the clarity of our questions. We further provided participants with the opportunity to optionally justify their responses or indicate disagreement or a lack of clarity with any definitions or questions; only a couple of participants did so. We intentionally included a few free-response questions to deter and remove spammers from our sample.
Participant recruitment and IRB
We shared our survey as a Google form on Twitter, graph ML-focused Slack workspaces, and mailing lists and Slack channels for artificial intelligence (AI) affinity groups like Queer in AI <cit.> and Women in Machine Learning[<https://wimlworkshop.org/>]. We additionally shared the survey at a tech company via an internal communication platform. In all cases, we requested participants[Following <cit.>, by “practitioners,” we refer to academic and industry researchers, applied scientists, and engineers. Participants must have experience theoretically characterizing the expressive power of a GNN, or empirically evaluating GNNs on real-world tasks.] to share the survey with other relevant groups in order to perform snowball sampling <cit.>. Participants were not compensated due to internal bureaucratic hurdles. We obtained informed consent (refer to <ref>) from all participants, no personally identifiable or sensitive information was collected, and the survey was reviewed by an internal privacy team at a tech company.
§ SURVEY QUESTIONS AND RESPONSES
* Indicates required questions.
Probing the Validity of Measurements of Graph Neural Network Expressive Power
The goal(s) of this project is to:
* investigate how clearly and consistently graph machine learning practitioners conceptualize graph neural network expressive power;
* uncover how practitioners perceive the validity of common measurement models for expressive power;
* inquire into the extent to which practitioners’ conceptualizations of expressive power and measurements thereof are driven by and impact real-world applications.
Survey results will be presented in aggregate in the form of a research paper, which will be submitted to a conference in 2023.
This survey will take approximately 5-10 minutes. We thank you for your participation.
Researchers involved: [REDACTED FOR ANONYMITY]
If you have any questions or concerns, please contact [REDACTED FOR ANONYMITY].
§.§ Consent
[Q1] Do you consent to the terms of this survey? *
Participants must be at least 18 years of age and will not be compensated for their participation. No personally identifiable or sensitive information will be collected. This survey has been reviewed by an internal privacy review team.
Yes
No [if selected, survey branches to final section]
§.§ Background
[Q2] Do you have any experience theoretically characterizing the expressive power of a graph neural network? *
Yes
No
[Q3] Do you have any experience empirically evaluating graph neural networks on real-world tasks or datasets? *
Yes
No
[Q4] List some real-world applications of graph machine learning with which you are familiar. *
* Open text field
[Q5] Briefly describe the type of graph machine learning work that you do. *
Please be mindful not to bring up any identifying or sensitive information about yourself or third-parties.
* Open text field
[Q6] Please select all the options that apply to you. *
□ I work on deployed systems
□ I am an industry practitioner (not researcher)
□ I am an industry researcher
□ I am an academic researcher
□ I am a student researcher
§.§ Conceptualization Questions
We consider the following system setup:
Input Graph(s) → Graph Neural Network (GNN) Encoder → Graph Representation(s) → Head (e.g., linear model, MLP) → Prediction
Furthermore, we only consider graph-level prediction tasks (e.g., molecular property prediction, graph similarity prediction), wherein the input is a whole graph(s). To be clear, we do not consider node-level prediction tasks.
[Q7] How do you define expressive power? *
Consider what properties you would like an expressive GNN encoder to satisfy. Try to avoid using heuristics.
* Open text field
[Q8] How clearly defined is the expressive power of graph neural networks? *
6 (clearly defined) means that expressive power has definitions that are clearly articulated and understood by the graph ML community (even if these definitions are not consistent). In contrast, 1 (unclearly defined) means that expressive power is unclear to most people in the community.
1 2 3 4 5 6
[Q9] How consistently defined is the expressive power of graph neural networks? *
6 (consistently defined) means that expressive power is consistently defined by the graph ML community. In contrast, 1 (inconsistently defined) means that expressive power is understood differently from person to person in the community.
1 2 3 4 5 6
[Q10] How relevant are common conceptualizations of expressive power to the real-world applications of graph machine learning that you listed in the Background Questions section? *
6 (directly relevant) means that a graph neural network with higher expressive power is guaranteed to improve system performance on most real-world applications. 1 (not relevant) means that a graph neural network with higher expressive power has a low chance of increasing system performance on real-world applications.
1 2 3 4 5 6
[Q11] Please select all of the following options that describe how you conceptualize expressive power: *
Definitions:
The ARCHITECTURE of a graph neural network encoder comprises the operations in each layer and the types of activations between layers, but not the number of layers or parameter values.
An INSTANTIATION of an ARCHITECTURE comprises a specific number of layers and particular parameter values.
The SAMPLE SPACE of a task refers to the set of all possible graph-label pairs on which a MODEL may be evaluated.
The DATA DISTRIBUTION of a task refers to the distribution over the task's SAMPLE SPACE that characterizes the probability of encountering different graph-label pairs.
□ Different (i.e., non-isomorphic) graphs can be mapped to uniquely identifiable representations
□ Similar graphs can be mapped to proportionately similar (i.e., isometric) representations
□ Expressive power is affected by ARCHITECTURE
□ Expressive power is affected by an INSTANTIATION
□ Expressive power is affected by the SAMPLE SPACE of a task
□ Expressive power is affected by the DATA DISTRIBUTION of a task
[Q12] How much does expressive power have ethical implications? *
6 (large extent) means that a graph neural network with higher expressive power has direct and critical ethical implications for graph learning tasks. 1 (small extent) means that expressive power has no clear connection to ethics.
1 2 3 4 5 6
[Q13] Without changing your previous answer, how do you now define expressive power? *
* Open text field
[Q14] Do you have additional thoughts in response to the questions in this section?
This can include justifications of your responses or a lack of clarity on any questions. Please be mindful not to bring up any identifying or sensitive information about yourself or third-parties.
* Open text field
§.§ Operationalization Questions
[Q15] Please rank the following ways of measuring expressive power in order of how commonly you think they are employed in relevant literature: *
6 means most commonly, 1 means least commonly.
* Comparing the non-isomorphic graphs that an architecture can theoretically distinguish to those that the WL test (or higher-order variants) can distinguish
* Characterizing the ability of an architecture to theoretically solve various combinatorial problems (e.g., subgraph detection, graph coloring, minimum vertex cover)
* Describing the complexity of logical queries that an architecture can theoretically represent
* Describing the invariance and equivariance properties of an architecture
* Awareness of an architecture of simplicial and cell complexes
* Comparing the test performance of an instantiation of an architecture on a benchmark to the performance of models with previously-proposed architectures
[Q16] Please list additional ways of measuring expressive power with which you are familiar (if any).
[Q17] How much do papers use real-world applications of graph machine learning to motivate methods of measuring expressive power? *
6 (always use) means all measurements of expressive power used by papers are inspired by real-world applications of graph machine learning. 1 (never use) means no measurements of expressive power used by papers are inspired by real-world applications.
1 2 3 4 5 6
[Q18] How informative are different ways of measuring expressive power for predicting the performance of graph machine learning models on real-world applications? *
6 (informative) means that measurements of expressive power are the most useful quantity for predicting model performance on real-world applications. In contrast, 1 (not informative) means that measurements of expressive power provide no useful information for predicting model performance on real-world applications.
1 2 3 4 5 6
[Q19] Do you have additional thoughts in response to the questions in this section?
This can include justifications of your responses or a lack of clarity on any questions. Please be mindful not to bring up any identifying or sensitive information about yourself or third-parties.
* Open text field
§.§ Weisfeiler-Lehman Isomorphism Test Introduction
We now focus on a specific method by which the measurement of expressive power is operationalized: via comparison to the WL test. In this method, we compare the non-isomorphic graphs that an architecture can theoretically distinguish to those that the WL test (or higher-order variants) can distinguish.
For example, we can say that Graph Convolutional Network (GCN) is at most as expressive as the 1-WL test (i.e., GCN can theoretically distinguish as many non-isomorphic graphs as the 1-WL test can).
[Q20] Are you familiar with the WL test? *
Yes
No [if selected, survey skips next section]
§.§ WL Test Questions
[Q21] How much do theoretical results of expressive power via comparison to the WL test (or higher order variants) appear convincing? *
6 (convincing) means that such proofs generally employ realistic settings and assumptions about graph neural networks, as well as solid reasoning. In contrast, 1 (not convincing) means that such proofs generally use unrealistic or debatable settings or assumptions, or flawed reasoning.
1 2 3 4 5 6
[Q22] Does the WL test help provide a useful upper bound on the accuracy of any task? *
5 = Always
4 = Almost always
3 = Sometimes
2 = Almost never
1 = Never
[Q23] Does the 1-WL test produce colorings or hashes that are sufficient to solve common graph learning tasks? *
5 = Always
4 = Almost always
3 = Sometimes
2 = Almost never
1 = Never
[Q24] Do graph neural networks in practice learn representations that align with the colorings or hashes produced by the WL test? *
5 = Always
4 = Almost always
3 = Sometimes
2 = Almost never
1 = Never
[Q25] Can graph neural networks ever learn representations that align with the colorings or hashes produced by the WL test? *
5 = Always
4 = Almost always
3 = Sometimes
2 = Almost never
1 = Never
[Q26] Do you have additional thoughts in response to the questions in this section?
This can include justifications of your responses or a lack of clarity on any questions. Please be mindful not to bring up any identifying or sensitive information about yourself or third-parties.
* Open text field
§.§ In-Practice Questions
[Q27] Please rank the following criteria for graph neural networks by how you prioritize them in practice. *
1 = least, 6 = most
* Expressive power
* Generalization
* Efficiency
* Fairness
* Privacy
* Robustness
[Q28] How much do you use expressive power to
inform architecture design? *
6 (always) means that you always design new graph neural network architectures with expressive power in mind. 1 (never) means that you never consider expressive power when designing new architectures.
1 2 3 4 5 6
[Q29] How much do you use expressive power to inform benchmark design? *
6 (always) means you always design new benchmarks with expressive power in mind. 1 (never) means that you never consider expressive power when designing new benchmarks.
1 2 3 4 5 6
[Q30] Do you have additional thoughts in response to the questions in this section?
This can include justifications of your responses or a lack of clarity on any questions. Please be mindful not to bring up any identifying or sensitive information about yourself or third-parties.
* Open text field
§.§ Feedback
Thanks again for your participation in our survey!
[Q31] Do you have any comments or feedback on the questions in this survey?
Please be mindful not to bring up any identifying or sensitive information about yourself or third-parties.
* Open text field
§ REMAINDER OF RELIABILITY AND VALIDITY ANALYSIS
§.§ Test-retest reliability
To what extent do measurements of expressive power obtained via comparison to k-WL vary at different points in time? Such measurements comprise the GNN architecture, which does not change over time, and k-WL, which is a deterministic algorithm. Hence, comparison to k-WL has test-retest reliability.
§.§ Face validity
To what extent do measurements of expressive power obtained via comparison to k-WL appear plausible? Such measurements usually have face validity because of the step-by-step theoretical justifications in proofs thereof (e.g., <cit.>). Survey respondents rated the apparent validity of proofs of k-WL expressive power as 4.067 ± 1.438 ([Q21]Q21).
§.§ Content validity (substantative validity)
To what extent does comparison to k-WL include all but only those observable properties thought to be related to expressive power? Expressive power, as it is most commonly conceptualized ([Q11]Q11),
is solely an architectural property. Thus, because comparison to k-WL only involves GNN architecture, and does not seem to consider extraneous observable properties, comparing to k-WL has substantive validity.
§.§ Content validity (structural validity)
To what extent does comparison to k-WL “capture the structure of the relationships” between architecture and expressive power? In practice, comparing the pairs of non-isomorphic graphs or nodes that an architecture can theoretically distinguish to those distinguishable by k-WL is only useful if we assume that an instantiation of this architecture has parameter values that yield optimal discriminatory power. That is, k-WL does not consider the parameter values an architecture instantiation learns in reality, which indisputably affects the instantiation's ability to distinguish non-isomorphic graphs or nodes. Moreover, comparing to k-WL often does not consider the number of layers in an architecture (or that this number is finite) or layer dimensions; these properties also undeniably affect expressive power measurement, as k-WL converges in a number of iterations that depends on both the number of nodes and edges in the input graph. Furthermore, expressive power can heavily depend on the input graph; for instance, per the oversquashing effect, certain graphs (e.g., planar graphs) can cause GNNs to exhibit pathological behaviors regardless of which architecture is chosen <cit.>. Finally, in the real world, graph features and structure are not always fully observable, but comparing to k-WL relies on them. As such, comparing to k-WL has poor structural validity.
§.§ Discriminant validity
To what extent is comparing to k-WL capturing aspects of constructs besides expressive power? Because k-WL is tightly coupled with how graph ML practitioners predominantly conceptualize expressive power, comparing to k-WL appears to have discriminant validity.
§.§ Intrinsic limits of k-WL
Similar graphs may not have similar labels (e.g., mismatched graphs <cit.>). For example, consider the extreme case of a task where for some k, for each set of non-isomorphic n-node graphs G_1, …, G_m (where m ≤ 2^{n(n-1)/2} and is thus finite) that cannot be distinguished by k-WL, we assign them different labels 1, …, m.
Suppose we have a GNN architecture that is at most as expressive as k-WL. Because k-WL cannot distinguish these graphs, neither can an instantiation of the architecture, and hence any GNN with this architecture will necessarily score 0% accuracy on this task.
It is also critical to account for discriminatory power over the data distributions for particular tasks. For instance, consider the extreme case of a task whose sample space is exactly the set of non-isomorphic graphs that are not distinguishable by k-WL, for some k. Suppose we have a GNN architecture that is at most as expressive as k'-WL, for any k' ≤ k. k'-WL may be reasonably powerful over all possible graphs; but, because k'-WL cannot distinguish any of the graphs in the task's sample space, neither can an instantiation of the architecture, and hence any GNN with this architecture will necessarily score 0% accuracy on this task.
The intrinsic limitations of k-WL threaten the hypothesis validity of comparing to k-WL, and further motivate the need to consider task-specific requirements.
§.§ Individual fairness & robustness
In individual fairness and adversarial robustness, we desire that our GNN A produces representations that are not overly sensitive to (small) changes in the input (i.e., is Lipschitz continuous) <cit.>. More formally, for graph-level tasks, we would like that ∀ G, G' ∈ S, ‖ h_ G - h_ G'‖_p ≤ L · d( G, G'), where S is our task sample space, ‖·‖_p is the L_p-norm, L is a global Lipschitz constant for A, and d is a distance metric over graphs (e.g., edit distance, random walk kernel similarity). Similarly, for node-level tasks, we would like that ∀ i, j ∈ V, ‖ h_i - h_j‖_p ≤ L_ G· d_ G (i, j), where d_ G is a distance metric over nodes in G (e.g., shortest path). However, as discussed in <ref>, it is unknown if such global Lipschitz constants exist when S and G are not finite. Furthermore, d( G, G') is not always intuitively predictive of ‖ h_ G - h_ G'‖_p (for graph-level tasks) and d_ G (i, j) of ‖ h_i - h_j‖_p (for node-level tasks).
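For finite benchmarks, the tightest node-level constant can at least be estimated empirically; the sketch below (assuming networkx, numpy, and node embeddings indexable by node id) returns the smallest L_G consistent with the observed representations, and large values flag the fairness and robustness concerns discussed above.

```python
import numpy as np
import networkx as nx
from itertools import combinations

def empirical_node_lipschitz(G, node_embeddings, p=2):
    """Smallest L_G with ||h_i - h_j||_p <= L_G * d_G(i, j) for all reachable
    node pairs, where d_G is the shortest-path distance."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    worst = 0.0
    for i, j in combinations(list(G.nodes), 2):
        if j not in dist[i]:
            continue  # nodes in different connected components
        gap = np.linalg.norm(
            np.asarray(node_embeddings[i]) - np.asarray(node_embeddings[j]), ord=p
        )
        worst = max(worst, gap / dist[i][j])
    return worst
```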
§ GRAPH TASKS AND BENCHMARKS
Tasks vs. benchmarks A task is a graph-related skill or competency that we want a GNN to demonstrate in the context of a specific input–output format. In contrast, a benchmark attempts to measure performance on a task by grounding it in a graph domain and instantiating it with a concrete dataset and evaluation metric <cit.>.
Benchmark validity There are many aspects to benchmark validity (including but not limited to):
* External validity <cit.>:
* Ecological validity: “to what extent does an artificial situation (constrained social media platform [or synthetic datasets]) properly reflect a broader real-world phenomenon?” (e.g., <cit.>)
* Temporal validity: “to what extent do constructs change over time and invalidate previous conclusions”?
* Construct validity <cit.>:
* Content validity: How well conceptualized is the task on which performance is being measured? How well-aligned is the benchmark with the task specification?
* Convergent validity: Does the benchmark actually measure what we care about?
* Discriminant validity: Does the benchmark measure things besides what we care about?
* Consequential validity: Should the benchmark even be used (e.g., considering societal implications)?
Benchmarks often have pitfalls that affect their measurement of real-world task performance <cit.>. For example, benchmarks can have noise or poor data diversity. <cit.> shows that popular graph ML benchmarks can consist of as low as 20% non-isomorphic graphs. Benchmarks may also capture skewed substructure frequencies, and suffer from extreme sparsity and large numbers of isolated nodes due to lossy link observation. Benchmarks may also employ poorly-aligned or reductive performance metrics. Graph structure may not even be required to perform well on certain benchmarks. In many cases, we would like graph ML benchmarks to measure the extent to which GNNs can leverage meaningful combinations of node features and graph structure (e.g., to answer, are two graphs isomorphic?). To understand if benchmarks assess GNNs' ability to harness node features vs. graph structure, <cit.> independently perturbs node features and graph structure, thereby extending partial-input baselines <cit.> to the graph domain.
Benchmarks on which it is possible to achieve high accuracy without certain node features or parts of the graph structure (called “reduced benchmarks”) are considered “easy” <cit.>; this is because reduced benchmarks suggest that a combination of node features and graph structure is not necessary to do well. However, for tasks where both features and structures are actually necessary, doing well on corresponding “reduced benchmarks” implies the existence of certain spurious correlations. Consider natural language inference (NLI) (i.e., does a premise entail a hypothesis?): both the premise and hypothesis should ideally be necessary to solve the task, so model success on premise-only or hypothesis-only baselines would suggest that the benchmark has poor discriminant validity <cit.>. That is, it is unclear to what extent the benchmark measures the ability of a model to “cheat” using spurious correlations vs. actually perform the desired reasoning. But, just because a model performs well without graph structure, this doesn't entail that the model doesn't leverage graph structure when it is provided <cit.>.
In contrast to “easy” benchmarks, benchmarks on which models fail without certain node features or parts of the graph structure are considered “difficult.” However, failures on the partial-input baselines proposed by <cit.> do not indicate that models are not leveraging other spurious correlations (perhaps simple combinations of node features and graph structure). There are also infinitely many possible partial-input baselines. Consider the task of counting the number of size-4 maximal cliques comprising only purple nodes in a graph, as well as a benchmark for this task which has only size-4 maximal cliques. Furthermore, consider a GNN that can only count 3-cliques, i.e., triangles (but not 4-cliques or other larger cliques). With node color information (but without graph structure), the model may output the total number of purple nodes divided by 4 and perform poorly. With graph structure (but without node color information), the model may output one quarter of the number of 3-cliques in the graph (each 4-clique contains four 3-cliques) and perform poorly. However, with both node color information and graph structure, the model may output one quarter of the number of 3-cliques containing only purple nodes and achieve perfect performance on the benchmark, thereby deceiving us into thinking that it has learned to perform the task.
§ AUDITED GRAPH ML BENCHMARKS
§.§ Benchmarks used in <ref>
Our usage of these benchmarks complies with their license (where applicable); not all the benchmarks have a license. While these benchmarks are widely used, we did not obtain explicit consent from any data subjects whose data the benchmarks may contain. To the best of our knowledge (via manual sampling and inspection), the benchmarks do not contain any personally identifiable information or offensive content.
§.§ Benchmarks used in <ref>
The Credit network consists of 30,000 nodes representing individuals, with edges between them indicating similar spending and payment patterns. Each node has 13 features (e.g., education, credit history), with an average degree of 95.79 ± 85.88. The corresponding task is to predict whether an individual will default on their credit card payment, and the sensitive groups are those 25 years old or younger and those above the age of 25. The German network comprises 1,000 nodes representing clients in a German bank who are connected if they have similar credit accounts. Each node has 27 features (e.g., loan amount, account-related features), with an average degree of 44.48 ± 26.51. The corresponding task is to predict whether a client has good or bad credit risk, and the groups are men and women. For both benchmarks, we do not include group membership as a feature.
To the best of our knowledge (via manual sampling and inspection), neither benchmark contains personally identifiable information or offensive content. We use <cit.>'s data and data loading code[<https://github.com/chirag126/nifty>] in accordance with its MIT license. We refrain from using the Recidivism network from <cit.> so as not to support the development of carceral technology.
§ NON-DISTINGUISHABLE GRAPH PAIRS FROM MUTAG
§ ADJUSTED MUTUAL INFORMATION (AMI)
§ GIN EXPERIMENTAL SETTINGS
Since GIN relies on node features, for IMDB-BINARY and IMDB-MULTI, we set the features as a one-hot encoding of the node degrees, and for REDDIT-BINARY, we set the features as a constant. For the encoder, we use 4 layers, 64 hidden units per layer, and global mean pooling; in each layer, ϵ is trainable and h_Θ is a 2-layer, 64-unit MLP with ReLU nonlinearities and BatchNorm. For the decoder, we use 2 layers, 64 hidden units per layer, and dropout with a ratio of 0.5. For training, we employ 100 epochs, a minibatch size of 128, a learning rate (LR) of 0.01, a LR decay factor of 0.5, a LR decay step size of 50, and no weight decay. We train on a random 90% fold, and achieve comparable test accuracies to <cit.>. Error bars are computed over 10 training runs with different random seeds. We train GIN on a single NVIDIA GeForce GTX Titan Xp graphics card with 12196 MiB of memory. All other experiments are run on a single CPU on an internal server.
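For concreteness, a sketch of the encoder/decoder configuration described above, assuming PyTorch Geometric; the class names are ours, and the training loop, LR schedule, and data loading are omitted.

```python
import torch.nn as nn
from torch_geometric.nn import GINConv, global_mean_pool

class GINEncoder(nn.Module):
    def __init__(self, in_dim, hidden=64, num_layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        for i in range(num_layers):
            mlp = nn.Sequential(  # h_Theta: 2-layer, 64-unit MLP with BatchNorm
                nn.Linear(in_dim if i == 0 else hidden, hidden),
                nn.BatchNorm1d(hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.convs.append(GINConv(mlp, train_eps=True))  # trainable epsilon

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = conv(x, edge_index)
        return global_mean_pool(x, batch)  # graph representation h_G

class MLPDecoder(nn.Module):
    def __init__(self, hidden=64, num_classes=2, dropout=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, h_G):
        return self.net(h_G)
```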
§ GIN AND 1-WL ALIGNMENT
§ LIMITATIONS
Survey limitations Our survey sample is skewed towards researchers ([Q6]Q6). A possible reason for this is that expressive power largely remains a subject of theoretical, rather than practical, inquiry. Furthermore, our sample is relatively small; this could be because few practitioners feel qualified to comment on the topic and participants were not compensated. Moreover, our sample is also skewed towards English-speaking practitioners based in the US given that our survey was administered in English and the graph ML community is US-centric. (We did not collect demographic data for our sample.) Nevertheless, our results still highlight that even within skewed samples, there exists weak agreement on how expressive power is conceptualized and differing beliefs about k-WL. We emphasize that our survey results represent a snapshot in time. Practitioners' conceptualizations of expressive power are not static; they evolve over time and are often task-dependent. Furthermore, practitioners' predominant measurement of expressive power via comparison to k-WL has likely homogenized their conceptualizations of expressive power <cit.>.
Theoretical analysis limitations While our theoretical analysis primarily relies on the graphs in Figure <ref>, they may be easily extended to numerous common classes of graphs (e.g., isoregular graphs <cit.>) on which 1-WL is known to fail. Furthermore, our theoretical analysis does not cover the potential for GNNs to approximate isomorphism testing, substructure counting, etc. <cit.>.
Benchmark auditing limitations We do not provide AMI for the node-level benchmarks because their size makes computing it intractable, but we expect similar results. We further do not analyze the alignment of GIN node representations with 1-WL node colorings, as the WL subtree kernel is not applicable to node pairs.
Future work We encourage researchers to run our experiments in <ref> on graph ML benchmarks with even larger graphs (for node-level tasks) and more graphs (for graph-level tasks). We also encourage more work on the trustworthiness of expressive GNNs. As part of this, further efforts must be invested into the construction of graph benchmarks for evaluating trustworthiness; there currently exist limited “natively”-graph real-world datasets with sensitive attributes available (often for privacy reasons) <cit.>, which is evidenced by how none of the social networks in <ref> have node features.
|
http://arxiv.org/abs/2307.05593v2 | 20230710180946 | Quantum Simulation of Lattice QCD with Improved Hamiltonians | [
"Anthony N. Ciavarella"
] | hep-lat | [
"hep-lat",
"nucl-th",
"quant-ph"
] |
IQuS@UW-21-056
[email protected]
InQubator for Quantum Simulation (IQuS), Department of Physics, University of Washington, Seattle, Washington 98195-1550, USA
Quantum simulations of lattice gauge theories are anticipated to directly probe the real time dynamics of QCD, but scale unfavorably with the required truncation of the gauge fields. Improved Hamiltonians are derived to correct for the effects of gauge field truncations on the SU(3) Kogut-Susskind Hamiltonian. It is shown in 1+1D that this enables low chromo-electric field truncations to quantitatively reproduce features of the untruncated theory over a range of couplings and quark masses. In 3+1D, an improved Hamiltonian is derived for lattice QCD with staggered massless fermions. It is shown in the strong coupling limit that the spectrum qualitatively reproduces aspects of two flavor QCD and simulations of a small system are performed on IBM's Perth quantum processor.
Quantum Simulation of Lattice QCD with Improved Hamiltonians
Anthony N. Ciavarella 0000-0003-3918-4110
August 12, 2023
============================================================
§ INTRODUCTION
The real time dynamics of quantum chromodynamics (QCD) are of relevance to a number of phenomena in particle and nuclear physics. These range from collisions of hadrons at high energies to the behavior of quark-gluon plasma in the early universe. The simulation of QCD discretized onto a lattice has enabled non-perturbative calculations of static observables in QCD such as hadron masses and form factors <cit.>.
Quantum computers are expected to be able to directly probe the real time dynamics of quantum field theories. The recent developments in quantum hardware have inspired studies into how to implement simulations of lattice gauge theories on quantum computers. The first quantum simulations of pure non-Abelian lattice gauge theories have been performed in low dimensions on quantum hardware <cit.>. There have also been quantum simulations of non-Abelian gauge theories coupled to matter in one spatial dimension <cit.>. Theoretical studies have been performed into how to scale up these calculations to larger systems <cit.> and large scale simulations have been performed of Abelian gauge theories <cit.>. However, all these approaches to simulating gauge theories require the gauge field to be truncated and scale poorly with the gauge field truncation. Similar problems were found in the classical simulation of lattice gauge theories with the scaling of errors with lattice spacing. These problems were mitigated through the development of improved Symanzik actions with more favorable scaling of errors with lattice spacing <cit.>. It is expected that improved Hamiltonians can be found that mitigate the effects of truncating the gauge field as well.
In this work, improved Hamiltonians are derived for lattice gauge theories through the application of the similarity renormalization group (SRG). SU(3) gauge fields coupled to fermions in 1+1D are used as a case study for the improved Hamiltonians studied. Tensor network simulations are used to demonstrate that the improved Hamiltonians derived in 1+1D correctly reproduce observables on large lattices. An improved Hamiltonian for lattice QCD with two flavors is derived for 3+1D and a small simulation is performed on IBM's quantum processors.
§ 1+1D
§.§ 1+1D Hamiltonian
Gauge theories in one spatial dimension have been used as toy models to study the quantum simulation of gauge theories in higher dimensions as they share many qualitative features and their reduced complexity makes simulation more tractable. Previous simulations on quantum hardware have studied the dynamics of hadrons in one spatial dimension <cit.> and β decay <cit.>.
In this work, the SU(3) Kogut-Susskind Hamiltonian <cit.> with a single flavor of staggered fermions in 1+1D will be used as a toy model to study the effects of gauge field truncation and the performance of improved Hamiltonians. The Hamiltonian describing this theory is
Ĥ = Ĥ_Kin + Ĥ_m + Ĥ_E
Ĥ_Kin = ∑_x,a,b1/2ψ̂_x,a^†Û^a,b_x,x+1ψ̂_x+1,b + h.c.
Ĥ_m = m ∑_x,a (-1)^x ψ̂_x,a^†ψ̂_x,a
Ĥ_E = ∑_x,cg^2/2Ê_x,x+1^cÊ_x,x+1^c ,
where g is the gauge coupling, m is the fermion mass, ψ̂_x,a is the fermion field at site x with color a, Û^a,b_x,x+1 is the parallel transporter on the link between the sites x,x+1 and Ê_x,x+1^c is the chromo-electric field operator. By working with open boundary conditions in the axial gauge, and enforcing Gauss's law, the gauge fields in this theory can be completely integrated out yielding the Hamiltonian
Ĥ = Ĥ_Kin + Ĥ_m + Ĥ_E
Ĥ_Kin = ∑_x,a1/2ψ̂_x,a^†ψ̂_x+1,a + h.c.
Ĥ_m = m ∑_x,a (-1)^x ψ̂_x,a^†ψ̂_x,a
Ĥ_E = ∑_x,cg^2/2(∑_y<xQ̂_y^c) (∑_y<xQ̂_y^c) ,
where Q̂_y^c is the chromo-electric charge at site y defined by
Q̂_y^c = ∑_a,bψ̂^†_y,a T^c_a,bψ̂_y,b ,
where T^c_a,b are the Gell-Mann matrices. By working with this Hamiltonian, we can directly study the untruncated theory and the performance of improved Hamiltonians that correct for the gauge field truncation.
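As a quick numerical consistency check of these definitions (not part of the calculations in this work), the charge operators can be built on a single site in the eight-dimensional color Fock space with a Jordan-Wigner encoding, assuming the convention T^c = λ^c/2; the quadratic Casimir ∑_c Q̂^c Q̂^c should then vanish on the color-singlet states (the empty site and the baryon) and equal 4/3 on the remaining states.

```python
import numpy as np

# Gell-Mann matrices; the generators are taken as T^c = lambda^c / 2.
lam = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex),
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3),
]
T = [0.5 * l for l in lam]

# Jordan-Wigner encoding of the three color modes on one staggered site.
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-mode annihilation operator

def kron(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

psi = [kron(a, I2, I2), kron(Z, a, I2), kron(Z, Z, a)]  # psi_1, psi_2, psi_3

# Color charges Q^c = sum_{a,b} psi_a^dag T^c_{a,b} psi_b and the Casimir.
Q = [sum(T[c][i, j] * psi[i].conj().T @ psi[j]
         for i in range(3) for j in range(3)) for c in range(8)]
casimir = sum(q @ q for q in Q)
print(np.round(np.linalg.eigvalsh(casimir), 6))  # expect 0 (x2) and 4/3 (x6)
```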
§.§ Strong Coupling Expansion m=0
Before the Hamiltonian in Eq. (<ref>) can be mapped onto a quantum computer, it must first be truncated to a finite Hilbert space. Typically, this is done by working in the basis of the chromo-electric field and truncating the field below some cutoff. It has been shown numerically for some small systems <cit.> and rigorously proven in general <cit.> that the error induced by this truncation falls off exponentially with the truncation. The error due to gauge field truncation can be reduced even further by first performing a unitary rotation on the Hamiltonian to reduce the coupling to the higher electric field states and then truncating. In other words, there is a low-energy subspace coupled to a high-energy subspace and one would like to derive an effective field theory description of the low-energy subspace with the high-energy subspace decoupled. Previous work has explored how to perform this decoupling variationally <cit.>. One alternative method to construct such an effective Hamiltonian is Schrieffer-Wolff perturbation theory which systematically constructs approximate unitary transformations that decouple the high-energy subspace <cit.>.
As an example, we will consider the Hamiltonian in Eq. (<ref>) on two staggered sites (one physical site) with massless fermions, truncated at zero electric field. This is the harshest possible truncation that can be applied, and the only physical states left in the Hilbert space are those where sites are unoccupied or have three fermions present forming a color singlet, i.e., a baryon. At this truncation, the Hamiltonian in Eq. (<ref>) is trivial, and there are no dynamics. The states kept in this truncation span the zero electric energy subspace while all states with higher electric energy are being discarded. Using the Schrieffer-Wolff perturbation theory, an effective Hamiltonian for the zero electric energy subspace at leading order is given by
Ĥ_eff = ∑_x 9/(16g^2) Ẑ_xẐ_x+1 + 27/(32g^4) (X̂_x X̂_x+1 + Ŷ_x Ŷ_x+1)
+ 𝒪(g^-6) ,
where X̂_x, Ŷ_x, Ẑ_x are the corresponding Pauli matrices at site x on the lattice. In this basis, spin up states correspond to a site being unoccupied and spin down states correspond to a baryon being present on the site. The details of this derivation and how to systematically derive higher order terms are in Appendix <ref>. In this context, the Schrieffer-Wolff expansion corresponds to performing a strong coupling expansion around the zero electric energy subspace. Note that similar results have been derived for SU(2) lattice gauge theories and the Schwinger model with multiple flavors, showing that they are equivalent to spin systems in the strong coupling limit <cit.>.
The effective Hamiltonian in Eq. (<ref>) requires only a single qubit per site to be mapped onto a quantum computer. The Hamiltonian in Eq. (<ref>) with gauge fields integrated out requires three qubits per site to represent the state of the system. By using this effective Hamiltonian to describe a subspace of the system, the computational resources required are reduced. However, the Schrieffer-Wolff expansion is known to have a finite radius of convergence <cit.>, so this effective Hamiltonian should only be valid over a limited range of couplings. The energy gap for the effective Hamiltonians obtained at different orders in the Schrieffer-Wolff expansion over a range of couplings are shown in Fig. <ref>. Note that both the ground state and first excited state are in the baryon number zero sector. As this figure shows, the effective Hamiltonians obtained through the Schrieffer-Wolff expansion are only valid for strong couplings, and the expansion fails to converge at weak couplings.
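Taking the coefficients of the leading-order effective Hamiltonian above at face value, its gap can be evaluated directly by exact diagonalization of a 4 × 4 matrix; the sketch below is illustrative and is not the code used to produce the figure.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

def strong_coupling_gap(g):
    """Gap between the two lowest states of the leading-order effective
    Hamiltonian on two staggered sites (one physical site)."""
    H = (9 / (16 * g**2)) * np.kron(Z, Z) \
        + (27 / (32 * g**4)) * (np.kron(X, X) + np.kron(Y, Y))
    e = np.linalg.eigvalsh(H)
    return e[1] - e[0]

for g in (1.0, 2.0, 4.0):
    print(f"g = {g}: gap = {strong_coupling_gap(g):.6f}")
```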
§.§ Similarity Renormalization Group m=0
The strong coupling expansion in the previous section was able to yield an improved Hamiltonian to correct for the chromo-electric field truncation for a small system. However, the performance of the improved Hamiltonian was limited by the convergence of the strong coupling expansion. An alternative approach to derive an improved Hamiltonian is the SRG. This method works by choosing a generator of unitary rotations designed to decouple the high energy subspace and then flowing continuously until that subspace is decoupled <cit.>. Explicitly, the Hamiltonian being flowed is parametrized as
Ĥ_s = Ĥ_Λ + V̂_s ,
where Ĥ_Λ determines the energy scales that should be decoupled, V̂_s is the remaining terms in the Hamiltonian and s is the flow parameter. The generator of the SRG flow is traditionally taken to be
η̂_s = [Ĥ_Λ,Ĥ_s] .
The evolution of the Hamiltonian under SRG is given by
dĤ_s/ds = dV̂_s/ds = [[Ĥ_Λ,V̂_s],Ĥ_s]
= [[Ĥ_Λ,V̂_s], Ĥ_Λ] + [[Ĥ_Λ,V̂_s],V̂_s] .
By flowing to s→∞, the low and high energy sectors will be decoupled.
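For small systems the flow equation can simply be integrated numerically; the sketch below (assuming real symmetric matrices and scipy's solve_ivp, with the flow time and tolerances chosen arbitrarily) suppresses the matrix elements connecting sectors with different Ĥ_Λ eigenvalues as s grows.

```python
import numpy as np
from scipy.integrate import solve_ivp

def srg_evolve(H_lambda, H0, s_max=25.0):
    """Integrate dH_s/ds = [[H_Lambda, H_s], H_s] for real symmetric matrices.
    H_lambda plays the role of H_E (the scales to decouple) and H0 is the
    initial Hamiltonian H_lambda + V_0."""
    n = H0.shape[0]

    def comm(A, B):
        return A @ B - B @ A

    def rhs(s, h):
        H = h.reshape(n, n)
        eta = comm(H_lambda, H)   # flow generator eta_s = [H_Lambda, H_s]
        return comm(eta, H).reshape(-1)

    sol = solve_ivp(rhs, (0.0, s_max), H0.reshape(-1), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)
```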
The similarity renormalization group has previously been used in low energy nuclear physics to derive low energy nuclear potentials with improved convergence properties <cit.>. In the following sections, it will be shown how the SRG can be used to derive improved Hamiltonians that correct for the effects of gauge field truncation.
§.§.§ Two Staggered Sites
Once again, the Hamiltonian in Eq. (<ref>) on two staggered sites (one physical site), truncated at zero electric field, will be used as an example to construct an improved Hamiltonian. The generator of the SRG flow will be chosen to decouple states with different electric energies, i.e., Ĥ_Λ = Ĥ_E. The SRG equations can then be solved to recover an improved Hamiltonian of the form
Ĥ_SRG = A(g)(X̂_1 X̂_2 + Ŷ_1 Ŷ_2) + B(g) Ẑ_1Ẑ_2 ,
where A(g) and B(g) are constants computed numerically. Note that this Hamiltonian takes the same form as that derived in the strong coupling expansion in Eq. (<ref>) except now the coefficients multiplying the operators have been determined through SRG instead of a perturbative expansion. The energy gap for this Hamiltonian as a function of the coupling is shown in Fig. <ref>. Unlike the improved Hamiltonian obtained through the strong coupling expansion, the improved Hamiltonian obtained through the SRG suffers from no convergence issues and is able to correctly reproduce the energy gap at all values of the coupling.
§.§.§ Larger Systems
As shown in the previous section, the SRG was capable of producing an improved Hamiltonian that correctly describes the physics of a small system. In practice, improved Hamiltonians will be needed for larger systems. The setup of the SRG used in the previous section does not scale efficiently to larger lattices. This is because as the SRG evolves, the number of operators generated can be exponential in the system size. This can be mitigated through the use of the in-medium similarity renormalization group (IMSRG) which truncates operators in the SRG flow above a certain weight <cit.>. The cost of performing the IMSRG scales exponentially with the size truncation. However the convergence with operator size is also exponential due to the exponential decay of correlations in low energy states.
As an explicit example, improved Hamiltonians for the zero electric field truncation will be derived with IMSRG. The smallest nontrivial operator size truncation is at two staggered sites. The improved Hamiltonian derived with IMSRG at this truncation with coupling g on L staggered sites is
Ĥ_SRG = ∑_x < L A(g)(X̂_x X̂_x+1 + Ŷ_x Ŷ_x+1)
+ B(g) Ẑ_xẐ_x+1 .
The accuracy of the improved Hamiltonians derived through IMSRG at this electric field truncation can be improved by computing the IMSRG flow for larger operator size truncations. In general, one would expect this method to work well when the operator size truncation used is comparable to the correlation length of the system in question. Explicitly, the form of the improved Hamiltonians obtained by truncating at operators defined on three staggered sites takes the form
Ĥ_3,SRG = ∑_x A_1(g) (X̂_x X̂_x+1 + Ŷ_x Ŷ_x+1) + B_1(g) Ẑ_xẐ_x+1
+ A_2(g) (X̂_x X̂_x+2 + Ŷ_x Ŷ_x+2) + B_2(g) Ẑ_xẐ_x+2
where A_i(g), and B_i(g) are constants determined from solving the SRG equations numerically. Note that this takes the same form as Eq. (<ref>) just with the inclusion of next to nearest neighbor hopping. The performance of the improved Hamiltonians can be improved further by truncating the operator size at four staggered sites. The improved Hamiltonian obtained at this truncation takes the form
Ĥ_4,SRG = ∑_x A_1(g) (b̂_x b̂^†_x+1 + b̂^†_x b̂_x+1) +B_1(g) Ẑ_x Ẑ_x+1
+ A_2(g) (b̂_x b̂^†_x+2 + b̂^†_x b̂_x+2) + B_2(g) Ẑ_x Ẑ_x+2
+ A_3(g) (b̂_x b̂^†_x+3 + b̂^†_x b̂_x+3) + B_3(g) Ẑ_x Ẑ_x+3
+ C_1(g) (b̂_x b̂^†_x+1 + b̂^†_x b̂_x+1) Ẑ_x+2Ẑ_x+3
+ C_2(g) (b̂_x b̂^†_x+2 + b̂^†_x b̂_x+2) Ẑ_x+1Ẑ_x+3
+ C_2(g) (b̂_x+1b̂^†_x+3 + b̂^†_x+1b̂_x+3) Ẑ_xẐ_x+2
+ C_3(g) (b̂_x b̂^†_x+3 + b̂^†_x b̂_x+3) Ẑ_x+1Ẑ_x+2
+ C_4(g) (b̂_x+1b̂^†_x+2 + b̂^†_x+1b̂_x+2) Ẑ_xẐ_x+3
+ C_5(g) Ẑ_xẐ_x+1Ẑ_x+2Ẑ_x+3
+ D_1(g) (b^†_x b^†_x+1 b_x+2 b_x+3 + b_x b_x+1 b^†_x+2 b^†_x+3)
+ D_2(g) (b^†_x b_x+1 b_x+2^† b_x+3 + b_x b^†_x+1 b_x+2 b^†_x+3)
+ D_3(g) (b^†_x b_x+1 b_x+2 b_x+3^† + b_x b^†_x+1 b^†_x+2 b_x+3)
where b̂_x = 1/2(X̂_x + i Ŷ_x) is a qubit annihilation operator at site x and A_i(g), B_i(g), C_i(g), and D_i(g) are constants determined from solving the SRG equations numerically.
To test the performance of the improved Hamiltonians derived through SRG, density matrix renormalization group (DMRG) calculations were performed using the C++ iTensor library <cit.> to obtain the vacuum state and the single baryon ground state of the Hamiltonian in Eq. (<ref>) and the improved Hamiltonians described above for lattices with up to fifteen physical sites with open boundary conditions. Fig. <ref> shows the mass of the baryon (difference of the energy of the single baryon state and vacuum state) for the full Hamiltonian and the improved Hamiltonians for the zero electric field truncation for g=2. As this figure shows, the relative error in the baryon mass computed with the improved Hamiltonians grows with system size and then saturates. By using improved Hamiltonians with a larger operator size truncation in the IMSRG, the relative error in the baryon mass can be reduced down to the percent level. The baryon mass for g=1 was also computed and is shown in Fig. <ref>. At this weaker coupling, the correlation length is longer and the relative error in the baryon mass grows uncontrollably with the lattice size for the improved Hamiltonian obtained by the two staggered site truncation IMSRG. However, increasing the size of the operator truncation used in the IMSRG decreases the error in the baryon mass to controllable levels.
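For small lattices, improved Hamiltonians of this form can also be cross-checked by exact diagonalization; the sketch below builds the two-staggered-site-truncation Hamiltonian on a short open chain and extracts a baryon-mass proxy from the lowest energies in different total-Ẑ sectors (the values of A(g) and B(g) are placeholders for the numerically determined SRG coefficients, and identifying baryon number with the total-Ẑ sector measured from the vacuum sector is an assumption made for illustration):

import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def op(single, site, L):
    # Embed a single-qubit operator at position `site` of an L-qubit chain.
    mats = [single if k == site else I2 for k in range(L)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def improved_hamiltonian(L, A, B):
    # H = sum_x A (XX + YY) + B ZZ on an open chain.
    H = np.zeros((2**L, 2**L), dtype=complex)
    for x in range(L - 1):
        H += A * (op(X, x, L) @ op(X, x + 1, L) + op(Y, x, L) @ op(Y, x + 1, L))
        H += B * op(Z, x, L) @ op(Z, x + 1, L)
    return H

def sector_ground_energies(H, L):
    # Lowest eigenvalue in each total-Z sector (H conserves sum_x Z_x).
    totZ = np.array([sum(1 - 2 * ((i >> k) & 1) for k in range(L)) for i in range(2**L)])
    out = {}
    for mz in np.unique(totZ):
        idx = np.where(totZ == mz)[0]
        out[int(mz)] = float(np.linalg.eigvalsh(H[np.ix_(idx, idx)])[0])
    return out

L, A, B = 8, -0.21, 0.35                 # placeholder values for A(g), B(g)
sectors = sector_ground_energies(improved_hamiltonian(L, A, B), L)
print(sectors[2] - sectors[0])           # baryon-mass proxy (assumed sector labels)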
In addition to studying the energy of different states on the lattice, the IMSRG flows of operators can be computed and their expectation values can be computed using improved Hamiltonians. As an explicit example, the SRG flow of the chromo-electric energy density was computed. The operators corresponding to the chromo-electric operators in the improved basis are the same as those that show up in the improved Hamiltonians, just with different coefficients. The vacuum expectation of the chromo-electric energy density is shown in Fig. <ref> for g=1 and g=2. As before, increasing the size of the operator truncation in the IMSRG improves the accuracy of the improved Hamiltonians. Remarkably, even though the improved Hamiltonians are being truncated at zero electric field, their ground states still reproduce the electric energy density of the full untruncated theory.
§.§ Similarity Renormalization Group m ≠ 0
In the previous section, IMSRG was used to derive an improved Hamiltonian that describes the dynamics of baryons in QCD in one dimension with massless quarks. The same technique can be used to setup improved Hamiltonians in the case of massive quarks as well.
In a theory with massive quarks, the piece of the Hamiltonian that should be used to generate the SRG flow is the combination of the mass and electric energy terms. At the zero electric energy truncation, the only state left after truncation is the one with matter sites empty and anti-matter sites filled. Therefore with massive quarks, there are no dynamics at this level of truncation. The next lowest truncation in the SRG flow depends on the relative size of the fermion mass m and the coupling g. If 2/3g^2 > m, then the next lowest lying state in the spectrum consists of a baryon at a site. The improved Hamiltonian derived by truncating at this level takes the same form as in the previous section except with the addition of a mass term for the baryons. If instead 2/3g^2 < m, then the next lowest lying state in the spectrum corresponds to a quark anti-quark pair connected by a link of electric flux. In the strong coupling limit, this corresponds to a meson at the excited link.
Denoting the trivial vacuum state by |Vac⟩, and the state with a qq pair on link l by |l⟩, the Hamiltonian obtained under IMSRG flow truncating the energy at single link excitations and the operator size at two link operators takes the form
Ĥ_SRG = E_0(g,m) |Vac⟩⟨Vac|
+ ∑_l h(g,m)(|l+1⟩⟨l| + |l⟩⟨l+1|) + E_1(g,m) |l⟩⟨l| ,
where E_0(g,m), E_1(g,m), and h(g,m) are constants determined through numerically solving the SRG flow. Note that this Hamiltonian has the same form as that of a single non-relativistic particle. The Hamiltonian in Eq. (<ref>) can be viewed as a Hamiltonian for a single link excitation (or meson) and can be mapped onto a second quantized Hamiltonian to describe a system with more excited links. Explicitly, the single excitation sector of
Ĥ_SRG = ∑_l h(g,m)/2(X̂_l X̂_l+1 + Ŷ_l Ŷ_l+1)
+ E_0(g,m) - E_1(g,m)/2Ẑ_l ,
will be identical to the Hamiltonian in Eq. (<ref>). This improved Hamiltonian will also be capable of describing states with multiple links excited as well. The description of these states with multiple links excited can be improved by raising the truncation of states kept after SRG flow to include states where two links are excited. By keeping these states after the SRG flow and keeping the other truncations as before, the improved Hamiltonian given by
Ĥ_SRG2 = ∑_l h(g,m)/2(X̂_l X̂_l+1 + Ŷ_l Ŷ_l+1)
+ s(g,m)Ẑ_l Ẑ_l+1 + E_0(g,m) - E_1(g,m)/2Ẑ_l ,
will have single and two excitation sectors that match the improved Hamiltonians derived through SRG. As a test of the performance of this improved Hamiltonian, the mass of the meson was computed on a lattice with two physical sites for g=1 and various values of m in Fig. <ref>. Similar to the massless case, the improved Hamiltonian derived with the SRG performs well when there is a large separation in energy scales between the states being decoupled. Note that in principle, the same comparison can be done with larger lattices, however the meson is in the same baryon number sector as the vacuum which complicates the calculation of the meson mass. It is expected that this improved Hamiltonian scales to larger lattices as in the massless case.
§.§.§ Quantum Simulation
As an example of how these improved Hamiltonians can be used for quantum simulation, a simulation will be performed of a meson's time evolution on three physical sites with open boundary conditions. Using the Hamiltonian in Eq. (<ref>) would require a quantum computer with 18 qubits to encode the state, and non-local interactions between the qubits to implement the electric energy piece of the Hamiltonian. Using the improved Hamiltonian in Eq. (<ref>) requires only 5 qubits to represent the state and only requires nearest neighbor interactions on the quantum computer to perform time evolution.
Fig. <ref> shows the real time evolution of a single meson on three physical sites with g=1,m=1 simulated on IBM's Perth quantum processor <cit.>. A meson state was prepared on the quantum processor by applying an X̂ gate to the qubit assigned to the leftmost link. Time evolution was performed using a first order Trotter formula. Explicitly, the Hamiltonian was decomposed as Ĥ=∑_l=1^4Ĥ_l where
Ĥ_l = h(g,m)/2(X̂_l X̂_l+1 + Ŷ_l Ŷ_l+1) + s(g,m)Ẑ_l Ẑ_l+1 ,
and the Trotterized time evolution operator was given by
Û(Δ t) = e^-iĤ_2 Δ t e^-iĤ_4 Δ t e^-iĤ_3 Δ t e^-iĤ_1 Δ t .
Each individual e^-iĤ_l Δ t was decomposed into a circuit with 3 CNOT gates using standard techniques <cit.>. The sum over Pauli Ẑ operators can be ignored when performing time evolution because it commutes with the full Hamiltonian and the operators being measured. The noise in the quantum simulation was mitigated using self-mitigation combined with Pauli twirling <cit.>. For each Trotter step, 50 circuits describing the time evolution were used along with 50 circuits with Δ t = 0 used to determine the strength of the depolarizing noise channel. Each circuit was sampled 10,000 times. As Fig. <ref> shows, the quantum hardware is able to describe the time evolution well at short times, but at long times the hardware noise begins to dominate. However, despite the presence of hardware noise at late times, the location of the peak of the wavepacket of the meson can still be located at late times.
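A minimal sketch of how such a Trotter step can be assembled from standard two-qubit rotation gates is given below; Qiskit is used only as an example interface, h(g,m) and s(g,m) are placeholder values, the bond ordering is the usual even-odd splitting rather than the specific ordering of Eq. (<ref>), and no error mitigation is included:

from qiskit import QuantumCircuit

n_links, h, s, dt = 5, 0.48, -0.12, 0.1      # placeholder h(g,m), s(g,m) and time step

def bond(qc, l):
    # exp(-i dt [h/2 (XX+YY) + s ZZ]) on links (l, l+1); the three terms commute,
    # so the factorization into RXX, RYY and RZZ rotations is exact.
    qc.rxx(h * dt, l, l + 1)
    qc.ryy(h * dt, l, l + 1)
    qc.rzz(2 * s * dt, l, l + 1)

qc = QuantumCircuit(n_links)
qc.x(0)                                       # prepare a meson on the leftmost link
for l in range(0, n_links - 1, 2):            # one first-order Trotter step:
    bond(qc, l)                               # even bonds, then odd bonds
for l in range(1, n_links - 1, 2):
    bond(qc, l)
qc.measure_all()
print(qc.draw())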
§ 3+1D
§.§ 3+1D Hamiltonian
Performing a quantum simulation of lattice QCD requires a choice of Hamiltonian to be used. This choice is complicated by the phenomenon of fermion doubling, where the naive discretization of the Dirac field on the lattice in d dimensions actually describes 2^d fermions. Furthermore, the Nielsen-Ninomiya theorem forbids the presence of chiral symmetry on the lattice when all doublers are removed <cit.>. In this work, staggered fermions will be used. Staggered fermions work by distributing the components of the Dirac field across different sites of the lattice. This preserves some chiral symmetry at the cost of still having some fermion doublers remain. In lattice QCD calculations on classical computers, space and time are both discretized, leading to staggered fermions describing 4 types of fermions, referred to as tastes in the literature. For practical calculations, these can be reduced to a single flavor through the process of rooting <cit.>. In quantum simulation, time is left continuous and only space is discretized. This changes the counting of the number of tastes present. Explicitly, with three dimensions of space discretized and time left continuous, staggered fermions describe two tastes. This is a feature, not a bug, for using lattice QCD to study nuclear physics, as one taste can be identified as an up quark and the other can be identified as a down quark. Therefore, we would expect lattice QCD with a single staggered fermion on a quantum computer to describe two flavor QCD where both quarks have the same mass. With massless quarks, this lattice regularization should reproduce the predictions of chiral perturbation theory as the continuum limit is approached. Explicitly, the Hamiltonian that should be used for 3+1 dimensional two flavor massless lattice QCD on a quantum computer is
Ĥ = Ĥ_K + Ĥ_E + Ĥ_B
Ĥ_K = ∑_r,μ̂,a,b η_r,μ̂ 1/2 ψ̂_r,a^† Û^a,b_r,r+μ̂ ψ̂_r+μ̂,b + h.c.
Ĥ_E = g^2/2∑_l ∈ links,c Ê_l^c Ê_l^c
Ĥ_B = -1/2g^2∑_p ∈ plaquettes □̂_p ,
where ψ̂_r,a is a fermion field at site r with color a, μ̂ is a unit vector in the x̂, ŷ, or ẑ directions, η_r,μ̂ are the spin diagonalization phases, Û^a,b_r,r+μ̂ is an SU(3) parallel transporter between sites r and r+μ̂, Ê_l^c is the SU(3) chromo-electric field on link l and □̂_p is the Hermitian component of the trace over color indices of the product of parallel transporters on plaquette p. Previous work has shown that this Hamiltonian has a discrete chiral symmetry corresponding to translation by one lattice site that is spontaneously broken and an isospin symmetry that corresponds to diagonal translations <cit.>.
§.§ Improved Hamiltonian
As is the case for 1D QCD, mapping the Hamiltonian in Eq. (<ref>) onto qubits is challenging, especially if one wishes to perform a quantum simulation with existing hardware. Improved Hamiltonians can also be derived for performing quantum simulations of this theory. Following the discussions of the previous sections, IMSRG can be applied to this theory with a truncation in operator size. The smallest non-trivial operator size IMSRG can be applied to is a single link and the lowest electric field truncation that can be used is zero electric field. The resulting improved Hamiltonian on the 3 dimensional lattice will take the same form as in the 1D case except now the hopping terms will have phases that result from the spin diagonalization.
Explicitly, the improved Hamiltonian obtained through SRG at this truncation in operator size and electric field is
Ĥ_SRG = ∑_r A(g) (ψ̂_r^†ψ̂_r+x̂ + ψ̂_r+x̂^†ψ̂_r)
+ A(g) (-1)^r_1(ψ_r^†ψ̂_r+ŷ + ψ̂_r+ŷ^†ψ̂_r)
+ A(g) (-1)^r_1 + r_2(ψ̂_r^†ψ̂_r+ẑ + ψ̂_r+ẑ^†ψ̂_r)
+ B(g) ∑_μ̂(2ψ̂^†_rψ̂_r-1) (2ψ̂^†_r+μ̂ψ̂_r+μ̂-1) ,
where ψ_r is a colorless fermion field at site r and A(g) and B(g) are numerical constants determined through solving the SRG equations.
Note that this improved Hamiltonian only describes the QCD Hamiltonian accurately for large coupling g. At large coupling, the π meson is massive and is integrated out of this improved Hamiltonian. By increasing the chromo-electric field truncation of states kept after the SRG flow, states with quark-antiquark pairs separated by a link will be included in the low energy Hilbert space kept after truncation and will yield an improved Hamiltonian that describes meson degrees of freedom as well.
§.§.§ Spectrum
The improved Hamiltonian in Eq. (<ref>) will describe the untruncated theory accurately in the limit of large g. While the continuum limit of lattice QCD is in the limit of g→0, large couplings can be used to study the theory at finite lattice spacing. In the limit g→∞, A(g)→ 0 and some qualitative features of low energy QCD are recovered. In particular, it has been shown that in the strong coupling limit this theory has an isospin symmetry and a spontaneously broken chiral symmetry <cit.>. In addition to the previously studied features of this regularization, the strong coupling limit of this Hamiltonian also reproduces the approximate SU(4) spin flavor symmetry of nuclear physics.
As an example, we will study the improved Hamiltonian in Eq. (<ref>) on a single cube. The fermionic fields will be mapped onto qubits using a Jordan-Wigner encoding. When A(g)=0, the Hamiltonian in Eq. (<ref>) can be rewritten in terms of Pauli matrices as
Ĥ_SRG = 9/16g^2∑_r,μ̂Ẑ_rẐ_r+μ̂ .
The ground state is in the baryon number B=0 sector and is a degenerate Néel state. For the rest of this discussion, we will only consider the sector that is even under reflection across the ẑ axis. The lowest lying excited states in the B=0 sector correspond to performing a SWAP operation on one of the links. Denoting the energy cost of flipping one link as Δ=9/8g^2, this set of excited states has energy 4Δ and there are 12 of them. These 12 states should correspond to spin one and spin zero baryon anti-baryon pairs, i.e., pp̄, nn̄, np̄ and pn̄ states.
The lowest lying energy states in the B=1 sector correspond to flipping one site from the Néel state on the cube. There are four corners that can be flipped in the Néel state to end up in the B=1 sector so there are four degenerate states with energy 3Δ. These correspond to the two spin modes of the proton and neutron. Note that the proton and neutron mass are degenerate which should be expected from isospin symmetry.
In the B=2 sector, the lowest lying states correspond to flipping two spins in the Néel state. This results in six degenerate states with energy 6Δ. These states correspond to spin 1 pn states and spin 0 pp, pn and nn states. The fact that these states are degenerate is reflective of spin-flavor symmetry which is approximately present in low energy nuclear physics. The spin-flavor symmetry has been shown to emerge in the large N_c limit of QCD <cit.> and is related to the minimization of entanglement in low energy nucleon scattering <cit.>. We also see that in the strong coupling limit, the deuteron has binding energy zero. Similar calculations can be done in the higher baryon number sectors which also show that these sectors also demonstrate spin-flavor symmetry and nuclei with binding energy = 0. It is also interesting to note that the nucleon-nucleon scattering lengths are large. As a result, the pionless EFT describing nucleon scattering is an expansion around a non-trivial fixed point where the binding energy of nuclei vanishes as is the case in this lattice regularization <cit.>.
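These degeneracy countings can be checked by brute force, since in the A(g)=0 limit the Hamiltonian is diagonal in the occupation basis; the sketch below enumerates all 2^8 configurations of the cube. The identification of the two sublattices with matter and anti-matter sites, and hence the baryon number assignment, is an assumption made for illustration, and because the enumeration includes excitations above both of the degenerate Néel vacua, each degeneracy comes out twice as large as the counting above a single symmetry-broken vacuum quoted above:

import itertools

g = 1.0
Delta = 9 / 8 * g**2
verts = list(itertools.product((0, 1), repeat=3))          # corners of the cube
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if sum(abs(a - b) for a, b in zip(verts[i], verts[j])) == 1]
stag = [(-1) ** sum(v) for v in verts]                     # +1: "matter", -1: "anti-matter" (assumed)

levels = {}
for occ in itertools.product((0, 1), repeat=8):            # occupation n_r of each corner
    z = [1 - 2 * n for n in occ]                           # Z eigenvalue of each qubit
    E = 9 / 16 * g**2 * sum(z[i] * z[j] for i, j in edges)
    B = sum(n if s > 0 else n - 1 for n, s in zip(occ, stag))   # assumed baryon number
    key = (B, round(E, 10))
    levels[key] = levels.get(key, 0) + 1

E0 = min(E for _, E in levels)
for (B, E), deg in sorted(levels.items()):
    if E - E0 <= 6 * Delta + 1e-9:
        print(f"B = {B:+d}   (E - E0)/Delta = {(E - E0) / Delta:4.1f}   degeneracy = {deg}")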
§.§.§ Quantum Simulation
The Hilbert space describing the Hamiltonian in Eq. (<ref>) consists of a single fermion mode for each site. Using the Jordan-Wigner encoding, the state of each site can be represented with a single qubit. In this encoding, a list of fermion operators ψ_1,ψ̂_2,...,ψ̂_N are mapped onto qubit operators as
ψ̂_n = (⊗_k<nẐ_k) 1/2(X̂_n + i Ŷ_n) .
For a local one dimensional fermionic theory, this fermion encoding leads to a Hamiltonian that is local in qubits. However, in higher dimensions, the operators in the Hamiltonian will include strings of Pauli Ẑ operators that wrap around the lattice. These long range operators are necessary to enforce the anti-commutation relations of the fermionic operators and may make it difficult to practically scale to calculations on a large lattice.
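A direct way to see how the encoding works is to build the Pauli-string matrices explicitly and verify the canonical anticommutation relations, as in the following sketch for a small register:

import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def jw_annihilation(n, N):
    # psi_n = (prod_{k<n} Z_k) (X_n + i Y_n)/2 on an N-mode register.
    factors = [Z] * n + [(X + 1j * Y) / 2] + [I2] * (N - n - 1)
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

N = 4
psi = [jw_annihilation(n, N) for n in range(N)]
anti = lambda a, b: a @ b + b @ a
ok = all(np.allclose(anti(psi[i], psi[j].conj().T), np.eye(2**N) * (i == j))
         and np.allclose(anti(psi[i], psi[j]), 0)
         for i in range(N) for j in range(N))
print("canonical anticommutation relations satisfied:", ok)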
As a demonstration of how this improved Hamiltonian works in practice, time evolution on six vertices connected to a single vertex at the center as shown in Fig. <ref> will be simulated. This is the smallest non-trivial subsystem of a full three dimensional lattice that will be repeated periodically and will be useful for understanding how simulations on a larger lattice will work. Each of the seven vertices can be mapped onto a single qubit. The Hamiltonian describing their time evolution is given by
Ĥ_SRG = ∑_v A(g) (ψ̂_0^†ψ̂_v + ψ̂_v^†ψ̂_0)
+ B(g) ∑_v(2ψ̂^†_vψ̂_v-1) (2ψ̂^†_0ψ̂_0-1) ,
where the 0 subscript denotes the vertex at the center and the sum is over the other vertices. The quantum processor is initialized with the center qubit in the 1 state and the remaining qubits are in the 0 state. In the staggered fermion lattice regularization, sites are alternatively identified with matter and anti-matter degrees of freedom so this state should correspond to the trivial vacuum. By evolving with the Hamiltonian in Eq. (<ref>), it should be possible to observe matter anti-matter fluctuations. Note that with this initial state, a single Trotter step can be performed without having to implement CNOT gates from the Jordan-Wigner strings. A single Trotter step was implemented on IBM Perth with the size of the time step being varied to sample different times. Due to the connectivity of the hardware, this circuit required 28 CNOT gates. Fig. <ref> shows the results of performing a single Trotter step for g=2 on IBM Perth. For small times, the quantum simulation is able to describe the evolution of the system accurately, however beyond t=1, the error in the single Trotter step used is large and limits the accuracy of the quantum simulation.
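The exact reference curve for such a small system can be generated classically by exponentiating the Hamiltonian directly; the sketch below does this for the seven-vertex star using Jordan-Wigner-encoded fermion operators (A(g) and B(g) are placeholder values for the numerically determined SRG coefficients):

import numpy as np
from scipy.linalg import expm

X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def jw(n, N):
    # Jordan-Wigner annihilation operator for mode n on an N-mode register.
    facs = [Z] * n + [(X + 1j * Y) / 2] + [I2] * (N - n - 1)
    out = facs[0]
    for f in facs[1:]:
        out = np.kron(out, f)
    return out

N, A, B = 7, -0.15, 0.40                      # placeholder A(g), B(g); site 0 = center
psi = [jw(n, N) for n in range(N)]
n_op = [p.conj().T @ p for p in psi]
Id = np.eye(2**N)

H = sum(A * (psi[0].conj().T @ psi[v] + psi[v].conj().T @ psi[0])
        + B * (2 * n_op[v] - Id) @ (2 * n_op[0] - Id) for v in range(1, N))

idx = 1 << (N - 1)                            # center occupied, outer vertices empty
state0 = np.zeros(2**N, dtype=complex)
state0[idx] = 1.0
for t in np.linspace(0.0, 2.0, 5):
    psi_t = expm(-1j * H * t) @ state0
    print(round(float(t), 2), float(np.real(psi_t.conj() @ n_op[0] @ psi_t)))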
While the Jordan-Wigner encoding is efficient in the number of qubits used, the Hamiltonian generated has long range interactions which are necessary to preserve the anti-commutation relation of the fermions. Scaling these calculations to a larger lattice will require making use of a more efficient fermion encoding. For example, the Bravyi-Kitaev superfast encoding can be used to map fermions onto qubits <cit.>. In this encoding, a qubit is associated with each link on the lattice and represents the parity of the number of fermions on the link. The length of the strings of Pauli Ẑ operators for an operator on a link extends only to neighboring links. For a large lattice, this will limit the circuit depth necessary to perform time evolution and potentially allow for larger calculations to be performed.
§ DISCUSSION
In this work, the SRG has been used to derive improved Hamiltonians that mitigate the effects of gauge field truncation. It was demonstrated in 1+1D that the improved Hamiltonians derived this way outperform those derived through the strong coupling expansion for small systems. Tensor network calculations were performed to demonstrate that these improved Hamiltonians perform well as the system size is increased. These techniques were also applied to 3+1D giving an improved Hamiltonian capable of describing two flavour QCD on the lattice. Real time dynamics on small systems were simulated on IBM's Perth quantum processor.
Previous strategies for quantum simulation of lattice gauge theories improved accuracy by increasing the truncation of the gauge field. This comes at the cost of needing more qubits to represent the system and a more complicated circuit to implement the time evolution. The improved Hamiltonians introduced in this work are capable of improving accuracy only at the cost of requiring more complicated circuits to simulate.
Improved Hamiltonians have been derived for a single flavor of staggered fermions coupled to SU(3) gauge fields truncated at low electric field. This has enabled quantum simulation of systems that would otherwise be out of reach of current quantum hardware. The same approach introduced here can be used to derive improved Hamiltonians for larger electric field truncations and with more flavors of fermions. Future work will extend these methods to higher spatial dimensions with larger operator truncations where the plaquette terms will modify the SRG flow. This will enable quantum simulations of lattice gauge theories in multiple dimensions to be performed in the near term.
The authors would like to acknowledge useful conversations about SRG with Zhiyao Li on a related project. We would also like to thank Marc Illa and Roland Farrell for feedback in preparing this manuscript. The authors would also like to acknowledge many useful conversations with Martin Savage, Francesco Turro, Xiaojun Yao and Niklas Mueller. The material presented here was funded by U.S. Department of Energy, Office of Science, Office of Nuclear Physics, Inqubator for Quantum Simulation (IQuS)[<https://iqus.uw.edu>] under Award Number DOE (NP) Award DE-SC0020970 via the program on Quantum Horizons: QIS Research and Innovation for Nuclear Science. This work was enabled, in part, by the use of advanced computational, storage and networking infrastructure provided by the Hyak supercomputer system at the University of Washington[<https://itconnect.uw.edu/research/hpc>]. We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team.
§ SCHRIEFFER-WOLFF PERTURBATION THEORY
The improved Hamiltonians derived in this work are based on performing a unitary transformation before truncating the electric field to reduce the coupling to the states being removed by the truncation. This can be done perturbatively through the use of Schrieffer-Wolf perturbation theory (SWPT). In this section, the application of SWPT to the Hamiltonian in Eq. (<ref>) with m=0 will be demonstrated. The Hamiltonian for lattice gauge theories in 1D we wish to simulate takes the form
Ĥ = Ĥ_E + Ĥ_D + V̂ ,
where Ĥ_E is the electric Hamiltonian, V̂ couples the low energy subspace to the high energy subspace and Ĥ_D describes dynamics in the high energy Hilbert space. Note that the kinetic term of Eq. (<ref>) is equal to Ĥ_D + V̂. For the zero electric field truncation, V̂ is the piece of the kinetic term that corresponds to a baryon on a site ejecting a quark to a neighboring site and Ĥ_D is the piece of the kinetic term that describes a quark propagating freely between sites. SWPT systematically generates a unitary, e^Ŝ that decouples the selected low energy subspace. For lattice gauge theories, we will be decoupling the electric vacuum and states with low energy relative to the electric Hamiltonian. To leading order we have
e^Ŝ_1Ĥ e^-Ŝ_1 = Ĥ_E + [Ŝ_1,Ĥ_E + Ĥ_D] + Ĥ_D + V̂ + [Ŝ_1,V̂]
+ 1/2[Ŝ_1,[Ŝ_1,Ĥ_E + Ĥ_D]] + 𝒪(V̂^3) .
The leading order coupling between the low and high energy subspace comes from V̂ and can be cancelled at leading order by choosing Ŝ_1 such that [Ŝ_1,Ĥ_E + Ĥ_D]=-V̂. Explicitly, the matrix elements of Ŝ_1 are
(S_1)_ab = 1/E_a - E_b V_ab ,
where the indices label eigenstates of Ĥ_E + Ĥ_D with eigenvalues E_a. To leading order, the effective Hamiltonian is
Ĥ_eff^1 = Ĥ_E + 1/2[Ŝ_1,V̂] ,
and provided that the low energy subspace has an electric energy of 0, the commutator is equal to
1/2[Ŝ_1,V̂] = -V̂1/(Ĥ_E + Ĥ_D)V̂ = -V̂∑_n (-Ĥ_E^-1Ĥ_D)^n 1/Ĥ_EV̂ .
Therefore to 𝒪(H_E^-2), the effective Hamiltonian is given by
Ĥ_eff^1 = Ĥ_E - V̂1/Ĥ_EV̂ + V̂1/Ĥ_EĤ_D1/Ĥ_EV̂ + 𝒪(Ĥ_E^-3) .
Plugging in the corresponding pieces of Eq. (<ref>) yields the improved Hamiltonian in Eq. (<ref>). Techniques for performing this expansion to higher orders can be found in Ref. <cit.>.
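The construction above is straightforward to check numerically; the following sketch applies it to a small model with a zero-energy low block and a gapped high block (block sizes, gap and coupling strength are illustrative assumptions) and compares the exact low-lying spectrum with that of Ĥ_eff^1:

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_low, n_high, gap, eps = 3, 5, 10.0, 0.3
dim = n_low + n_high

H_E = np.diag(np.concatenate([np.zeros(n_low), gap * np.ones(n_high)]))
H_D = np.zeros((dim, dim))
H_D[n_low:, n_low:] = eps * rng.normal(size=(n_high, n_high))
H_D = (H_D + H_D.T) / 2                  # dynamics within the high-energy space only
V = np.zeros((dim, dim))
V[:n_low, n_low:] = eps * rng.normal(size=(n_low, n_high))
V = V + V.T                              # coupling between the low and high spaces

# Work in the eigenbasis of H_E + H_D, where (S_1)_ab = V_ab / (E_a - E_b).
E, U = eigh(H_E + H_D)
Vb = U.T @ V @ U
dE = E[:, None] - E[None, :]
S1 = np.divide(Vb, dE, out=np.zeros_like(Vb), where=np.abs(dE) > 1e-12)

# Leading-order effective Hamiltonian (H_E + H_D) + (1/2)[S_1, V], restricted to
# the low-energy block (where the electric energy vanishes).
H_eff = np.diag(E) + 0.5 * (S1 @ Vb - Vb @ S1)
exact = eigh(U.T @ (H_E + H_D + V) @ U, eigvals_only=True)[:n_low]
approx = eigh(H_eff[:n_low, :n_low], eigvals_only=True)
print("exact lowest levels:", exact)
print("SW effective levels:", approx)    # agree up to O(V^3 / gap^2) corrections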
|
http://arxiv.org/abs/2307.05428v1 | 20230711165012 | Phenomenology of DSR-relativistic in-vacuo dispersion in FLRW spacetime | [
"Giovanni Amelino-Camelia",
"Domenico Frattulillo",
"Giulia Gubitosi",
"Giacomo Rosati",
"Suzana Bedic"
] | gr-qc | [
"gr-qc",
"astro-ph.HE",
"hep-th"
] |
[email protected]
Dipartimento di Fisica Ettore Pancini, Università di Napoli “Federico II”
INFN, Sezione di Napoli, Complesso Univ. Monte S. Angelo, I-80126 Napoli, Italy
[email protected]
Dipartimento di Fisica Ettore Pancini, Università di Napoli “Federico II”
INFN, Sezione di Napoli, Complesso Univ. Monte S. Angelo, I-80126 Napoli, Italy
[email protected]
Dipartimento di Fisica Ettore Pancini, Università di Napoli “Federico II”
INFN, Sezione di Napoli, Complesso Univ. Monte S. Angelo, I-80126 Napoli, Italy
[email protected]
Institute for Theoretical Physics, University of Wrocław, Pl. Maksa Borna 9, Pl–50-204 Wrocław, Poland
ICRANet
, P.le della Repubblica 10, 65100 Pescara, Italy and
ICRA and University of Rome “Sapienza”, Physics Department, P.le
A. Moro 5, 00185 Rome, Italy
Studies of in-vacuo dispersion are the most active area of quantum-gravity phenomenology.
The way in which in-vacuo dispersion produces redshift-dependent corrections to the time of flight of astrophysical particles depends on the model-dependent interplay between Planck-scale effects and spacetime curvature/expansion, and we here derive the most general formula for the leading order redshift-dependent correction to the time of flight for the scenario in which relativistic symmetries are deformed at the Planck scale (DSR).
We find that, contrary to the broken symmetries scenario (LIV), where in principle any arbitrary form of redshift dependence could be allowed, for the DSR scenario only linear combinations of three possible forms of redshift dependence are allowed.
We also discuss some specific combinations of these three terms whose investigation might deserve priority from the quantum-gravity perspective.
Phenomenology of DSR-relativistic in-vacuo dispersion in FLRW spacetime
Suzana Bedic
August 12, 2023
========================================================================
§ INTRODUCTION
The possibility of Planck-scale departures from (local) Lorentz invariance arises in several quantum-gravity proposals (see, e.g., Refs.<cit.> and references therein).
In some scenarios (LIV) relativistic invariance
is broken <cit.>, giving rise to a preferred frame, while in other scenarios (DSR)
relativistic invariance is merely deformed <cit.>, preserving the equivalence of reference frames but requiring a deformation of relativistic laws of transformation
among observers.
We are here concerned with tests of the fate of relativistic symmetries in quantum gravity which are based on
time-of-flight measurements, and when analyzed in a flat spacetime the implications of the broken-symmetry and the deformed-symmetry scenario are indistinguishable: in both
cases one gets a leading correction Δ t to the special-relativistic time of flight which (assuming linear dependence on the energy E of the particle) is
governed by <cit.>
Δ t= ηE/M_Pl T ,
where T is the (time) distance of the source, η is a phenomenological parameter and M_Pl is the Planck scale.
However, matters become more complicated
(and the differences between the
broken-symmetry and the deformed-symmetry scenario become more tangible) if
one takes into account the expansion of spacetime: the interplay between quantum-gravity effects and curvature of spacetime can produce several alternative forms of redshift dependence of the effect.
In the LIV broken-symmetry scenario one has no constraints from symmetries and in principle any arbitrary form of redshift dependence could be allowed; however, LIV-based data analyses all rely on a particular form of redshift dependence motivated heuristically
in Ref. <cit.>
(also see Ref. <cit.>)
which gives the following redshift dependence
Δ t= η E/M_Pl∫_0^z (1+z̅)/H(z̅) dz̅ ,
where z is the redshift of the source, related to the scale factor a(t) as z(t) = 1/a(t)-1, H(z) is the Hubble parameter, that for the ΛCDM model is [Ω_Λ, H_0 and Ω_m denote,
respectively, the cosmological constant, the Hubble constant and the matter fraction, for which we take the values given in Ref. <cit.>.] H(z)=H_0√(Ω_Λ + (1+z)^3 Ω_m). In the DSR deformed-symmetry scenario
the possibilities for the interplay between quantum-gravity effects and curvature of spacetime are significantly limited by the requirement that the merging picture should be compatible with (however deformed) relativistic invariance. In a previous study
<cit.>
(also see Ref. <cit.>)
two examples of DSR-compatible forms of redshift dependence were identified.
In the study we are here reporting we establish what is the most general form of redshift dependence allowed by the requirement of DSR compatibility.
We find that, in addition to the two forms of redshift dependence already previously identified <cit.>, there is only a third possible form of redshift dependence. Of course, also linear combinations of these three possible forms of redshift dependence are allowed.
Our analysis starts from considering the simple case of propagation in a de Sitter spacetime, where the possible DSR-relativistic scenarios can be characterized fully in terms of deformations of the symmetries of de Sitter spacetime, which are described by an algebra of 10 generators (spacetime translations, rotations and boosts) <cit.>. Already at this level we are able to identify three different terms that describe the redshift dependence of the time of flight. These results are generalized to the case of particles propagating in a FLRW spacetime through a slicing procedure, whose robustness was well tested already in Refs. <cit.>.
In a final section we describe the phenomenology of some specific combinations of the three DSR-compatible forms of redshift dependence which might be particularly significant from the quantum-gravity perspective.
We use natural units c=ħ=1.
§ DEFORMATION OF THE DE SITTER ALGEBRA OF SYMMETRIES
As announced, our analysis takes off from an investigation of the most general Planck-scale deformation
of the de Sitter algebra. We denote by H the curvature parameter of the de Sitter algebra and we denote by ℓ
(a length scale assumed to be of the order of the Planck length)
the deformation parameter and we shall be satisfied working at leading order
in ℓ.
Working in 1+1 spacetime dimension, we start by characterizing the most general deformation of the mass Casimir.
We ask that the limits for vanishing curvature (H→ 0) and vanishing deformation (ℓ→ 0) are well-defined and in particular that the latter leaves us with the standard de Sitter Casimir. Moreover, we require that the vectorial properties of the generators are accounted for, so that the generalization to higher spatial dimensions does not affect space-rotational invariance.
The most general deformation of the de Sitter Casimir which satisfies these requirements is:
C=E^2-p^2-2HNp+ℓ(α E^3+β Ep^2+2γ HNEp+4μ H^2N^2E) .
Here α,β.γ,μ are dimensionless parameters.
With respect to previous studies <cit.> of Planck-scale deformations of the de Sitter Casimir,
our Eq.(<ref>)
includes two additional terms,
the one parametrized by γ (which however had been considered in Ref. <cit.> in the context of a study of particle kinematics with q-de Sitter Hopf-algebra symmetries) and
the one parametrized by μ.
While the Casimir (<ref>) is general and does not come from a specific quantum gravity model, one can interpret the terms proportional to Hℓ
in the framework of a “quantum group" q-deformation of the Poincaré algebra, where the deformation is triggered by a combination of the curvature scale and the “quantum gravity" scale encoded in the parameter q (see e.g. <cit.>).
These theory implications may well deserve dedicated studies, but we here intend to focus on the issues relevant for phenomenology.
The most general algebra of symmetry generator/charges that leaves the Casimir (<ref>) invariant can be described by the following set of Poisson brackets (see also <cit.>)
{ E,p} =Hp-ℓ HE[(α+γ-σ)p+4μ HN],
{ N,E} =p+HN-ℓ E[(α+β-σ)p+HN(α+γ-σ)],
{ N,p} =E+ℓ/2[(α+2σ)E^2+β p^2+2γ HNp+4μ H^2N^2] .
These define a deformation of the standard de Sitter algebra.
Notice that, in addition to the parameters characterizing deformations of the Casimir, the algebra admits the additional numerical parameter σ.
So at this kinematical level, departures from the standard relativistic symmetries are characterized by five independent parameters. However, as we shall here show, the implications for time-of-flight measurements only involve three independent combinations of these 5 parameters.
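The statement that the brackets above leave the Casimir invariant at leading order in ℓ can be verified symbolically; a minimal sketch, using the chain rule {C,G}=∂C/∂E{E,G}+∂C/∂p{p,G}+∂C/∂N{N,G}, is the following:

import sympy as sp

E, p, N, H, ell, alpha, beta, gamma, mu, sigma = sp.symbols(
    "E p N H ell alpha beta gamma mu sigma", real=True)

C = E**2 - p**2 - 2*H*N*p + ell*(alpha*E**3 + beta*E*p**2
                                 + 2*gamma*H*N*E*p + 4*mu*H**2*N**2*E)

# Fundamental brackets; antisymmetry and {x,x}=0 fix the remaining ones.
pb = {("E", "p"): H*p - ell*H*E*((alpha + gamma - sigma)*p + 4*mu*H*N),
      ("N", "E"): p + H*N - ell*E*((alpha + beta - sigma)*p + H*N*(alpha + gamma - sigma)),
      ("N", "p"): E + ell/2*((alpha + 2*sigma)*E**2 + beta*p**2
                             + 2*gamma*H*N*p + 4*mu*H**2*N**2)}
var = {"E": E, "p": p, "N": N}

def bracket(x, y):
    if x == y:
        return sp.Integer(0)
    return pb[(x, y)] if (x, y) in pb else -pb[(y, x)]

def C_bracket(gen):
    # {C, gen} via the chain rule, expanded and truncated at first order in ell.
    expr = sp.expand(sum(sp.diff(C, var[x]) * bracket(x, gen) for x in var))
    return [sp.simplify(expr.coeff(ell, n)) for n in (0, 1)]

print({gen: C_bracket(gen) for gen in ("E", "p", "N")})   # expect all zeros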
§ TIME-DELAYS FOR DSR-FLRW SCENARIOS IN THE MOST GENERAL CASE
In this section we use the description of deformed de Sitter symmetries given in the previous section in order to derive the time-delay formulas which are here of interest. Those formulas are then generalized to the FLRW case using, as announced, the "slicing" technique.
§.§ Time delays for deformed de Sitter spacetimes
In deriving our time-delay results, we must keep safe from possible relativistic artifacts due to the relativity of locality
<cit.> which is present in these deformed-relativistic scenarios. We accomplish that by relying on two observers, one local to the emission event and the other one local to the detection event.
Following the strategy of analysis outlined in <cit.>, we perform a finite translation that allows us to express the coordinates of an observer at the detector (t^B,x^B) in terms of the coordinates of an observer at the source (t^A,x^A), defined by the prescription
(t^B,x^B)=e^-ξ p▹ e^-ζ E▹ (t^A,x^A),
where ▹ stands for the action by Poisson bracket of the corresponding generators[For a generator G with parameter a, the finite action on a coordinate x is e^aG▹ x ≡∑_n=0^∞a^n/n!{ G,x}_n, where { G,x }_n = { G, { G,x }_n-1}, { G, x }_0 = x.
In this formalism, the composed action of a spatial translation followed by a time translation is given by e^-ξ p▹ e^-ζE▹ x.] and ξ and ζ are respectively the space and time translation parameters.
We then find that two photons with energy difference Δ E at the detector, emitted simultaneously by a distant source, reach the detector with a time difference
Δ t=ℓΔ E((β-γ+σ+μ)e^2HT-1/2H+(α+γ-σ-2μ)T+μ1-e^-2HT/2H),
where T is the comoving (time) distance between the source and the detector.
In terms of the redshift of the source z=e^HT-1 this reads
Δ t=ℓΔ E/H((β-γ+σ+μ)(z+z^2/2)+(α+γ-σ-2μ)ln(1+z)+μ(z+z^2/2/1+2z+z^2)).
As announced, we are finding that the five numerical parameters that characterize the deformation of the kinematics in Eqs. (<ref>)-(<ref>) combine to produce only three different terms characterizing the functional dependence of the time delay on the redshift.
In particular, of the two terms in (<ref>) that were not considered in <cit.>, the one parameterized by γ does not add to the time delay formula any new functional dependence on the redshift with respect to what was already considered in <cit.>. On the other hand, the new term in
the dispersion relation parameterized by μ produces a functional dependence on the redshift that was not considered before.
§.§ DSR-FLRW time delays
Generalization of the results of the previous subsection to the case of an expanding FLRW universe relies on the slicing technique developed in <cit.>. Propagation of signals in a FLRW spacetime with deformed local Lorentz symmetries is described by defining a sequence of intermediate observers along the particle's trajectory, such that each observer is local to the particle at a given spacetime point. Propagation of signal between two such nearby observers is described by using the deformed de Sitter kinematics as done in the previous subsection and contributes to the time delay by an amount given by (<ref>). The full trajectory (and the corresponding total time delay) in the deformed FLRW spacetime is reconstructed by suitably matching <cit.> the observations made by subsequent observers and considering a limiting procedure in which the number of intermediate observers is sent to infinity, while decreasing their distance to zero <cit.>.
Following this procedure we find that from our Eq.(<ref>) the time delay in the deformed FLRW case reads:
Δ t = Δ E/M_Pl∫_0^zdz̅(1+z̅)/H(z̅)[η_1+η_2(1-(1-H(z̅)/(1+z̅)∫_0^z̅dz'/H(z'))^2)+η_3(1-(1-H(z̅)/(1+z̅)∫_0^z̅dz'/H(z'))^4)] .
As we anticipated, the time delay depends on only three numerical parameters η_1, η_2, and η_3 that are to be determined by experiments.
Their relation with the parameters of the deformed algebra introduced in Sec. <ref> is given by
η_1= (α+β) , η_2= (-α-γ+σ+2μ) , η_3=-μ .
Of course, for the case in which
H(z) is actually redshift independent the FLRW picture turns into a de Sitter picture and our result
(<ref>) reproduces our result (<ref>).
We find that the parametrization in terms of η_1, η_2, and η_3 turns out to be convenient for the comparison of different phenomenological scenarios.
In particular, when η_2=η_3=0, we are left with the term parametrized by η_1 which gives the same time delay that was obtained in the LIV scenario by Jacob and Piran in <cit.>.
On the other hand, scenarios with vanishing η_1 or η_2 characterize, respectively, two noteworthy cases that we shall discuss in the following:
when η_1 vanishes one obtains curvature-induced scenarios (see Sec. <ref>), while vanishing η_2 relates to theoretical models where energies add up trivially (see Sec. <ref>).
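For phenomenological work, the redshift dependence in the time-delay formula above is easily evaluated numerically; the sketch below computes the bracketed integral (with the overall factor Δ E/M_Pl stripped off) for placeholder ΛCDM parameter values, so that the η_1-only case reproduces the Jacob-Piran redshift dependence and other parameter choices can be compared to it:

import numpy as np
from scipy.integrate import quad

H0, Om, OL = 67.4 * 1.0227e-3, 0.315, 0.685      # placeholder H0 (in 1/Gyr), Omega_m, Omega_Lambda

def H(z):
    return H0 * np.sqrt(OL + (1 + z) ** 3 * Om)

def D(zb):
    # The combination H(zb)/(1+zb) * int_0^zb dz'/H(z') appearing in the integrand.
    return H(zb) / (1 + zb) * quad(lambda zp: 1 / H(zp), 0.0, zb)[0]

def kappa(z, eta1, eta2, eta3):
    def integrand(zb):
        d = D(zb)
        return (1 + zb) / H(zb) * (eta1
                                   + eta2 * (1 - (1 - d) ** 2)
                                   + eta3 * (1 - (1 - d) ** 4))
    return quad(integrand, 0.0, z)[0]

# Jacob-Piran-like case (eta1 only) vs. the curvature-induced one-parameter case.
for z in (0.5, 1.0, 2.0, 4.0):
    print(z, kappa(z, 1, 0, 0), kappa(z, 0, 0, 1))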
§ SOME NOTEWORTHY SPECIAL CASES OF DSR-FLRW TIME DELAY
We have shown that the requirement of compatibility with DSR-relativistic invariance limits our time-delay phenomenology to combinations of only 3 independent forms of redshift dependence.
In contrast, the case in which relativistic invariance is broken in principle allows an infinity of such forms of redshift dependence (since relativistic invariance imposes no constraints when it is broken).
Still, even just a 3-parameter formula is a rather wide "hunting field" for the phenomenology of time delays in astrophysics, where data are scarce and often of poor quality. In this section we attempt to motivate from the theoretical perspective some specific choices of the parameters η_1, η_2, and η_3 in (<ref>) which might deserve being the "first targets" for the phenomenology.
§.§ Curvature-induced scenarios
A scenario that so far received little attention in the literature, but that could have interesting phenomenological implications, is the one where the quantum gravity effects are triggered by spacetime curvature. This is a scenario where the interplay between curvature effects and Planck-scale effects produces results that are the most distant from what one would guess based on analysing Planck-scale effects in the flat-spacetime approximation. Indeed, in this scenario in-vacuo dispersion occurs only in combination with spacetime curvature/expansion so that when curvature is negligible there is no expected time delay.
Theoretically, these scenarios find motivation in some studies based on a Hopf-algebra description of the symmetries of quantum spacetime <cit.>, as well as in some considerations arising from loop-quantum-gravity research <cit.>.
A first study of these “curvature-induced" scenarios was performed in <cit.>, relying on a toy model where relativistic symmetries are broken.
The preliminary results reported in <cit.>, which confronted the slow onset of the quantum-gravity effects in the FLRW time delays typically expected in curvature-induced scenarios with data from gamma-ray-burst observations, showed how these features might have interesting implications for experimental studies.
Here we show that there is a choice of parameters that produces a curvature-induced time-delay also in the DSR-FLRW framework we constructed in the previous sections.
As explained in <cit.>, the requirement for having only curvature-induced terms in the time-delay formula amounts to asking that the coefficient of the first-order term in an expansion around z=0 of the expression (<ref>) vanishes.
Indeed, expanding the redshift formula z(t)=1/a(t)-1 for small distances (i.e. small (negative) times t=-T), one gets z(-T)≃ H_0 T, where the Hubble constant is defined as H_0=1/ada/dt|_t=0. It follows that terms linear in z in the expansion of Δ t will be proportional to Δ E/M_Plz/H_0≃Δ E/M_Pl T, and will survive even in the absence of spacetime curvature. Thus, only terms involving powers of z higher than 1 contribute to curvature-induced time-delay effects.
The leading order expansion in terms of the redshift of Eq. (<ref>) gives
Δ t≃Δ E/M_PlH_0( η_1 z + O(z^2)).
Setting to zero the first order term corresponds to imposing the constraint η_1=0 (i.e. α=-β in terms of the kinematical parameters).
Notice also that the same condition is obtained in the DSR-de Sitter case of Section <ref> if one asks that the time delay of Eqs. (<ref>) and (<ref>) vanishes in the limit of vanishing spacetime curvature H. Indeed, considering the limit H→ 0 in (<ref>) (or in (<ref>), noticing that z≃ H T + O(H^2 T^2), we obtain
Δ t = ℓΔ E/H((α+β)z + O(z^2)) = ℓΔ E ((α+β)T + O(H T^2)),
which gives again the condition α=-β, i.e. η_1=0.
By imposing the condition η_1=0 in (<ref>) the time delay expression reduces to
Δ t= Δ E/M_Pl∫_0^zdz̅(1+z̅)/H(z̅)[η_2(1-(1-H(z̅)/(1+z̅)∫_0^z̅dz'/H(z'))^2)+η_3(1-(1-H(z̅)/(1+z̅)∫_0^z̅dz'/H(z'))^4)] .
This formula, in which only two independent parameters appear, describes the most general curvature-induced in-vacuo dispersion scenario arising from the deformation of symmetries under the assumptions of Secs. <ref> and <ref>.
§.§ Scenarios with undeformed addition of energy
Apart from the deformation of the mass Casimir and the algebra of relativistic symmetry generators, another important ingredient of DSR models concerns the conservation law of energy-momenta for processes involving multiple particles.
In order for the conservation law to be invariant under the deformed symmetries, it must be accordingly deformed <cit.>.
In order to study the possible deformations of the energy-momenta conservation law for the deformed de Sitter scenario described in Sec. <ref>, we consider the total energy and momentum charges resulting from the composition law in the two-particle case.
Keeping the description as general as possible, we consider all the possible terms that can be added to the standard (linear) special relativistic sum law of energy-momenta at leading order in the deformation parameter ℓ. This is found by requiring that the total charges close the same algebra (<ref>) as the single particle energy and momenta (this ensures the relativistic properties of the composition law) and that no deformations terms which involve only one particle charge are present, so that we recover the definition of single particle charge when the charges of the second particle are zero <cit.>. Moreover, we require the same conditions of analyticity, dimensional consistency, and “vectorial properties" adopted for the algebra deformation in Sec. <ref>.
The most general composition laws complying with these requirements is given by the following:
E_tot=E_1+E_2+ℓ((2σ-β-a-b) P_1P_2+(c-γ+σ) H (N_1P_2+P_1N_2)-α E_1E_2+2(c-2μ)H^2N_1N_2)
P_tot=P_1+P_2+ℓ((σ-b) E_1P_2+(σ-a) E_2P_1+cH(N_1E_2+E_1N_2))
N_tot=N_1+N_2+ℓ(aE_1N_2+bE_2N_1) .
Notice that three additional parameters (a,b,c), that didn't appear in (<ref>), are allowed.
While all possibilities contemplated by our Eq.(<ref>) deserve being investigated, we feel that priority should be given to scenarios in which
the addition law of particle energies remains undeformed.
This is suggested by experience <cit.>
with the implications of these modified addition laws
in which one finds that preserving the linearity of addition of energies is advantageous from the point of view of the interpretation of the results.
Moreover this requirement finds further motivation in scenarios where the DSR framework can be associated with a quantum group deformation of de Sitter symmetries, where the summation law of the charges/generators corresponds to a “coproduct rule" of the Hopf-algebra generators. In that case an undeformed summation law of energies would correspond to a “primitive coproduct" for energy/time-translation generators, that is necessary for having a “time-like" q-deformation of de Sitter symmetries <cit.>.
The requirement for the composition of energy to be undeformed imposes the following constraints between the kinematical parameters:
α=0 γ-σ=2μ ,
amounting to η_2=0 in (<ref>).
The expression for time delay in the deformed FLRW scenario then becomes
Δ t=Δ E/M_Pl∫_0^zdz̅(1+z̅)/H(z̅)[η_1+η_3(1-(1-H(z̅)/(1+z̅)∫_0^z̅dz'/H(z'))^4)] ,
in which, again, only two independent parameters appear.
§.§ A one-parameter scenario: curvature-induced and undeformed addition of energy
Combining the requirements of sections <ref> and <ref> we obtain a scenario that has only one free numerical parameter to be determined by experiments, η_3, and is characterized by undeformed composition law of energies and a curvature-induced time delay effect.
The resulting formula for the time delay is
Δ t= η_3 Δ E/M_Pl∫_0^zdz̅(1+z̅)/H(z̅)[1-(1-H(z̅)/(1+z̅)∫_0^z̅dz'/H(z'))^4] .
In Fig. <ref> we compare the redshift dependence described by this formula to the one of the Jacob-Piran ansatz (<ref>).
§.§ Alternative picture with the time delay changing sign
It is rather noteworthy that in our one-parameter scenario, which is curvature induced and compatible with undeformed addition of energy,
the time delay changes sign at high redshift.
So far all the scenarios motivated in the literature gave rise to monotonic dependence of the time delay on redshift, and it is interesting that our one-parameter scenario, with its appealing theoretical qualities, is not monotonic. This led us also to investigate how frequently in our 3-dimensional parameter space such changes of the sign of the time delay occur and what sort of functional dependence on redshift is then found in such cases. We found that cases in which the time delay changes sign are not at all exceptional, and a variety of forms of dependence on redshift can be found.
As an illustrative example we focused on the case of effects which are
curvature induced (η_1=0) and with η_2=4,η_3=-3. With this choice, Eq. (<ref>) becomes
Δ t = Δ E/M_Pl∫_0^zdz̅(1+z̅)/H(z̅)[4(1-(1-H(z̅)/(1+z̅)∫_0^z̅dz'/H(z'))^2)-3(1-(1-H(z̅)/(1+z̅)∫_0^z̅dz'/H(z'))^4)] .
As shown in FIG.<ref>,
in this scenario the redshift dependence starts off at small redshifts with opposite sign with respect to the Jacob-Piran ansatz, but then for redshift greater than 1
(up to redshift of about 4.5) approximates reasonably well (oscillating around it) the Jacob-Piran ansatz. It would therefore be a valuable aspect of maturity of this phenomenology when the quality of data at high redshift will prove to be sufficient for discriminating between this scenario and the Jacob-Piran ansatz.
§ CONCLUSIONS
We have derived the most general formula that describes the leading order time delays (assuming linear dependence on the particle energy) for ultra-relativistic particles propagating in an FLRW expanding spacetime with deformed (DSR) relativistic symmetries.
We found that the requirement of relativistic consistency of the DSR scenario allows for only three possible independent forms of redshift dependence (see Eq. (<ref>)).
This is completely different with respect to LIV scenarios, where relativistic symmetries are broken and the lack of relativistic constraints allows in principle any possible form of redshift dependence in the time-delay formula.
Considering the smallness of the Planck length and the rather poor quality of presently-obtainable data, even the exploration of the small three-parameter space of Eq. (<ref>) is a big challenge for phenomenological studies, and initially it might be necessary to focus on some specific choices of our three parameters.
We highlighted in Sec. <ref> some choices of the three parameters which can be motivated by theoretical arguments based on the possible requirement that
the quantum gravity effects are “curvature induced", so that the time-delay vanishes when the spacetime curvature/expansion is negligible, and the possible requirement that the total energy of a multi-particle system should be obtained with a standard linear law of addition of particle energies.
In particular, we found that combining these two possible requirements one specifies completely the redshift dependence of the effects (sec. <ref>).
We believe that a valuable first target for phenomenology would be to discriminate between this particular form of DSR-allowed redshift dependence and the redshift dependence of the Jacob-Piran ansatz.
§ ACKNOWLEDGEMENTS
G.A.-C., D.F. and G.G. acknowledge financial support by the Programme STAR Plus, funded by Federico II University and Compagnia di San Paolo, and by the MIUR, PRIN 2017 grant 20179ZF5KS. G.R.’s work on this project was supported by the National Science Centre grant
2019/33/B/ST2/00050. G.G and G.R. thank Perimeter Institute for hospitality in May 2023, where the final stages of project were completed. Research at Perimeter Institute for Theoretical Physics is supported in part by the Government of Canada through NSERC and by the Province of Ontario through MRI. This work contributes to the European Union COST Action CA18108 Quantum gravity phenomenology in the multi-messenger approach.
50
GACreview
G. Amelino-Camelia,
“Quantum-Spacetime Phenomenology,”
Living Rev. Rel. 16 (2013), 5
[arXiv:0806.0339 [gr-qc]].
MattinglyReview
D. Mattingly,
“Modern tests of Lorentz invariance,”
Living Rev. Rel. 8 (2005), 5
[arXiv:gr-qc/0502097 [gr-qc]].
COSTReview
A. Addazi, J. Alvarez-Muniz, R. Alves Batista, G. Amelino-Camelia, V. Antonelli, M. Arzano, M. Asorey, J. L. Atteia, S. Bahamonde and F. Bajardi, et al.
“Quantum gravity phenomenology at the dawn of the multi-messenger era—A review,”
Prog. Part. Nucl. Phys. 125 (2022), 103948
[arXiv:2111.05659 [hep-ph]].
GRBnature
G. Amelino-Camelia, J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos and S. Sarkar,
“Tests of quantum gravity from observations of gamma-ray bursts,”
Nature 393 (1998), 763-765
[arXiv:astro-ph/9712103 [astro-ph]].
Alfaro:1999wd
J. Alfaro, H. A. Morales-Tecotl and L. F. Urrutia,
Phys. Rev. Lett. 84 (2000), 2318-2321
[arXiv:gr-qc/9909079 [gr-qc]].
Gambini:1998it
R. Gambini and J. Pullin,
Phys. Rev. D 59 (1999), 124021
[arXiv:gr-qc/9809038 [gr-qc]].
Amelino-Camelia:2000stu
G. Amelino-Camelia,
“Relativity in space-times with short distance structure governed by an observer independent (Planckian) length scale,”
Int. J. Mod. Phys. D 11 (2002), 35-60
[arXiv:gr-qc/0012051 [gr-qc]].
Magueijo:2002am
J. Magueijo and L. Smolin,
Phys. Rev. D 67 (2003), 044017
[arXiv:gr-qc/0207085 [gr-qc]].
Kowalski-Glikman:2002iba
J. Kowalski-Glikman and S. Nowak,
Phys. Lett. B 539 (2002), 126-132
[arXiv:hep-th/0203040 [hep-th]].
JacobPiran
U. Jacob and T. Piran,
“Lorentz-violation-induced arrival delays of cosmological particles,”
JCAP 01 (2008), 031
[arXiv:0712.2170 [astro-ph]].
EllisMavroFRWdelay
J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos and A. S. Sakharov,
“Quantum-gravity analysis of gamma-ray bursts using wavelets,”
Astron. Astrophys. 402 (2003), 409-424
[arXiv:astro-ph/0210124 [astro-ph]].
Planck:2018vyg
Aghanim N. et al.
Planck 2018 results. VI. Cosmological parameters.
Astron. Astrophys. 641, A6 (2020),
[erratum: Astron. Astrophys. 652, C4 (2021)].
[arXiv:1807.06209 [astro-ph.CO]].
DSRFRW
G. Rosati, G. Amelino-Camelia, A. Marciano and M. Matassa,
“Planck-scale-modified dispersion relations in FRW spacetime,”
Phys. Rev. D 92 (2015) no.12, 124042
[arXiv:1507.02056 [hep-th]].
Bolmont:2022yad
J. Bolmont, S. Caroff, M. Gaug, A. Gent, A. Jacholkowska, D. Kerszberg, C. Levy, T. Lin, M. Martinez and L. Nogués, et al.
“First Combined Study on Lorentz Invariance Violation from Observations of Energy-dependent Time Delays from Multiple-type Gamma-Ray Sources. I. Motivation, Method Description, and Validation through Simulations of H.E.S.S., MAGIC, and VERITAS Data Sets,”
Astrophys. J. 930 (2022) no.1, 75
[arXiv:2201.02087 [astro-ph.HE]].
DSRdS
G. Amelino-Camelia, A. Marciano, M. Matassa and G. Rosati,
“Deformed Lorentz symmetry and relative locality in a curved/expanding spacetime,”
Phys. Rev. D 86 (2012), 124035
[arXiv:1206.5315 [hep-th]].
Marciano:2010gq
A. Marciano, G. Amelino-Camelia, N. R. Bruno, G. Gubitosi, G. Mandanici and A. Melchiorri,
“Interplay between curvature and Planck-scale effects in astrophysics and cosmology,”
JCAP 06 (2010), 030
[arXiv:1004.1110 [gr-qc]].
Barcaroli:2015eqe
L. Barcaroli and G. Gubitosi,
“Kinematics of particles with quantum-de Sitter-inspired symmetries,”
Phys. Rev. D 93 (2016) no.12, 124063
[arXiv:1512.03462 [gr-qc]].
GACSmolinqdeSitter
G. Amelino-Camelia, L. Smolin and A. Starodubtsev,
“Quantum symmetry, the cosmological constant and Planck scale phenomenology,”
Class. Quant. Grav. 21 (2004), 3095-3110
[arXiv:hep-th/0306134 [hep-th]].
taming
G. Amelino-Camelia, M. Matassa, F. Mercati and G. Rosati,
“Taming Nonlocality in Theories with Planck-Scale Deformed Lorentz Symmetry,”
Phys. Rev. Lett. 106 (2011), 071301
[arXiv:1006.2126 [gr-qc]].
kbob
G. Amelino-Camelia, N. Loret and G. Rosati,
“Speed of particles and a relativity of locality in κ-Minkowski quantum spacetime,”
Phys. Lett. B 700 (2011), 150-156
[arXiv:1102.4637 [hep-th]].
Mignemi:2019yzn
S. Mignemi and G. Rosati,
“Physical velocity of particles in relativistic curved momentum space,”
Mod. Phys. Lett. A 35 (2020) no.22, 2050180
[arXiv:1909.09173 [gr-qc]].
principle
G. Amelino-Camelia, L. Freidel, J. Kowalski-Glikman and L. Smolin,
“The principle of relative locality,”
Phys. Rev. D 84 (2011), 084010
[arXiv:1101.0931 [hep-th]].
Amelino-Camelia:2013uya
G. Amelino-Camelia, L. Barcaroli, G. Gubitosi and N. Loret,
“Dual redshift on Planck-scale-curved momentum spaces,”
Class. Quant. Grav. 30 (2013), 235002
[arXiv:1305.5062 [gr-qc]].
aschieri09
P. Aschieri, A. Borowiec and A. Pacho,
“Dispersion relations in κ-noncommutative cosmology,”
JCAP 04 (2021), 025
[arXiv:2009.01051 [gr-qc]].
bianchiRovelli
E. Bianchi and C. Rovelli,
“A Note on the geometrical interpretation of quantum groups and non-commutative spaces in gravity,”
Phys. Rev. D 84 (2011), 027502.
curvInducedLIV
G. Amelino-Camelia, G. Rosati and S. Bedić,
“Phenomenology of curvature-induced quantum-gravity effects,”
Phys. Lett. B 820 (2021), 136595
[arXiv:2012.07790 [gr-qc]].
Amelino-Camelia:2011gae
G. Amelino-Camelia,
“On the fate of Lorentz symmetry in relative-locality momentum spaces,”
Phys. Rev. D 85 (2012), 084034
[arXiv:1110.5081 [hep-th]].
Amelino-Camelia:2013sba
G. Amelino-Camelia, G. Gubitosi and G. Palmisano,
“Pathways to relativistic curved momentum spaces: de Sitter case study,”
Int. J. Mod. Phys. D 25 (2016) no.02, 1650027
[arXiv:1307.7988 [gr-qc]].
Amelino-Camelia:2023rkg
G. Amelino-Camelia, G. Fabiano and D. Frattulillo,
“Total momentum and other Noether charges for particles interacting in a quantum spacetime,”
[arXiv:2302.08569 [hep-th]].
Ball19943D
A. Ballesteros, F. J. Herranz, M. A. del Olmo and M. Santander,
“Quantum (2 + 1) kinematical algebras: a global approach”
J. Math. Phys. 27 (1994) 1283
Ballesteros:2014kaa
A. Ballesteros, F. J. Herranz, C. Meusburger and P. Naranjo,
“Twisted (2+1) κ-AdS Algebra, Drinfel'd Doubles and Non-Commutative Spacetimes,”
SIGMA 10 (2014), 052
[arXiv:1403.4773 [math-ph]].
jack3Dgravity
G. Rosati,
“κ–de Sitter and κ-Poincaré symmetries emerging from Chern-Simons (2+1)D gravity with a cosmological constant,”
Phys. Rev. D 96 (2017) no.6, 066027
[arXiv:1706.02868 [hep-th]].
Ballesteros:2017pdw
A. Ballesteros, G. Gubitosi, I. Gutiérrez-Sagredo and F. J. Herranz,
“Curved momentum spaces from quantum (anti–)de Sitter groups in ( 3+1 ) dimensions,”
Phys. Rev. D 97 (2018) no.10, 106024
[arXiv:1711.05050 [hep-th]].
|
http://arxiv.org/abs/2307.04478v1 | 20230710110110 | A closed form exact formulation of the spectral representation of a second-order symmetric tensor and of its derivatives | [
"Andrea Panteghini"
] | cs.CE | [
"cs.CE",
"cs.NA",
"math.NA"
] |
The spectral decomposition of a symmetric, second-order tensor is widely adopted in many fields of Computational Mechanics. As an example, in elasto-plasticity under large strain and rotations, given the Cauchy deformation tensor, it is a fundamental step to compute the logarithmic strain tensor.
Recently, this approach has been also adopted in small-strain isotropic plasticity to reconstruct the stress tensor as a function of its eigenvalues, allowing the formulation of predictor-corrector return algorithms in the invariants space. These algorithms not only reduce the number of unknowns at the constitutive level, but also allow the correct handling of stress states in which the plastic normals are undefined, thus ensuring a better convergence with respect to the standard approach.
While the eigenvalues of a symmetric, second-order tensor can be simply computed as a function of the tensor invariants, the computation of its eigenbasis can be more difficult, especially when two or more eigenvalues are coincident.
Moreover, when a Newton–Raphson algorithm is adopted to solve nonlinear problems in Computational Mechanics, the tensorial derivatives of the eigenbasis, whose computation is even more complicated, are also required to assemble the tangent matrix.
A simple and comprehensive method is presented which can be adopted to compute a closed-form spectral representation of a second-order symmetric tensor, as well as its derivatives with respect to the tensor itself, allowing a simpler implementation of the spectral decomposition in Computational Mechanics applications.
§ INTRODUCTION
This paper presents important developments regarding the eigenvalues and eigenvectors of a symmetric second-order tensor and the determination of the associated basis required for its spectral representation.
The results here presented apply to situations involving isotropic scalar-valued functions and isotropic tensor-valued functions of a symmetric second-order tensor.
For instance, the finding of this article are useful for the integration of constitutive laws of isotropic materials and in finite deformations (e.g., to compute the the logarithmic strain tensor from the displacement gradient).
The numerical integration of isotropic elasto-plastic constitutive laws can be more efficiently carried out by formulating the return algorithms in terms of eigenvalues of the elastic strain tensor (e.g. Borja et al. <cit.> and de Souza Neto et al. <cit.>), or in the elastic strain invariants space <cit.>, <cit.>. Differently from the standard approach <cit.>, an invariant-based return algorithm allows the correct handling of stress states in which the plastic normals are undefined.
These two integration algorithms require the spectral representation of the stress, as well as the determination of its derivatives to assemble the stiffness matrix. Unfortunately, their determination using the approach described in the literature is very cumbersome (see e.g., De Souza Neto et al. <cit.>, Borja et al. <cit.>), particularly when two or three eigenvalues coincide. This key aspect makes these invariant-based integration algorithms, although more efficient, less attractive than standard return algorithms formulated in terms of full tensorial components.
About the applications in large strain theories, to avoid the complexity of the standard procedure, commercial codes (e.g. SIMULIA Abaqus <cit.>) often employ approximate formulations to numerically integrate the logarithmic strain in finite deformation analyses. Some Authors suggest, for specific isotropic functions, to resort to their numerical approximation based on series expansion (e.g. Ortiz et at. <cit.>, de Souza Neto <cit.>, Hudobivnik et al. <cit.>).
However, it should be noted that these series-based procedures, even if simpler and numerically efficient, can be hardly adopted when the isotropic functions are not known explicitly (i.e., for instance, in the case of the integration of the isotropic elastoplastic materials described above).
The writer has later discovered that Ogden <cit.> incidentally describes, in an exercise contained in his book, a very important result which, to the best of his knowledge, seems to have been missed by the vast majority of the research community. He suggests a very simple method for retrieving a closed-form expression for the basis of the spectral decomposition of a second-order tensor which does not require the computation of the originating eigenvectors.
This result has later been reported also by Miehe <cit.>, who however states that " the formulation above is restricted to the case of distinct eigenvalues of the tensor". Moreover the same Author <cit.> points out that such an approach requires the inversion of the second-order tensor, which severely restricts the applicability of the method. De Souza Neto et al. <cit.> describe a very cumbersome method to evaluate both the basis and their spin. They also state that "...a methodology similar to that adopted here was introduced by Miehe (1993, 1998a), where a particularly compact representation for the function derivative is used. However, the compact representation allows only the computation of the derivative at invertible arguments and cannot be used...".
In this paper it is mathematically shown that indeed the basis required for the spectral representation of a symmetric second-order tensor can be derived without the computationally expensive evaluation of the associated eigenvectors. It is also shown that this can also be directly derived from the secular (or characteristic) equation of the tensor, without any assumptions about the invertibility of the second-order tensor. Most importantly it is clarified how the result can be particularized to the case of two and three coinciding eigenvalues, hence removing the strong limitation of the approach described by Miehe <cit.>, <cit.> which de facto prevents the application of this extremely useful result. This paper also provides the tensor derivatives of the basis, i.e. its spin.
Moreover, a simple and general approach is presented to compute the spectral representation of isotropic tensor-valued functions, as well as their derivatives with respect to the tensor variable itself. The proposed procedures can be practically adopted in computational mechanics, since all limitations of the procedures available in the literature have been removed (the approach of De Souza Neto et al. <cit.> does not have such limitations but is laborious to implement). Finally, two applications are presented: isotropic elasto-plasticity and the evaluation of the logarithmic strain tensor in finite deformations.
§ EIGENVALUES, EIGENVECTORS AND SPECTRAL REPRESENTATION OF A SYMMETRIC, SECOND-ORDER TENSOR
Given the symmetric, second-order tensor T, its (ordered) eigenvalues λ_i and the corresponding eigenvectors n_i are obtained
by solving the eigenvalue–eigenvector problem <cit.>:
( T − λ I ) n = 0, n^T n = 1
where I is the second-order identity tensor.
The principal components λ_i can be obtained by solving the third-order scalar equation in λ, namely the secular equation:
λ^3 -I_1 λ^2 + I_2 λ - I_3 = 0
The coefficients
I_1 = tr T
I_2 = 1/2 ( I_1^2 − T : T^T )
I_3 = det( T )
are the invariants of T, since their values do not depend on the reference system in which T is expressed. The three ordered solutions of Eq. (<ref>) are the eigenvalues of the problem described in Eq. (<ref>). As explained in <cit.>, they can be computed in closed form as:
λ_I=I_1/3+2/√(3)√(J_2)sin(θ+ 2/3π)
λ_II=I_1/3+2/√(3)√(J_2)sin(θ)
λ_III=I_1/3+2/√(3)√(J_2)sin(θ- 2/3π)
where
J_2 = 1/2 t : t
J_3 = det( t )
are the invariants of the second-order, deviatoric symmetric tensor t = T − (I_1/3) I, and the Lode's angle θ is defined as
θ = 1/3arcsin( - √(27)/2J_3/√(J_2^3))
where -π/6 ≤θ≤π/6.
It is well known that the second-order symmetric tensor T can be expressed as a function of its eigenvalues λ_i and the corresponding eigenvectors n_i by resorting to the spectral theorem[
Let consider that, unless otherwise specified, it is always intended
∑ f_i = ∑_i=I,II,III f_i
]:
T = ∑λ_i n_i ⊗ n_i = ∑λ_i N_i
where N_i is the eigenbasis of T related to λ_i.
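As a quick numerical illustration of the formulas above (not part of the original derivation; the function name and the use of NumPy are assumptions of this sketch), the eigenvalues of a symmetric tensor can be evaluated from the invariants and the Lode angle, and checked against a standard library routine:

```python
import numpy as np

def principal_values(T):
    """Ordered eigenvalues of a symmetric 3x3 tensor T from its invariants."""
    I = np.eye(3)
    I1 = np.trace(T)
    t = T - I1 / 3.0 * I                      # deviatoric part
    J2 = 0.5 * np.tensordot(t, t)             # J2 = (1/2) t:t
    J3 = np.linalg.det(t)
    # Lode angle; clip protects against round-off outside [-1, 1]
    arg = np.clip(-0.5 * np.sqrt(27.0) * J3 / np.sqrt(J2**3), -1.0, 1.0)
    theta = np.arcsin(arg) / 3.0
    lam = I1 / 3.0 + 2.0 / np.sqrt(3.0) * np.sqrt(J2) * np.sin(
        theta + np.array([2.0, 0.0, -2.0]) * np.pi / 3.0)
    return lam, I1, J2, theta

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
T = 0.5 * (A + A.T)                           # generic symmetric tensor
lam, *_ = principal_values(T)
print(np.allclose(np.sort(lam)[::-1], np.linalg.eigvalsh(T)[::-1]))  # True
```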
§ CLOSED-FORM EXPRESSION FOR THE EIGENBASIS OF T
We will consider three cases, as a function of the multiplicity of the eigenvalues λ_i:
* λ_I> λ_II >λ_III
* λ_I> λ_II=λ_III or
λ_I= λ_II>λ_III
* λ_I= λ_II=λ_III
Note that the number of non-coincident eigenvalues, i.e., the eigenvalue multiplicity, can be simply determined from the invariants of T. Hence,
case (i) occurs when J_2 ≠ 0 and θ ≠ ±π/6, case (ii) implies J_2 ≠ 0 and θ = ±π/6, and finally case (iii) requires that J_2 = 0 (while θ is undefined).
§.§.§ A general property of the eigenbasis N_i.
We will initially prove that it results:
∑N_i = I
Let us note that the i-th eigenvalue and eigenvector of T satisfy Eq. (<ref>), i.e.
T n_i = λ_i n_i
Since n_i is a unit vector, it results
n_i^T T n_i = T : ( n_i ⊗ n_i ) = λ_i ( n_i^T n_i ) = λ_i
so that one can compute the first invariant I_1 in the principal coordinate system as
I_1 = tr T = T : I = ∑λ_i = T : ∑ N_i
From this equation it must result
T : I = T : ∑ N_i
This condition yields
∑N_i = I
§.§.§ Case (i): λ_I> λ_II >λ_III.
We will prove that the spectral theorem
T = ∑λ_i ( n_i ⊗ n_i ) = ∑λ_i N_i
can be written as
T = ∑λ_i ∂λ_i/∂T
i.e., we will prove that it simply results[It should be noted that, to the best of the Author's knowledge, this result appears for the first time, without any demonstration or explanation, in Ogden's book <cit.>.
It has been used by Miehe <cit.>, <cit.>, but, as explained in the Introduction, due to the limitations of his approach, it seems it is not commonly adopted in Computational Mechanics.
]:
N_i = ∂λ_i/∂T
By considering the symmetry of T, the derivatives of the invariants I_1, I_2 and I_3, defined by Eqs. (<ref>), (<ref>) and (<ref>), with respect to T are:
∂I_1/∂T = I
∂I_2/∂T = I_1 I − T
∂I_3/∂T = I_3 T^-1 = adj(T)
where adj(T) denotes the adjugate matrix of T.
By substituting the property (<ref>) and the spectral theorem (<ref>) into Eq. (<ref>) and (<ref>) respectively, one obtains:
∂I_1/∂T = ∑ N_i
∂I_2/∂T = I_1 I − ∑ λ_i N_i
Finally, by resorting to the spectral theorem (<ref>), one can write (<ref>) as
[
Let us observe that, by multiplying Eq. (<ref>) by adj(T) = I_3 T^-1 one obtains
I_3 n = λ I_3 T^-1 n
which gives
adj(T) n = (I_3/λ) n
Hence, the eigenvectors n of T and adj(T) are coincident, whilst the i-th eigenvalue μ_i of adj(T) associated to n_i can be computed from λ_i as:
μ_i = I_3/λ_i = ( λ_j λ_k )_i≠j≠k
The spectral representation of adj(T) is then:
adj(T) = ∑( λ_j λ_k N_i )_i≠j≠k
]
∂I_3/∂T = ∑( λ_j λ_k N_i )_i≠j≠k
Let consider now that the value of I_1, I_2 and I_3 are independent with respect to the reference systems, hence one can compute them also in terms of principal components. It result:
I_1= λ_I+λ_II+λ_III
I_2= λ_Iλ_II+ λ_Iλ_III +λ_IIλ_III
I_3 = λ_Iλ_IIλ_III
The derivatives of the invariants I_1, I_2 and I_3 can also be computed by differentiating these last three expressions, observing that λ_i = λ_i(T). It results:
∂I_1/∂T = ∂λ_I/∂T + ∂λ_II/∂T + ∂λ_III/∂T = ∑ ∂λ_i/∂T
∂I_2/∂T = I_1 ∑ ∂λ_i/∂T − ∑ λ_i ∂λ_i/∂T = I_1 I − ∑ λ_i ∂λ_i/∂T
∂I_3/∂T = λ_II λ_III ∂λ_I/∂T + λ_I λ_III ∂λ_II/∂T + λ_I λ_II ∂λ_III/∂T = ∑( λ_j λ_k ∂λ_i/∂T )_i≠j≠k
One can now compute the eigenbasis N_i as a function of the derivatives ∂λ_i/∂T by solving the linear system of equations obtained by equating Eqs. (<ref>), (<ref>) and (<ref>) with Eqs. (<ref>), (<ref>) and (<ref>), respectively.
One obtains
{ ∑ N_i = ∑ ∂λ_i/∂T
∑ λ_i N_i = ∑ λ_i ∂λ_i/∂T
∑( λ_j λ_k N_i )_i≠j≠k = ∑( λ_j λ_k ∂λ_i/∂T )_i≠j≠k .
which, under the assumption λ_I> λ_II >λ_III[
Let observe that the determinant of the matrix of the system (<ref>) reads:
[ 1 1 1; λ_I λ_II λ_III; λ_IIλ_III λ_Iλ_III λ_Iλ_II ]
= - (λ_I-λ_II)(λ_I-λ_III)(λ_II-λ_III)
It is always nonzero if λ_I> λ_II >λ_III.
] simply gives
N_i = ∂λ_i/∂T
so that the spectral theorem (<ref>) can be re-written as:
T = ∑λ_i ( n_i ⊗ n_i ) = ∑λ_i ∂λ_i/∂T
§.§.§ Case (ii): λ_I> λ_II=λ_III or
λ_I= λ_II>λ_III.
If two or more eigenvalues of T coincide, then the linear system (<ref>) does not admit a unique solution.
Let λ̂ be the non-repeated eigenvalue of T and N̂ the corresponding eigenbasis.
The first invariant I_1 is equal to:
I_1 = λ̂+ 2 λ_II
so that, it results:
λ_II= 1/2(I_1 - λ̂)
Eq. (<ref>) can be rewritten as:
N̂ + 2 N_II = I
hence, it results:
N_II = 1/2(I -N̂)
The spectral theorem can be rewritten as:
T = λ̂ N̂ + 1/2 (I_1 − λ̂) (I − N̂)
= 3/2 (λ̂ − I_1/3) N̂ + 1/2 (I_1 − λ̂) I
Eq. (<ref>) can be further simplified by computing the deviatoric part l̂ of λ̂ as l̂=λ̂-I_1/3. One obtains
T = (I_1/3) I + 3/2 l̂ ( N̂ − (1/3) I )
This last equation clearly shows that, when two eigenvalues are coincident, the deviatoric part of N̂, defined as N̂^d = N̂ − I/3, is simply proportional to the deviatoric part t of the tensor T, i.e.
N̂^d = ( 1/(λ̂ − λ_II) ) t = ∓ (1/q) t for θ = ±π/6
where q=√(3 J_2).
This result is a consequence of the multiplicity of the deviatoric principal components.
When two eigenvalues of T coincide, the two coincident deviatoric principal components result to be
minus half of the (only) independent one, since their sum must vanish.
Eq. (<ref>) results to be the sum of two independent terms: the volumetric and the deviatoric parts. The basis of the volumetric part is obviously proportional to the identity tensor I, whilst that of the deviatoric part can only be proportional to the tensor itself.
T = (I_1/3) I + 3/2 l̂ N̂^d
It should be noted that, as in Case (i), it is still possible to demonstrate that
N̂ = ∂λ̂/∂T
To prove this result, let us compute the second invariant J_2 of the deviatoric tensor t as a function of the principal component λ̂:
J_2 = t : t / 2 = [ (λ̂ − I_1/3)^2 + 2 (λ_II − I_1/3)^2 ] / 2 = 3 (λ̂ − I_1/3)^2 / 4
By differentiating this expression with respect to T, one obtains
∂J_2/∂T = t = T − (I_1/3) I = 3/2 (λ̂ − I_1/3) ( ∂λ̂/∂T − (1/3) I )
so that, solving for T, one obtains:
T = 3/2 (λ̂ − I_1/3) ∂λ̂/∂T + 1/2 (I_1 − λ̂) I
By equating this last expression with Eq. (<ref>) and solving for N̂[This can be done under the condition λ̂ ≠ I_1/3 which, observing Eq. (<ref>), is equivalent to J_2 ≠ 0.]
one obtains:
N̂ = ∂λ̂/∂T
§.§.§ Case (iii): λ_I= λ_II=λ_III.
Finally, let us consider the case of three coincident eigenvalues λ = λ_I = λ_II = λ_III. The tensor T is purely volumetric in any reference system. Observing that l_i = 0 for all i and I_1 = 3λ, Eq. (<ref>) simply becomes
T = λ I
From Eq. (<ref>) it results
N_I=N_II=N_III=1/3I
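A minimal numerical sketch of the coincident-eigenvalue case (ii) may help fix ideas; it is illustrative only, with q = √(3 J_2) and the sign convention of the equation above. It builds a tensor with λ_II = λ_III in a generic frame and verifies that N̂ = I/3 + t/q is the eigenprojector of the non-repeated eigenvalue.

```python
import numpy as np

# Tensor with a repeated eigenvalue (lambda_II = lambda_III) in a generic frame
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
T = Q @ np.diag([5.0, 2.0, 2.0]) @ Q.T

I = np.eye(3)
I1 = np.trace(T)
t = T - I1 / 3.0 * I                          # deviatoric part
J2 = 0.5 * np.tensordot(t, t)
q = np.sqrt(3.0 * J2)
theta = np.arcsin(np.clip(-0.5 * np.sqrt(27.0) * np.linalg.det(t) / J2**1.5, -1, 1)) / 3.0

# Here theta = -pi/6 (the non-repeated eigenvalue is lambda_I), so N_hat^d = + t / q
N_hat = I / 3.0 + t / q
lam_hat = I1 / 3.0 + 2.0 / np.sqrt(3.0) * np.sqrt(J2) * np.sin(theta + 2.0 * np.pi / 3.0)

print(np.allclose(N_hat, np.outer(Q[:, 0], Q[:, 0])))   # eigenprojector of lambda_hat
print(np.allclose(T, I1 / 3.0 * I + 1.5 * (lam_hat - I1 / 3.0) * (N_hat - I / 3.0)))
```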
§ COMPUTATION OF THE EIGENBASIS DIRECTLY FROM THE SECULAR EQUATION
Since the three eigenbases are equal to the derivatives of the conjugate principal components with respect to the tensor T, one can determine them by simply differentiating Eqs. (<ref>) with respect to T. Using the chain rule, one obtains:
N_i = ∂λ_i/∂T = (1/3) I + (√3/3) ( ( sinβ_i/√(J_2) ) ∂J_2/∂T + 2 √(J_2) cosβ_i ∂θ/∂T )
where β_I= θ+2/3 π, β_II= θ, β_III= θ-2/3 π, and
[
It should be noted that Eq. (<ref>) requires the computation of t^-1. An expression more suitable for the implementation is
∂θ/∂T = −(1/cos 3θ) ( ( √3/(2 √(J_2^3)) ) ∂J_3/∂t + ( √3/(6 √(J_2)) ) I + ( sin 3θ/(2 J_2) ) t )
where ∂J_3/∂t is the cofactor (adjugate) matrix of the deviatoric tensor t:
∂J_3/∂t =
[ t_yy t_zz − t_yz^2 t_xz t_yz − t_xy t_zz t_xy t_yz − t_xz t_yy; t_xz t_yz − t_xy t_zz t_xx t_zz − t_xz^2 t_xy t_xz − t_yz t_xx; t_xy t_yz − t_xz t_yy t_xy t_xz − t_yz t_xx t_xx t_yy − t_xy^2 ]
so that the expression is undefined only for J_2 = 0 or θ = ±π/6.
]
∂J_2/∂T = t
∂θ/∂T = (1/cos 3θ) ( ( sin 3θ/3 ) t^-1 − ( √3/(6 √(J_2)) ) I − ( sin 3θ/(2 J_2) ) t )
The computation of the spin of the eigenbasis, i.e. ∂N_i/∂T, is even more laborious.
A more elegant and simpler approach can be obtained by working directly on the secular equation (<ref>). Each of the eigenvalues λ_i satisfies Eq. (<ref>), i.e.
f(T) = λ_i^3 − I_1 λ_i^2 + I_2 λ_i − I_3 = 0
hence, it must result
df = [ ( 3 λ_i^2 − 2 I_1 λ_i + I_2 ) ∂λ_i/∂T − λ_i^2 I + λ_i ( I_1 I − T ) − I_3 T^-1 ] : dT = 0 for every dT
This implies the condition:
( 3 λ_i^2 − 2 I_1 λ_i + I_2 ) ∂λ_i/∂T − λ_i^2 I + λ_i ( I_1 I − T ) − I_3 T^-1 = 0
The eigenbasis N_i can be obtained by simply solving this last equation for ∂λ_i/∂T. By observing that J_2 = (1/3) I_1^2 − I_2, after some simple algebraic manipulation, one obtains[
A very compact way to write this derivative is I_3 T^-1. However, it should be noted that it is not completely correct from a formal point of view, since it is undefined when I_3 = 0. The derivative ∂I_3/∂T is simply the adjugate matrix of T, which is always defined. In simpler words, I_3 = det(T) being a third-degree polynomial in the components T_ij, its derivative with respect to T is always defined. It results:
∂I_3/∂T = adj(T) =
[ T_yy T_zz − T_yz^2 T_xz T_yz − T_xy T_zz T_xy T_yz − T_xz T_yy; T_xz T_yz − T_xy T_zz T_xx T_zz − T_xz^2 T_xy T_xz − T_yz T_xx; T_xy T_yz − T_xz T_yy T_xy T_xz − T_yz T_xx T_xx T_yy − T_xy^2 ]
Eq. (<ref>) becomes
N_i = ∂λ_i/∂T = ( λ_i [ (λ_i − I_1) I + T ] + adj(T) ) / ( J_2 ( 4 sin^2 β_i − 1 ) )
]:
N_i = ∂λ_i/∂T = ( λ_i [ (λ_i − I_1) I + T ] + I_3 T^-1 ) / ( J_2 ( 4 sin^2 β_i − 1 ) )
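A compact numerical sketch of this formula for distinct eigenvalues is given below (illustrative names, not from the paper; the adjugate is evaluated through the Cayley–Hamilton identity adj(T) = T^2 − I_1 T + I_2 I so that no inversion of T is needed). It checks that the N_i sum to the identity and reproduce the spectral theorem.

```python
import numpy as np

def eigenbasis(T):
    """Closed-form eigenbasis N_i = d(lambda_i)/dT of a symmetric 3x3 tensor
    with distinct eigenvalues; no eigenvectors are computed."""
    I = np.eye(3)
    I1 = np.trace(T)
    I2 = 0.5 * (I1**2 - np.tensordot(T, T))
    t = T - I1 / 3.0 * I
    J2 = 0.5 * np.tensordot(t, t)
    theta = np.arcsin(np.clip(-0.5 * np.sqrt(27.0) * np.linalg.det(t) / J2**1.5, -1, 1)) / 3.0
    beta = theta + np.array([2.0, 0.0, -2.0]) * np.pi / 3.0
    lam = I1 / 3.0 + 2.0 / np.sqrt(3.0) * np.sqrt(J2) * np.sin(beta)
    adjT = T @ T - I1 * T + I2 * I            # Cayley-Hamilton: adj(T) = T^2 - I1*T + I2*I
    N = [(lam[i] * ((lam[i] - I1) * I + T) + adjT)
         / (J2 * (4.0 * np.sin(beta[i])**2 - 1.0)) for i in range(3)]
    return N, lam

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
T = 0.5 * (A + A.T)
N, lam = eigenbasis(T)
print(np.allclose(sum(N), np.eye(3)))                         # partition of identity
print(np.allclose(sum(l * Ni for l, Ni in zip(lam, N)), T))   # spectral theorem
```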
The spin of the eigenbasis can be obtained by differentiating Eq. (<ref>) with respect to the tensor T. One obtains
∂N_i/∂T = ∂^2 λ_i / (∂T ⊗ ∂T)
= 1/( J_2 ( 4 sin^2 β_i − 1 ) ) [ − 4 √(3 J_2) sinβ_i ( N_i ⊗ N_i )
+ ( 2 λ_i − I_1 ) ( N_i ⊗ I + I ⊗ N_i )
+ ( N_i ⊗ T + T ⊗ N_i )
+ λ_i ( ℐ − I ⊗ I )
+ ∂^2 I_3 / (∂T ⊗ ∂T) ]
where ℐ is the fourth-order identity tensor and
^2 I_3/⊗ = δ_jk_il+ _jkδ_il
where δ_ij denotes the Kronecker delta operator.
Let note that, even in the case of two coincident λ_i, the spin of the basis associated to the non-repeated eigenvalue λ̂ can still be computed using Eq. (<ref>). It is the only spin required to compute the derivative of Eq. (<ref>).
However, by exploiting the proportionality between the deviatoric part of the tensor and the basis itself, it can be simpler obtained by means of Eq. (<ref>).
As explained in the previous section, when all the eigenvalues coincide, the three eigenbasis N_i are simply equal to I/3. Their spin is not defined, but, as explained in the next section, it is still possible to evaluate the derivative of the spectral representation of the tensor when its invariants are isotropic functions.
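The exact spin above can also be cross-checked numerically: a central finite difference of the closed-form eigenbasis provides an independent, if approximate, estimate of ∂N_i/∂T. The sketch below is illustrative and assumes the eigenbasis routine of the previous sketch.

```python
import numpy as np

def eigenbasis_spin_fd(T, i, step=1e-6):
    """Central-difference estimate of dN_i/dT using the closed-form eigenbasis."""
    dN = np.zeros((3, 3, 3, 3))
    for k in range(3):
        for l in range(3):
            dT = np.zeros((3, 3))
            dT[k, l] += 0.5 * step
            dT[l, k] += 0.5 * step                 # keep the perturbation symmetric
            Np = eigenbasis(T + dT)[0][i]          # eigenbasis() from the previous sketch
            Nm = eigenbasis(T - dT)[0][i]
            dN[:, :, k, l] = (Np - Nm) / (2.0 * step)
    return dN

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
T = 0.5 * (A + A.T)
spin_fd = eigenbasis_spin_fd(T, 0)   # compare component-wise with the exact expression
```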
§ ISOTROPIC FUNCTIONS
In many mechanical applications it is a priori known that two second-order, symmetric tensors S and T share the same principal directions. Under these conditions, the two tensors are called co-axial.
These applications usually involve isotropic tensor functions, i.e., the invariants of the tensor T are function of the those of the tensor S.
In these applications, once the principal components η_i of the tensor S are computed as a function of those of T, say λ_i it is finally required to compute the Cartesian components of S.
Let S be a symmetric, second-order tensor, co-axial with T.
Let us assume that the eigenvalues η_i(λ_I, λ_II, λ_III) of S can be computed as functions of the eigenvalues λ_i of T.
Since S and T are co-axial, they share the same eigenbasis N_i and it results
S = ∑ η_i (λ_I, λ_II, λ_III) N_i
Since it results N_i ⊗N_j=0 for i j, the derivative of this expression with respect to the tensor T will be
S = ∑η_iλ_iN_i ⊗N_i + η_i
N_i
Let us consider the case in which two eigenvalues λ_i of T coincide.
As explained in the section above, under this condition the deviatoric part of S, say s, is proportional to the deviatoric part of T, say t. Hence, one can compute S as
S = (I_1S/3) I + (q_S/q_T) t
where I_1T=T is the first invariant of T, q_T=√(3/2t:t), I_1S=S=I_1S(I_1T,q_T) and q_S=√(3/2s:s)=q_S(I_1T,q_T).
Let now compute S. Since s and t are simply proportional, it must result
θ_S=θ_T
and then
θ_Sθ_T=1
Moreover, considering that
Eq. (<ref>) gives:
q_S(θ_S)= √(-27/2J_3S/sin (3 θ_S))
it results
q_Sθ_T=qθ_Sθ_Sθ_T
= 3√(4)/2J_3Scos (3 θ_S) √(sin^2 (3 θ_S))/√(J_3S^2)sin^2 (3 θ_S)= 0
θ_S=θ_T=±π/6
Analogously
I_1Sθ_T=I_1Sθ_Sθ_Sθ_T=0 ∀θ_T
Hence, observing that from Eq. (<ref>) it results that
q_TT = 3/2 q_Tt = ∓3/2N̂^d
θ_T=±π/6
by differentiating Eq. (<ref>) with respect to T one obtains:
S= 1/3I_1SI_1TI⊗I∓1/2I_1Sq_TI⊗N̂^d
+ q_S/q_T(ℐ - 1/3I⊗I)
+ 3/2(q_Sq_T
- q_S/q_T) N̂^d ⊗N̂^d
∓q_SI_1TN̂^d ⊗I θ_T = ±π/6
where ℐ is the fourth-order identity tensor.
Finally, when all the eigenvalues coincide, Eq. (<ref>) reduces to:
S= I_1S/3I
whilst it derivative can be computed by particularizing Eq. (<ref>).
By observing that when t→0, N_i →I/3, so that its deviatoric part N̂^d→0.
Observing that q_S → 0 when q_T → 0, using a Tayor expansion for q_T → 0, it will result:
q_S(I_1T, 0)≈q_Sq_T q_T
so that q_S/q_T →q_Sq_T, and finally:
S= 1/3I_1SI_1TI⊗I+ q_Sq_T(ℐ - 1/3I⊗I)
§ APPLICATIONS
§.§ Isotropic elastoplastic materials under small-strains and displacements
Let consider a generic elastoplastic isotropic material, in which the principal directions of
the elastic strains and of the stress coincides.
Let be s the deviatoric part of the Cauchy stress tensor σ, and
p=1/3σ
q=√(3/2s:s)
θ_σ=1/3arcsin( -27/2s/q^3)
the stress invariants, i.e.
the hydrostatic pressure, the equivalent von Mises stress, and the stress Lode's angle respectively.
In a general backward Euler integration scheme, let be ^* and Δ^p the elastic strain predictor and the plastic strain increment respectively.
The plastic strain increment can be computed as a function of an isotropic plastic potential g(p,q,θ_σ) as
Δ^p= g(p,q,θ_σ)σΔγ
where Δγ is the plastic multiplier. Since g(p,q,θ_σ) is an isotropic function of σ, its derivative respect to σ will be co-axial with the stress <cit.> <cit.>.
Then, since the elastic strain ^e is co-axial with σ for the assumption of isotropy, it results that also
^*=^e+Δ^p
is co-axial with σ. For these reasons, the principal directions of stress are a priori known, being coincident with those of the predictor ^*.
Let e^* the deviatoric part of the elastic predictor ^*, and
_v^*=^*
_q^*= √(2/3e^*:e^* )
θ^*_=1/3arcsin( - 4 e^*/_q^*3)
the its invariants, i.e. the volumetric strain predictor, the equivalent von Mises strain predictor, and the strain predictor Lode's angle.
In general, if a standard return algorithm in the full tensorial space is employed, numerical problems and convergence difficulties can arise when two or more eigenvalues coincide.
Instead, p, q, θ_σ can be more easily computed formulating a return algorithm in the invariants strain space <cit.>. Once p, q and θ_σ have been obtained as a function of the strain invariants predictor, it is necessary to compute the stress tensor σ.
If _q^* 0 and |θ^*_| π/6, one can compute the stress tensor from its invariants and from the eigenbasis N_i^* of the elastic strain predictor ^* by resorting to the
spectral theorem. It results
σ= ∑[p(_v^*,_q^*,θ^*_)
+ 2/3 q(_v^*,_q^*,θ^*_) sinβ_i (_v^*,_q^*,θ^*_)] N_i^*
where
β_I = θ_σ (_v^*,_q^*,θ^*_) + 2/3π
β_II = θ_σ(_v^*,_q^*,θ^*_)
β_III = θ_σ (_v^*,_q^*,θ^*_) - 2/3π
and N_i^* is computed from Eq. (<ref>) as a function of the invariants of ^* and its principal components.
The consistent jacobian matrix[It should be noted that this general approach has been recently adopted by the Author in <cit.>, while in his older work <cit.>, in order to avoid the computation of the spin of the eigenbasis, the spectral representation of the stress was computed as a function of the eigenvectors of the strain predictor, while jacobian matrix was obtained by means of a "simplified" procedure based on the inversion of a 6x6 matrix. Unfortunately, this procedure is model-specific and requires the smoothness in the deviatoric plane of the yield function and of the plastic potential.] can be computed from Eq. (<ref>) as
σ^*=∑[p+ 2/3 q sinβ_i ]
N_i^*^*
+
N_i^*
⊗{[
p_v^*
+2/3(q_v^*sinβ_i +
q θ_σ_v^*cosβ_i
)
]I.
.
+2/3_q^*[p_q^*
+2/3(q_q^*sinβ_i +
q θ_σ_q^*cosβ_i
) ]e^*
.
.
+[pθ^*_
+2/3(qθ^*_sinβ_i +
q θ_σθ^*_cosβ_i
) ]θ^*_^*}
where the eigenbasis spin N_i^*^* and θ^*_^* are computed as a function of the invariants and principal components of ^* from Eqs. (<ref>) and (<ref>) respectively.
If _q^* is not nil, at least two eigenvalues of the strain predictor ^* are distinct.
Specifically, if θ^*_=±π/6 two eigenvalues of ^* will be coincident. In this case, from Eq. (<ref>) it will result that e^* will be proportional to the deviatoric part of the eigenbasis associated to its non-repeated eigenvalue. Hence, from Eq. (<ref>) one simply obtains:
σ=p (_v^*,_q^*,θ^*_) I+2 /3 _q^* q (_v^*,_q^*,θ^*_) e^*
Also the eigenbasis of the deviatoric part of the plastic strain increment Δe^p and of the elastic strains will coincide with those of e^*, and then it will result:
Δ^p=^p_v/3I+_q^p/_q^*e^*
^e=^e_v/3+_q^e/_q^*e^*
The jacobian matrix can be obtained simplifying Eq. (<ref>) using Eq. (<ref>). It yields:
σ^*=
p_v^*I⊗I+ 2/3_q^*[p_q^*(I⊗e^*)
+q_v^*(e^* ⊗I)
.
.
+ 2/3 _q^*(q_q^* -q/_q^*) ( e^* ⊗e^*)
+ q ( ℐ-1/3I⊗I)
]
If _q^* is nil, the strain predictor ^* will be a volumetric tensor, since its spectral decomposition has the same structure of Eq. (<ref>). Moreover, _q^*=0 implies e^* = 0.
Since the material is isotropic, the eigenbasis of σ and ^* the same, resulting to be coincident with the second-order identity tensor I. Then, from Eq. (<ref>) it will result
σ= p (_v^*) I
The derivative of the eigenbasis is undefined. However, as explained in the section above, the Jacobian Matrix can be obtained as a limit case of Eq. (<ref>), i.e., using Eq. (<ref>).
Let observe that, under purely volumetric conditions, the convexity of the elastic potential requires <cit.>:
p_q^* =q_v^*=0
It results:
σ^*=
p_v^*I⊗I+ 2/3q_q^*( ℐ-1/3I⊗I)
§.§ Computation of logarithmic strain tensor from displacement gradient
In the framework of large strains and rotations, let p denotes the reference coordinate system.
Indicating with u( p) the vector function describing the displacement of each material point, it results that its final position will be (i.g. <cit.>)
x=p+ u( p)
The deformation gradient F is defined as
F= ∇_p x = I+∇_p u( p)
By applying the polar decomposition (i.g. <cit.>) to the deformation gradient F, one obtains:
F = VR
where the orthogonal tensor R describes the local rotation, whilst the symmetric positive definite tensor
V is the left stretch tensor, where
V^2 = B = FF^T
B being the left Cauchy-Green tensor.
The logarithmic strain tensor can be computed as:
ε = ln V = 1/2 ln B
i.e.,
ε = 1/2 ∑ ln( λ^B_i ) N^B_i
where λ^B_i and N^B_i are the i-th principal component and eigenbasis of the tensor B respectively.
The invariants of B, I_1B, J_2B
and θ_B can be computed using Eqs. (<ref>), (<ref>) and (<ref>), whilst the principal components λ^B_i can be obtained using Eqs. (<ref>).
If λ^B_i are distinct, i.e., if J_2B 0 and |θ_B| π/6, all the eigenbasis N^B_i of the left Cauchy-Green tensor can be computed as a function of its invariants and its principal components using Eq. (<ref>). The logarithmic strain tensor can be computed using Eq. (<ref>).
The jacobian matrix ∂ε/∂B can be computed by using Eq. (<ref>):
∂ε/∂B = 1/2 ∑ [ ln( λ^B_i ) ∂N^B_i/∂B + ( 1/λ^B_i ) N^B_i ⊗ N^B_i ]
where ∂N^B_i/∂B can be computed using Eq. (<ref>).
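A numerical sketch of this procedure for distinct principal stretches is given below (illustrative; it reuses the invariant-based eigenbasis routine of the previous sketches and uses SciPy's matrix logarithm only as a reference solution):

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(2)
F = np.eye(3) + 0.3 * rng.standard_normal((3, 3))   # deformation gradient
B = F @ F.T                                         # left Cauchy-Green tensor

N_B, lam_B = eigenbasis(B)                          # closed-form eigenbases (earlier sketch)
eps_log = 0.5 * sum(np.log(l) * N for l, N in zip(lam_B, N_B))   # logarithmic strain

print(np.allclose(eps_log, 0.5 * logm(B).real))     # agrees with the matrix logarithm
```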
When two principal components of B are coincident, i.e. if J_2B 0 and |θ_B| = π/6, one can compute by exploiting the proportionality between the deviatoric part b of B and e.
Let start by computing the invariants q_=√(3 J_2) and I_1 of as a function of q_B= √(3 J_2B) and I_1B. Let observe that it results
q_B = ±( λ^B_II-λ̂^B) θ_B = ±π/6
By solving this expression for λ^B_II one obtains
λ^B_II= λ̂^B ± q_B θ_B = ±π/6
Substituting this result into the definition of I_1B=λ̂^B + 2λ^B_II and solving for λ̂^B gives
λ̂^B= I_1B∓ 2 q_B/3 θ_B = ±π/6
By substituting this expression into Eq. (<ref>) one obtains
λ^B_II= I_1B± q_B/3 θ_B = ±π/6
One can now compute the invariants of as a function of those of B. It results:
I_1 =λ̂^+ 2 λ^_II = 1/2[
ln( I_1B∓ 2 q_B/3)+2ln( I_1B± q_B/3)
],
q_= ±( λ^_II-λ̂^) = ±1/2( lnλ^B_II - lnλ̂^B ) = ±1/2ln( I_1B± q_B/I_1B∓ 2 q_B)
θ_ = θ_B = ±π/6
The logarithmic strain tensor can be finally computed using Eq.(<ref>). It results:
= I_1/3I + q_/q_B b
Its derivative can be obtained by applying Eq. (<ref>). It results:
B= 1/3I_1I_1BI⊗I∓1/2I_1q_BI⊗N̂^d_B
+ q_/q_B(ℐ - 1/3I⊗I)
+ 3/2(q_q_B - q_/q_B) N̂^d_B ⊗N̂^d_B
∓q_I_1BN̂^d_B ⊗I θ_ = θ_B = ±π/6
where, from Eq. (<ref>):
N̂^d_B= ∓1/q_Bb θ_B = ±π/6
and, by computing the derivatives of Eq. (<ref>):
I_1I_1B=3(± q_B - I_1B)/(I_1B± q_B)(± 4 q_B -2 I_1B)
I_1q_B=3 q_B/(± 2 q_B-I_1B)(I_1B± q_B)
q_I_1B = 3 q_B/(I_1B± q_B)(± 4 q_B -2 I_1B)
q_q_B = - 3 I_1B/(I_1B± q_B)(± q_B -2 I_1B))
θ_ = θ_B = ±π/6
Finally, if J_2B = 0, then the logarithmic strain will be purely volumetric, and it will result λ_i^B = λ^B.
Eqs. (<ref>) become:
I_1 = 3/2ln( I_1B/3) = 3/2lnλ^B
q_= 0
By applying Eq. (<ref>) it will result:
=
1/2lnλ^B I
To compute the derivative of with respect to B, let start substituting Eqs. (<ref>) into Eqs. (<ref>). It results:
I_1I_1B=3/2 I_1B = 1/2 λ^B
I_1q_B=q_I_1B = 0
q_q_B = 3/2 I_1B = 1/2 λ^B
By substituting these expressions into Eq. (<ref>) one obtains:
B= 1/2λ^Bℐ
§ CONCLUSIONS
The spectral representation of a symmetric, second-order tensor is an important tool in
many applications of computational mechanics.
While the computation of the eigenvalues of a symmetric, second-order tensor is a relatively simple task, obtaining a closed-form expression for the eigenbasis is more complicated, especially when some eigenvalue is repeated. Moreover, in many computational mechanics applications, the derivative of the spectral representation is also required. The exact closed-form expressions available in the literature for both the eigenbasis and its derivative are quite hard to implement (see, e.g., <cit.>). For this reason, many Authors suggest resorting to series expansions, which however are available only for specific functions (see, e.g., <cit.>, <cit.>) or require automatic differentiation techniques for a generic function <cit.>.
These approximate techniques are hard to apply when the isotropic tensor-valued functions are not known explicitly, such as, for instance, in the numerical integration of elastoplastic isotropic constitutive laws formulated in the invariants space (<cit.>, <cit.>, <cit.>).
In this paper, starting from an incidental result reported by Ogden <cit.>, valid only in the case of non-coincident eigenvalues, an exact, simple and clear approach has been developed. Differently from the one described by Miehe <cit.>, <cit.>, no particular requirements on the invertibility of the tensor, or on the multiplicity of its eigenvalues, are necessary.
Two applications have been presented: (i) the computation of stress tensor and of the stiffness matrix in the case of the numerical integration of an elastoplastic isotropic material in the invariant stress space, and (ii) the calculation of the logarithmic strain tensor from the displacement gradient, as well as its derivative with respect to the left Cauchy-Green tensor.
plain
|
http://arxiv.org/abs/2307.04763v1 | 20230710175853 | On the total CR twist of transversal curves in the 3-sphere | [
"Emilio Musso",
"Lorenzo Nicolodi"
] | math.DG | [
"math.DG",
"53C50, 53C42, 53A10"
] |
On the total CR twist of
transversal curves in the 3-sphere
(E. Musso) Dipartimento di Scienze Matematiche, Politecnico di Torino,
Corso Duca degli Abruzzi 24, I-10129 Torino, Italy
[email protected]
(L. Nicolodi) Dipartimento di Scienze Matematiche, Fisiche e Informatiche, Università di Parma,
Parco Area delle Scienze 53/A, I-43124 Parma, Italy
[email protected]
Authors partially supported by
PRIN 2017 “Real and Complex Manifolds: Topology, Geometry and holomorphic dynamics”
(protocollo 2017JZ2SW5-004);
and by the GNSAGA of INdAM.
The present research was also partially
supported by MIUR grant “Dipartimenti di Eccellenza” 2018-2022, CUP: E11G18000350001, DISMA, Politecnico
di Torino
[2010]53C50; 53C42; 53A10
Dedicated to Peter Olver on the occasion of his 70th birthday
We investigate the total CR twist functional on transversal curves in the standard CR 3-sphere
S^3 ⊂ℂ^2.
The question of the
integration by quadratures of the critical curves
and the problem
of existence and properties of closed critical curves are addressed.
A procedure for the explicit integration
of general critical curves is provided and a characterization of closed curves
within a specific class of general critical curves
is given.
Experimental evidence of the existence of infinite countably many closed critical curves
is provided.
§ INTRODUCTION
The present paper finds its inspiration and theoretical framework in
the subjects of moving frames, differential invariants, and invariant variational problems,
three of the many research topics to which Peter Olver has made
lasting contributions.
Among the many publications of
Peter Olver dedicated to these subjects, we like to mention
<cit.>
as the ones that most influenced
our research activity.
0.1cm
More specifically, in this paper we further develop some of the themes
considered in <cit.>
concerning the Cauchy-Riemann (CR) geometry of transversal and Legendrian curves in the 3-sphere.
In three dimensions, a CR structure on a manifold is defined by an oriented contact distribution
equipped with a complex structure.
While the automorphism group of a contact manifold is infinite dimensional,
that of a CR threefold is finite dimensional and of dimension less or equal than eight <cit.>.
The maximally symmetric CR threefold is the 3-sphere S^3, realized as a real hyperquadric of ℂℙ^2
acted upon transitively by the Lie group G ≅SU(2,1).
This homogeneous model allows the application of
differential-geometric techniques to the study of transversal and Legendrian curves in S^3.
Since the seminal work of Bennequin <cit.>, the study of the topological properties of transversal and Legendrian
knots in 3-dimensional contact manifolds has been an important area of research
(see, for instance, <cit.> and the literature therein).
Another reason of interest for 3-dimensional contact geometry
comes from
its applications to neuroscience.
In fact, as shown by Hoffman <cit.>, the visual cortex can be modeled as a bundle equipped with a contact structure.
For more details, the interested reader is referred to the monograph
<cit.>.
Recently, the CR geometry of Legendrian and transversal curves in S^3 has also found
interesting applications in the framework of integrable systems <cit.>.
0.1cm
Let us begin by recalling some results from the CR geometry
of transversal curves in S^3.
According to <cit.>, away from CR inflection points,
a curve transversal to the contact distribution of S^3
can be parametrized by a natural pseudoconformal parameter s and in this parametrization it is
uniquely determined, up to CR automorphisms,
by two local CR invariants: the CR bending κ and the CR twist τ.
This was achieved by developing the method of moving frames and by constructing a canonical
frame field along generic[I.e., with no CR inflection points.] transversal curves.
Moreover, for closed transversal curves, we defined three discrete global invariants, namely,
the wave number, the CR spin, and the CR turning number.
Next, we investigated the total strain functional, defined
by integrating the strain element ds.
We proved that the corresponding critical curves have
constant bending and twist, and hence
arise as orbits of 1-parameter groups of CR automorphisms. Finally,
closed critical curves
are shown to be transversal positive torus knots with maximal Bennequin number.
0.1cm
In the present paper, we consider
the CR invariant variational problem for generic transversal curves in S^3 defined by the
total CR twist functional,
𝒲(γ) = ∫_γτ ds.
Our purpose is to address both the question of the explicit integration of critical curves and
the problem of existence and properties of closed critical curves of 𝒲.
0.1cm
We now give a brief outline of the content and results of this paper.
In Section <ref>, we shortly describe the standard CR
structure of the 3-sphere S^3,
viewed as a homogeneous space of the
group G, and collect some preliminary material. We then recall the basic facts
about the CR geometry of transversal curves in S^3 as
developed in <cit.> (see the description above).
Moreover, besides the already mentioned discrete global invariants for a closed transversal curve,
we introduce a fourth global invariant, the trace of the curve with respect to a spacelike line.
0.1cm
In Section <ref>,
we apply the method of moving frames
and the Griffiths approach to the calculus of variations <cit.> to compute the Euler–Lagrange equations
of the total CR twist functional.
We construct the momentum space of the corresponding variational problem
and find a Lax pair formulation for the Euler–Lagrange equations satisfied by the critical curves.
This is the content of Theorem <ref>, the first main result of the paper, whose proof occupies
the whole Section <ref>.
As a consequence of Theorem <ref>,
to each critical curve we associate a momentum operator, which is
a fixed element of
the G-module 𝔥
of traceless selfadjoint endomorphisms of ℂ^2,1.
From the conservation of the momentum along a critical curve,
we derive two conservation laws, involving two
real parameters c_1 and c_2.
The pair 𝐜=(c_1,c_2) is referred to as the modulus of the critical curve.
0.1cm
In Section <ref>,
we introduce the phase type of the modulus of a critical curve. We then define the phase curve
of a
given modulus and the associated notion of signature of a critical curve with that given modulus.
For a generic modulus 𝐜, the phase type of 𝐜 refers to the properties of the roots
of the quintic polynomial in principal form given by
P_𝐜(x)=x^5+3/2c_2x^2+27c_1x-27/2c_1^2 .
The phase curve of
the modulus 𝐜 is the real algebraic curve
defined by the equation y^2= P_𝐜(x).
The signature of a critical curve γ with modulus 𝐜 and nonconstant twist
provides a parametrization of the connected components of the
phase curve of 𝐜 by the twist of γ. Importantly, the periodicity of the twist of γ
amounts to the compactness of the image of the signature of γ.
This will play a role in Sections <ref>
and <ref>, where the closedness question for critical curves is addressed.
Using the Klein formulae for the icosahedral solutions of the quintic <cit.>,
the roots of P_𝐜
can be evaluated in terms of hypergeometric functions. As a byproduct, we show that the twist
and the bending of a
critical curve can be obtained by inverting incomplete hyperelliptic integrals of the first kind.
We further specialize our analysis by introducing the orbit type of the modulus 𝐜 of a
critical curve γ. The orbit type of 𝐜 refers to the spectral properties of the momentum associated to γ.
Depending on the phase type, the number of connected components of the phase curves,
and the orbit type,
the critical curves are then divided into twelve classes.
The critical curves of only three of these classes have periodic twist.
0.1cm
In Section <ref>,
we show that a general critical curve
(cf. Definition <ref>)
can be integrated by quadratures using the momentum of the curve. This is the content
of Theorem <ref>, the second main result of the paper.
Theorem <ref> is then specialized
to one of the twelve classes of critical curves, the class characterized by the compactness of the
connected component of the phase
curve and by the existence of three distinct real
eigenvalues of the momentum.
Theorem <ref>, the third main result,
shows that the critical curves of this specific class can be explicitly written by inverting hyperelliptic integrals
of the first and third kind.
We then examine the closure conditions and prove that a critical curve in this class is closed if and only if certain
complete hyperelliptic integrals depending on the modulus of the curve are rational.
Finally, the relations between these rational numbers and the global CR invariants mentioned above
are discussed.
0.1cm
In the last section, Section <ref>, we develop
convincing heuristic and numerical arguments to support the claim that there exist
infinite countably many distinct congruence classes of closed critical curves.
These curves are uniquely determined by the four discrete geometric invariants: the wave number,
the CR spin, the CR turning number, and the trace
with respect to the spacelike λ_1-eigenspace of the momentum.
Using numerical tools, we construct and illustrate explicit examples of approximately closed critical curves.
§ PRELIMINARIES
§.§ The standard CR structure on the 3-sphere
Let ^2,1 denote ^3 with the indefinite Hermitian scalar product of signature (2,1) given by
⟨𝐳,𝐰⟩
= ^t𝐳 𝐡 𝐰,
𝐡= (h_ij)=
[ 0 0 i; 0 1 0; -i 0 0 ].
Following common terminology in pseudo-Riemannian geometry, a nonzero vector z∈^2,1
is spacelike, timelike or lightlike, depending on whether ⟨ z, z⟩
is positive, negative or zero.
By 𝒩 we denote the nullcone, i.e., the set of all lightlike vectors.
0.1cm
Let 𝒮 = ℙ(𝒩) be the real hypersurface in ℂℙ^2 defined by
𝒮 = {[𝐳] ∈ ℂℙ^2 | ⟨𝐳,𝐳⟩
= i(z̄_1 z_3 − z̄_3 z_1) + z̄_2 z_2 = 0}.
The restriction of the affine chart
s : ℂ^2 ∋ (z_1,z_2) ⟼ [ ^t( (1+z_1)/2, i z_2/√2, i(1−z_1)/2 ) ] ∈ 𝒮 ⊂ ℂℙ^2
to the unit sphere S^3 of ℂ^2 defines a smooth diffeomorphism
between S^3 and 𝒮.
For each p = [𝐳] ∈ 𝒮, the differential (1,0)-form
ζ̃|_p =
− i ⟨𝐳, d𝐳⟩ / ( ^t𝐳̄ 𝐳 ) |_p ∈ Ω^1,0(ℂℙ^2)|_p
is well defined.
T(𝒮)|_p, namely the tangent space of 𝒮 at p.
Thus, the restriction of ζ̃ to T(𝒮) is a real-valued 1-form ζ∈Ω^1(𝒮).
Since the pullback of ζ by the diffeomorphism s: S^3→𝒮 is the standard contact
form i z· d z|_S^3 of S^3, then ζ is a contact
form whose contact distribution 𝒟 is, by construction, a complex subbundle
of T(^2)|_𝒮. Therefore, 𝒟 inherits
from T(^2)|_𝒮 a complex structure J. This defines a CR structure on 𝒮.
Let 𝐞_1, 𝐞_2, 𝐞_3 denote the standard basis of ℂ^3.
Consider P_0=[𝐞_1]
∈𝒮 and P_∞=[𝐞_3]
∈𝒮 as the origin
and the point at infinity of 𝒮. Then, 𝒮 ∖ {P_∞} can be identified with
Euclidean 3-space with its standard contact structure
dz - ydx + xdy
by means of the Heisenberg projection[This map is the analogue of the stereographic projection in Möbius (conformal) geometry.]
π_H : 𝒮 ∖ {P_∞} ∋ [𝐳] ⟼ ^t( Re(z_2/z_1), Im(z_2/z_1), Re(z_3/z_1) ) ∈ ℝ^3.
The inverse of the Heisenberg projection is the Heisenberg chart
j_H : ℝ^3 ∋ ^t(x, y, z) ⟼ [ ^t( 1, x+iy, z + (i/2)(x^2+y^2) ) ] ∈ 𝒮 ∖ {P_∞}.
The Heisenberg chart can be lifted to a map
whose image is a 3-dimensional closed subgroup ^̋3 of G,
which is isomorphic to the 3-dimensional Heisenberg group <cit.>.
0.1cm
Let G be the special pseudo-unitary group of (<ref>),
i.e., the 8-dimensional Lie group
of unimodular complex 3 × 3 matrices
preserving (<ref>),
G = { A ∈SL(3,ℂ) |^tA̅𝐡 A = 𝐡}≅SU(2,1),
and let 𝔤 denote the Lie algebra of G,
𝔤 = { X ∈𝔰𝔩(3,ℂ) |^tX̅𝐡 + 𝐡 X = 0}.
The Maurer–Cartan form of the group G takes the form
ϑ = A^-1dA =[ α_1^1 +i β_1^1 -i α_3^2 - β_3^2 α_3^1; α_1^2 +i β_1^2 -2i β_1^1 α_3^2 +iβ_3^2; α_1^3 i α_1^2 + β_1^2 -α_1^1 +i β_1^1; ],
where the 1-forms
(α_1^1 , β_1^1 , α_1^2 , β_1^2 ,α_1^3 , α_3^2 , β_3^2 , α_3^1)
form a basis of the dual Lie algebra 𝔤^*.
The center of G is Z = {ϖ I_3|ϖ∈ℂ, ϖ^3 =1}≅ℤ_3, where
I_3 denotes the 3× 3 identity matrix. Let [G] denote the quotient Lie
group G/Z and for A∈ G let [A] denote its equivalence class in [G].
Thus [A] = [B] if and only if B = ϖ A, for some cube root
of unity ϖ.
For any A∈ G, the column vectors (A_1, A_2, A_3) of A form a basis of ^2,1
satisfying ⟨ A_i, A_j ⟩ = h_ij and det(A_1, A_2, A_3) =1. Such a basis is
referred to as a
lightcone basis.
On the other hand, a basis (𝐮_1, 𝐮_2, 𝐮_3) of ^2,1,
such that det(𝐮_1, 𝐮_2, 𝐮_3) =1 and
⟨𝐮_i, 𝐮_j ⟩ = δ_ijϵ_j, where
ϵ_1 = -1, ϵ_2 = ϵ_3 = 1,
is referred to as a unimodular pseudo-unitary basis.
0.1cm
The group G acts transitively and almost effectively on the left of
by
A[ z] = [A z], ∀ A∈G, ∀ [ z] ∈.
This action descends to an effective action of [G]= G/Z on
.
It is a classical result of E. Cartan <cit.> that [G]
is the group of CR automorphisms
of .
If we choose [𝐞_1]=[ ^t(1,0,0)]∈𝒮 as an origin of 𝒮,
the natural projection
π_:G∋ A ↦ A[𝐞_1] =[A_1]∈
makes G into a (trivial) principal fiber bundle with structure group
G_0={ A∈G| A[𝐞_1]=[𝐞_1] }.
The elements of G_0 consist of all 3×3 unimodular matrices of the form
X(ρ,θ,v,r)=
[ ρ e^iθ -iρ e^-iθv̅ e^iθ(r-i/2ρ |v|^2); 0 e^-2iθ v; 0 0 ρ^-1e^iθ,; ],
where v ∈ ℂ, r ∈ ℝ, 0 ≤ θ < 2π, and ρ > 0.
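As a small numerical sanity check (not contained in the original text; the helper name is illustrative), one can verify that the matrices X(ρ, θ, v, r) are unimodular and preserve the Hermitian form above, i.e. that they indeed belong to G:

```python
import numpy as np

h = np.array([[0, 0, 1j],
              [0, 1, 0],
              [-1j, 0, 0]])

def X(rho, theta, v, r):
    """Element of the isotropy group G_0 fixing [e_1]."""
    e = np.exp(1j * theta)
    return np.array([
        [rho * e, -1j * rho * np.conj(v) / e, e * (r - 0.5j * rho * abs(v)**2)],
        [0, 1 / e**2, v],
        [0, 0, e / rho]])

A = X(1.7, 0.4, 0.3 - 0.2j, 1.1)
print(np.isclose(np.linalg.det(A), 1.0))      # unimodular
print(np.allclose(A.conj().T @ h @ A, h))     # preserves the Hermitian form
```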
The left-invariant 1-forms α_1^2, β_1^2, α_1^3 are linearly independent and generate
the semi-basic 1-forms for the projection π_𝒮 : G → 𝒮. So, if s : U ⊆ 𝒮 → G is
a local cross section of π_𝒮,
then (s^*α_1^3 , s^*α_1^2, s^* β_1^2 )
defines a coframe on U and s^*α_1^3 is a positive contact form.
§.§ Transversal curves
Let γ : J → 𝒮 be a smooth immersed curve. We say that γ is transversal (to the contact
distribution 𝒟) if the tangent vector γ'(t) ∉𝒟|_γ(t), for every t ∈ J.
The parametrization γ is said to be
positive if ζ(γ'(t) ) > 0, for every t and for every positive contact form compatible
with the CR structure.
From now on, we assume that the parametrization of a transversal curve is positive.
Let γ : J → 𝒮 be a smooth curve.
A lift of γ is a map Γ: J →𝒩 into the nullcone 𝒩⊂^2,1,
such that γ(t) = [Γ(t)], for every t ∈ J.
If Γ is a lift, any other lift is given by rΓ, where r is a smooth complex-valued function,
such that r(t)≠ 0, for every t ∈ J.
From the definition of the contact distribution, we have the following.
A parametrized curve γ : J → 𝒮 is transversal
and positively oriented if and only if
- i⟨Γ,Γ'⟩|_t > 0, for every t∈ J and for every lift Γ.
A frame field along γ : J → 𝒮 is a smooth map A : J → G
such that π_𝒮 ∘ A = γ. Since the fibration π_𝒮
is trivial, there exist frame fields along every transversal curve.
If A=(A_1,A_2,A_3) is a frame field along γ, A_1 is a lift of γ.
Let A be a frame field along γ. Then
A^-1A'=
[ a_1^1 +i b_1^1 -i a_3^2 - b_3^2 a_3^1; a_1^2 +i b_1^2 -2i b_1^1 a_3^2 +ib_3^2; a_1^3 i a_1^2 + b_1^2 -a_1^1 +i b_1^1 ],
where a_1^3 is a strictly positive real-valued function. Any other frame field along γ
is given by à = AX(ρ,θ,v,r), where
ρ (ρ>0), θ, r : J →ℝ, v=p+iq : J→ℂ are smooth functions
and X(ρ,θ,v,r):J →G_0 is as in (<ref>).
If we let
Ã^-1Ã'=
[ ã_1^1 +i b̃_1^1 -i ã_3^2 - b̃_3^2 ã_3^1; ã_1^2 +i b̃_1^2 -2i b̃_1^1 ã_3^2 +i b̃_3^2; ã_1^3 i ã_1^2 + b̃_1^2 -ã_1^1 +i b̃_1^1 ],
then
Ã^-1Ã'= X^ -1A^-1 A'X + X^ -1X',
which implies
ã_1^3= ρ^2 a_1^3,
ã_1^2 +i b̃_1^2 = ρ e^3iθ (a_1^2 +i b_1^2 ) - ρ^2 e^2iθ (p+iq) a_1^3.
From this it follows
that along any parametrized transversal curve there exists
a frame field A for which a_1^2 +i b_1^2=0.
Such a frame field
is said to be of first order.
Let Γ be a lift of a transversal curve γ : J → 𝒮.
If
det(Γ, Γ', Γ'')|_t_0 = 0,
for some t_0 ∈ J, then γ(t_0) is called a CR inflection point.
The notion of CR inflection point is independent of the lift Γ.
A transversal curve with no CR inflection points is said to be generic.
The notion of a CR inflection point is invariant under reparametrizations and
under the action of the group of CR automorphisms.
If A : J → G is a frame field along a transversal curve γ, then γ(t_0) is a CR inflection point
if det(A_1, A_1', A_1'')|_t_0 = 0. A transversal curve all of whose points are CR inflection points is
called a chain. The notion of chain on a CR manifold goes back to Cartan <cit.>
(see also <cit.> and the literature therein).
If γ is transversal and Γ is one of its lifts, then the complex plane [Γ∧Γ']_ t
is of type (1,1) and the set of null complex lines contained in [Γ∧Γ']_ t is a chain
which is independent of the choice of the lift Γ. This chain, denoted by 𝒞_γ |_t, is
called the osculating chain of γ at γ(t). By construction, 𝒞_γ |_t
is the unique chain passing through γ(t) and tangent to γ at the contact point γ(t).
For more details on the CR-geometry of transversal curves in the 3-sphere, we refer to <cit.>.
As a basic reference for transversal knots and their topological invariants in the framework of 3-dimensional
contact geometry, we refer to <cit.> and the literature therein.
§.§ The canonical frame and the local CR invariants
In the following, we will consider generic transversal curves.
Let γ be a generic transversal curve. A lift Γ of γ, such that
det(Γ, Γ', Γ'') = -1,
is said to be a Wilczynski lift (W-lift) of γ.
If Γ is a Wilczynski lift, any other is given by ϖΓ, where ϖ∈
is a cube root of unity.
The function
a_γ = i⟨Γ,Γ'⟩^-1
is smooth, real-valued, and independent of the choice of Γ.
We call a_γ the strain density of the parametrized transversal curve γ.
The linear differential form ds = a_γ dt is called the infinitesimal strain.
The strain density and the infinitesimal strain are invariant under the action of the CR transformation group.
In addition, if h : I → J is a change of parameter, then the infinitesimal strains ds and ds̃
of γ and γ̃=γ∘ h,
respectively, are related by ds̃ = h^*(ds).
This proof corrects a few misprints contained in the original one.
If A∈G and if Γ is a Wilczynski lift of γ, then Γ̂ = AΓ is a
Wilczynski lift of γ̂= Aγ. This implies that a_γ = a_γ̂.
Next, consider a reparametrization γ̃=γ∘ h of γ. Then, Γ^*=Γ∘ h
is a lift of γ̃, such that
det(Γ^*, (Γ^*)', (Γ^*)'') = -(h')^3.
This implies that Γ̃=(h')^ -1Γ^* is a Wilczynski lift of γ̃.
Hence
⟨Γ̃,(Γ̃)'⟩
=(h')^ -1⟨Γ,Γ'⟩∘ h.
Therefore, the strain densities of γ and γ̃ are related by
a_γ̃=h'a_γ∘ h.
Consequently, we have
h^*(d s)= h'a_γ∘ h dt= a_γ̃ dt =ds̃.
As a straightforward consequence of Proposition <ref>, we have the following.
A generic transversal
curve γ can be parametrized so that a_γ = 1.
If a_γ = 1, we say that γ : J → is a natural parametrization,
or a parametrization by the pseudoconformal strain or pseudoconformal parameter.
In the following, the natural parameter will be denoted by s.
We can state the following.
Let γ : J → 𝒮 be a generic transversal curve, parametrized by the natural parameter. There exists a
(first order) frame field
ℱ = (F_1, F_2, F_3) : J → G along γ,
such that F_1 is a W-lift and
ℱ^-1 ℱ' =
[ iκ -i τ; 0 -2iκ 1; 1 0 iκ; ]
= : K_κ,τ(s),
where κ, τ : J → ℝ are smooth functions, called the
CR bending
and the CR twist, respectively.
The frame field ℱ is called a Wilczynski frame.
If ℱ is a Wilczynski frame, any other is given by ϖℱ,
where ϖ is a cube root of unity.
Thus, there exists a unique frame
field [ℱ] : J → [G] along γ, called the canonical frame of γ.
Given two smooth functions κ, τ : J → ℝ, there exists a generic transversal
curve γ : J → 𝒮,
parametrized by the natural parameter,
whose bending is κ and whose twist is τ.
The curve γ is unique up to CR automorphisms of 𝒮.
(1) Let γ : J → 𝒮 be as above and ℱ = (F_1, F_2, F_3) : J → G be a Wilczynski frame
along γ. Then,
γ^# : J ∋ s ↦ [F_3(s)] ∈ 𝒮
is an immersed curve, called the dual of γ.
The dual curve is Legendrian (i.e., tangent to the contact distribution)
if and only if τ = 0. Thus, the twist can be viewed as a measure of how the dual curve differs
from being a Legendrian curve.
(2) Generic transversal curves with constant bending and twist have been studied by the authors in <cit.>.
In the following we will consider generic transversal curves with nonconstant CR invariant functions.
§.§ Discrete CR invariants of a closed transversal curve
Referring to <cit.>, we briefly recall some CR invariants for closed transversal curves, namely
the notions of wave number, CR spin, and CR turning number (or Maslov index). These invariants will be used in Sections <ref> and <ref>.
The wave number is the ratio between the least period ω_γ of γ and the least period ω of
the functions (κ,τ). The CR spin is the ratio between ω_γ and the least period
of a Wilczynski lift of γ.
The CR turning number is
the degree (winding number) of the map
F_1-iF_3: /ω_γ≅S^1 → = ℂ∖{0},
where ℱ=(F_1,F_2,F_3) is a Wilczynski frame along γ.
0.1cm
We will also make use of another invariant.
Let [ z]∈ℂℙ^2 be a spacelike line.
Denote by 𝙲_[ z] the chain of all null lines orthogonal to [ z], equipped with its positive orientation.
Consider a closed generic transversal curve γ with its positive orientation.
Since γ is closed and generic, the intersection of γ with 𝙲_[ z] is either a finite
set of points, or
the empty set.
The trace of γ with respect to [ z], denoted by
tr_[𝐳](γ), is the integer defined as follows: (1) if γ∩𝙲_[ z]≠∅,
then tr_[ z](γ) counts the number of intersection points of γ with 𝙲_[ z]
(since γ is not necessarily a simple curve, the intersection points are counted with their multiplicities); (2)
otherwise, tr_[ z](γ)= Lk(γ, 𝙲_[ z]), the linking number of
γ with 𝙲_[𝐳]. The trace of γ is a G-equivariant map,
that is, tr_[ z](γ) = tr_A[ z](Aγ), for every A∈ G.
§ THE TOTAL CR TWIST FUNCTIONAL
Let 𝔗 be the space of generic transversal curves in 𝒮, parametrized by the natural parameter.
We consider the total CR twist functional 𝒲 : 𝔗→ℝ, defined by
𝒲[γ] = ∫_J_γτ_γ η_γ ,
where J_γ is the domain of definition of the transversal curve γ, τ_γ
is its twist, and η_γ= ds_γ is the infinitesimal strain of γ (cf. Section <ref>).
A curve γ∈𝔗 is said to be a critical curve
in 𝒮 if it is a critical point of
𝒲 when one considers compactly
supported variations through generic transversal curves.
0.1cm
The main result of this section is the following.
Let γ: J → be a generic transversal curve parametrized by the natural parameter.
Then, γ
is a critical curve
if and only if
L' (s)=
[L (s), K_κ,τ(s)] ,
where
L = [ 0 i τ'+ 3(1 - τκ) 2 i τ; τ 0 τ'+ 3i(1 - τκ); 3i -i τ 0; ]
and K_κ,τ is defined as in (<ref>).
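For numerical experiments it is convenient to read this characterization as a residual: given sampled functions κ(s) and τ(s), one assembles K_κ,τ and L and measures how far L' − [L, K_κ,τ] is from zero; critical curves are exactly those for which this residual vanishes identically. A rough finite-difference sketch (illustrative only, not part of the paper) is:

```python
import numpy as np

def K(kappa, tau):
    return np.array([[1j * kappa, -1j, tau],
                     [0, -2j * kappa, 1],
                     [1, 0, 1j * kappa]])

def L(kappa, tau, dtau):
    return np.array([[0, 1j * dtau + 3 * (1 - tau * kappa), 2j * tau],
                     [tau, 0, dtau + 3j * (1 - tau * kappa)],
                     [3j, -1j * tau, 0]])

def euler_lagrange_residual(s, kappa, tau):
    """max over s of | L'(s) - [L(s), K(s)] | for sampled kappa(s), tau(s)."""
    dtau = np.gradient(tau, s)
    Ls = np.array([L(k, t, dt) for k, t, dt in zip(kappa, tau, dtau)])
    Ks = np.array([K(k, t) for k, t in zip(kappa, tau)])
    dL = np.gradient(Ls, s, axis=0)
    comm = Ls @ Ks - Ks @ Ls
    return np.abs(dL - comm).max()

s = np.linspace(0.0, 2.0 * np.pi, 2001)
print(euler_lagrange_residual(s, np.cos(s), 0.5 + 0.1 * np.sin(s)))  # generic data: nonzero
```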
The proof of Theorem <ref> is organized in four steps and three lemmas.
0.2cm
Step 1. We show that generic transversal curves are in 1-1 correspondence
with the integral curves of a suitable Pfaffian differential system.
0.1cm
Let γ : J → 𝒮 be a generic transversal curve parametrized by the natural parameter.
According to Proposition <ref>, the canonical frame of γ defines a unique lift
[ℱ] : J → [G].
The map
𝔣 : J∋ s⟼( [(s)], κ(s), τ(s))∈ [G]×^2
is referred to as the extended frame of γ.
The product space M := [G]×^2 is called the configuration space. The coordinates
on ^2 will be denoted by (κ, τ).
With some abuse of notation,
we use α_1^1, β_1^1, α_1^2, β_1^2, α_1^3,
α_3^2, β_3^2, α_3^1 to denote the entries of the Maurer–Cartan form of [G]
as well as their pull-backs on the configuration space M.
By Proposition <ref>, the extended frames of γ are the integral curves of the Pfaffian
differential system (𝒜, η) on M generated by the
linearly independent 1-forms
μ^1=α_1^2, μ^2=β_1^2 , μ^3= α_3^2- α^3_1,
μ^4=β_3^2 ,
μ^5=α_1^1 , μ^6=β_1^1-κα_1^3, μ^7 =α_3^1-τα_1^3,
with the independence condition η :=α_1^3.
If 𝔣 = ( [], κ, τ) : J → M is an integral curve of (𝒜, η), then
γ = [F_1] : J →𝒮 defines a generic transversal curve, such that [] is its canonical frame,
κ its bending and τ its twist.
Accordingly, the integral curves of (𝒜,η) are the extended frames of generic transversal curves
in 𝒮.
Thus, generic transversal curves are in 1-1 correspondence with the integral curves of the Pfaffian system
(𝒜,η) on the configuration space M.
If we put
π^1=dκ, π^2=dτ,
the 1-forms (η,μ^1,…,μ^7,π^1,π^2) define an absolute parallelism on M.
Exterior differentiation and use of the Maurer–Cartan equations of G yield the following structure equations
for the coframe (η,μ^1,,μ^7,π^1,π^2):
dη =2μ^1∧μ^2+2μ^5∧η,
dπ^1=dπ^2=0,
dμ^1=-μ^1∧μ^5+3μ^2∧μ^6+(3κμ^2-μ^3)∧η,
dμ^2=-3μ^1∧μ^6 - μ^2∧μ^3 - (3κμ^1+ μ^4)∧η,
dμ^3=-2μ^1∧μ^2-μ^1∧μ^7+μ^3∧μ^5+3μ^4∧μ^6
- (τμ^1- 3κμ^4+3μ^5)∧η,
dμ^4=-μ^2∧μ^7- 3μ^3∧μ^6+μ^4∧μ^5 - (τμ^2+ 3κμ^3-3μ^6)∧η,
dμ^5=-μ^1∧μ^4+μ^2∧μ^3 + (μ^2-μ^7)∧η,
dμ^6=-2κμ^1∧μ^2-μ^1∧μ^3-μ^2∧μ^4 - (μ^1+ 2κμ^5)∧η - π^1∧η,
dμ^7=-2τμ^1∧μ^2-2μ^3∧μ^4-2μ^5∧μ^7
+ (2μ^4- 2τμ^5)∧η - π^2∧η.
From the structure equations it follows that the derived flag of (𝒜,η) is given by
𝒜_(4)⊂𝒜_ (3) ⊂𝒜_ (2)⊂𝒜_ (1), where
𝒜_(4) = {0}, 𝒜_ (3) = span{μ^1}, 𝒜_ (2) = span{μ^1,μ^2,μ^3},
𝒜_ (1) = span{μ^1, μ^2, μ^3, μ^4, μ^5}.
Thus, all the derived systems of (𝒜,η) have constant rank. For the notion of derived flag, see <cit.>.
0.2cm
Step 2.
We develop a construction due to Griffiths <cit.> on an affine subbundle of T^*(M)
(cf. also <cit.>) in order to derive the Euler–Lagrange equations.
0.1cm
Let 𝒵⊂ T^*(M) be the affine subbundle defined by
the 1-forms μ^1,…,μ^7 and λ := τη, namely
𝒵=λ+span{μ^1,…,μ^7}⊂ T^*(M).
We call 𝒵 the phase space of the Pfaffian system (𝒜,η).
The 1-forms (μ^1,…,μ^7, λ) induce a global affine trivialization of 𝒵, which
may be identified with M ×ℝ^7 by the map
M×ℝ^7 ∋(([ℱ], κ, τ), p_1, …, p_7) ⟼
λ_|_([ℱ], κ, τ)
+ ∑_j=1^7 p_jμ^j_|_([ℱ], κ, τ)∈𝒵 ,
where p_1,…, p_7 are the fiber coordinates of the bundle map 𝒵→ M with respect to the trivialization.
Under this identification, the restriction to 𝒵 of the Liouville (canonical) 1-form of T^*(M) takes the form
ξ=τ η + ∑_j=1^7p_jμ^j .
Exterior differentiation and use of the quadratic equations (<ref>) and (<ref>) yield
dξ ≡π^2∧η
+ 2τμ^5 ∧η + ∑_j=1^7dp_j∧μ^j + p_1(3κμ^2-μ^3)∧η
- p_2(3κμ^1+μ^4)∧η - p_3(τμ^1-3κμ^4+3μ^5)∧η
- p_4(τμ^2+3κμ^3-3μ^6)∧η
+ p_5(μ^2-μ^7)∧η
- p_6(π^1+μ^1+ 2κμ^5)∧η - p_7(π^2-2μ^4+4τμ^5)∧η ,
where the sign `≡' denotes equality modulo the span of {μ^i∧μ^ j}_ i,j = 1,…,7 .
The Cartan system (𝒞(dξ), η) of the
2-form dξ is the Pfaffian system on 𝒵 generated by the 1-forms
{X ⌟ dξ| X ∈𝔛(𝒵) }⊂Ω^1(𝒵),
with independence condition η≠ 0.
By Step 1, generic transversal curves are in 1-1 correspondence with the integral curves of the
Pfaffian system (𝒜,η).
Let 𝔣 : J → M be the extended frame corresponding to the generic transversal curve
γ : J → parametrized by the natural parameter.
According to Griffiths approach to the calculus of variations (cf. <cit.>),
if the extended frame 𝔣
admits a lift y: J →𝒵 to the phase space
𝒵 which is an integral curve of the Cartan system (𝒞(dξ), η),
then γ is a critical curve of the total twist functional with respect to compactly supported variations.
As observed by
Bryant <cit.>, if all the derived systems
of (𝒜,η) are of
constant rank, as in the case under discussion (cf. Remark <ref>), then the converse is also true.
Hence all extremal trajectories arise as projections of integral curves of the Cartan
system (𝒞(dξ), η).
Next, we compute the Cartan system (𝒞(dξ), η).
Contracting the 2-form dξ with the vector fields of the tangent frame
(∂_η,∂_μ^1,…,∂_μ^7 ,
∂_π^1,∂_π^2,∂_ p_1,…,∂_ p_7)
on 𝒵, dual to the coframe (η, μ^1,…,μ^7,π^1,π^2,dp_1,…,dp_7),
yields the 1-forms
∂_ p_j⌟ dξ≡μ^j, j=1,…,7,
-∂_π^1⌟ dξ≡ p_6η = :π̇_1,
-∂_π^2⌟ dξ≡ (p_7 -1)η =:π̇_2,
-∂_η⌟ dξ≡ (1 - p_7)π^2 =:η̇,
-∂_μ^1⌟ dξ≡ dp_1 + (3κ p_2+τ p_3+p_6)η =: μ̇^1,
-∂_μ^2⌟ dξ≡ dp_2 - (3κ p_1-τ p_4+p_5)η =: μ̇^2,
-∂_μ^3⌟ dξ≡ dp_3 + (p_1+3κ p_4)η =: μ̇^3,
-∂_μ^4⌟ dξ≡ dp_4 + (p_2-3κ p_3-2p_7)η =:μ̇^4,
-∂_μ^5⌟ dξ≡ dp_5 - (2τ - 3p_3- 2κ p_6 - 4τ p_7)η=: μ̇^5,
-∂_μ^6⌟ dξ≡ dp_6-3p_4η = : μ̇^6,
-∂_μ^7⌟ dξ≡ dp_7 + p_5η =: μ̇^7.
We have proved the following.
The Cartan system (𝒞(dξ), η)
is the Pfaffian system on 𝒵≅ M×ℝ^7 generated by
the 1-forms
{μ^1,…,μ^7,π̇_1,π̇_2,η̇,μ̇^1,…,μ̇^7}
and with independence condition η≠ 0.
Now, the Cartan system (𝒞(dξ), η) is reducible, i.e., there exists a nonempty submanifold
𝒴⊆𝒵, called the reduced space, such that: (1) at each point of 𝒴
there exists an integral element
of (𝒞(dξ), η) tangent to 𝒴; (2) if 𝒳⊆𝒵 is any other
submanifold with the same property as 𝒴, then 𝒳⊆𝒴.
The reduced space 𝒴 is called the momentum space of
the variational problem. Moreover, the restriction of the Cartan system (𝒞(dξ), η) to 𝒴
is called the Euler–Lagrange system
of the variational problem, and will be denoted by (𝒥, η).
A basic result states that the Pfaffian systems (𝒞(dξ), η) and (𝒥, η) have
the same integral curves (cf. <cit.>).
The system (𝒥, η) can be constructed by an algorithmic procedure (cf. <cit.>).
The momentum space 𝒴 is the 11-dimensional submanifold of 𝒵 defined by the
equations
p_7 =1, p_6= p_5 = p_4=0 , p_3 = -2/3τ, p_2=2(1-τκ) .
The Euler–Lagrange system (𝒥, η) is the Pfaffian system
on 𝒴≅ M×ℝ, with independence
condition η≠ 0, generated by the 1 forms
μ^1_|𝒴, …, μ^7_|𝒴,
σ_1 = dp_1 + 6 κ (1-τκ)η -2/3τ^2η ,
σ_2 = -2τ dκ -2κ dτ -3κ p_1 η ,
σ_3 = - 2dτ + 3 p_1 η .
Let V_1(dξ) ↪ℙ(T(𝒵)) →𝒵 be the totality of 1-dimensional integral elements of
(𝒞(dξ),η). In view of (<ref>), we find that
V_1(dξ)_|_(([ℱ], κ, τ); p_1,…,p_7)≠∅ p_6 =0, p_7 =1.
Thus,
the image 𝒵_1 ⊂𝒵
of V_1(dξ) with respect to the natural projection V_1(dξ) →𝒵, is given by
𝒵_1= { (([ℱ], κ, τ); p_1,…,p_7) ∈𝒵| p_6 =0, p_7 =1 }.
Next, the restrictions of μ̇^6 and μ̇^7 to 𝒵_1 take the form μ̇^6=-3p_4η
and μ̇^7=p_5η.
Thus, the second
reduction
𝒵_2 is given by
𝒵_2= { (([ℱ], κ, τ); p_1,…,p_7) ∈𝒵_1 | p_4 = p_5 = 0 }.
Considering the restriction of μ̇^4 and μ̇^5 to 𝒵_2 yields the equations
p_2=2(1-τκ), p_3 = -2/3τ ,
which define the third reduction
𝒵_3. Now, the restriction 𝒞_3(dξ) to
𝒵_3 of the Cartan system 𝒞(dξ) is generated by the 1-forms μ^1, …, μ^7
and
σ_1 = dp_1 + 6 κ (1-τκ)η -2/3τ^2η ,
σ_2 = dp_2 -3κ p_1 η = -2τ dκ -2κ dτ -3κ p_1 η ,
σ_3 = - 2dτ + 3 p_1 η .
This implies that there exists an integral element of V_1(dξ) over each point of 𝒵_3,
i.e., V_1(dξ)_|p≠∅, for each p∈𝒵_3. Hence, 𝒴 := 𝒵_3
is the momentum space and (𝒥, η) := (𝒞_3(dξ), η) is the
reduced system of
(𝒞(dξ), η).
Step 3. We derive the Euler–Lagrange equations.
By the previous discussion, all the extremal trajectories of arise as
projections of the
integral curves of the Euler–Lagrange system. If y : J →𝒴 is an integral curve
of the Euler–Lagrange system (𝒥, η) and 𝚙𝚛 : 𝒴→
is the natural projection of 𝒴 onto , then γ= 𝚙𝚛∘ y : J →
is a critical curve of the total twist functional with respect to compactly supported variations.
We can prove the following.
A curve y: J →𝒴 is an integral curve of the Euler–Lagrange system (𝒥, η) if and only if
the bending κ and the twist τ of the transversal curve γ = 𝚙𝚛∘ y: J →
satisfy the equations
2κτ' + τκ' = 0 ,
τ” + 9 κ(1 -τκ) - τ^2 = 0 .
If y= (([ℱ], κ, τ); p_1) : J →𝒴 is an integral curve of the Euler–Lagrange system
(𝒥, η),
the projection γ = 𝚙𝚛∘ y is the smooth curve γ(s) = [F_1(s)],
where F_1 is the first column of ℱ.
The equations
μ^1= ⋯= μ^7 =0
together with the
independence condition η≠ 0 tell us that ([ℱ], κ, τ)
is an integral curve of the Pfaffian system (𝒜, η) on the configuration space M.
Hence γ is a generic transversal curve with bending κ, twist τ and ℱ
is a Wilczynski frame along γ.
Next, for the smooth functions κ, τ: J →ℝ, let κ', κ” and τ', τ”, etc.,
be defined by
dκ = κ' η, dκ' = κ”η, dτ = τ' η, dτ' = τ”η .
With reference to (<ref>), equation σ_3 = 0 implies
p_1 = 2/3τ' .
Further, σ_2=0 gives
2κτ' + τκ' = 0 .
Finally, equation σ_1 =0 yields
τ” + 9 κ(1 -τκ) - τ^2 = 0 .
Conversely, let γ : J →𝒮 be a generic transversal curve, parametrized by the natural
parameter, satisfying (<ref>) and (<ref>) and let
[ℱ] be its canonical frame.
Then,
y(s) = (([ℱ], κ, τ) ; 2/3τ' )
is, by construction, an integral curve of the Euler–Lagrange system (𝒥, η).
Step 4.
We eventually provide a Lax formulation for the Euler–Lagrange equations (cf. (<ref>) and (<ref>))
of a critical curve
γ: J →.
Using the Killing form of 𝔤, the dual Lie algebra 𝔤^* can be identified with 𝔥=i𝔤,
the G-module of traceless selfadjoint endomorphisms of ℂ^2,1.
Under this identification, the restriction to 𝒴 of the tautological 1-form ξ goes over
to an element of 𝔥 which originates the 𝔥-valued function L : J →𝔥
given by
L(s)=
[ 0 i τ'+ 3(1 - τκ) 2 i τ; τ 0 τ'+ 3i(1 - τκ); 3i -i τ 0; ].
A direct computation shows that the Euler–Lagrange equations (<ref>) and (<ref>)
of the critical curve
γ are satisfied if and only if
L' (s)= [L (s), K_κ,τ(s) ] ,
where K_κ,τ is given by (<ref>). This concludes the proof of Theorem <ref>.
As a consequence of Theorem <ref>, we have the following.
Let γ : J → be a generic transversal curve parametrized by the natural parameter.
Let [ℱ] : J → [G] be the canonical frame of γ and let L : J →𝔥
be as in (<ref>). If γ is a critical curve,
the Lax equation (<ref>)
implies that
ℱ(s) L(s) ℱ^-1(s)= 𝔐 , ∀ s∈ J.
where 𝔐 is a fixed element of
𝔥.
The element 𝔐∈𝔥 is called the momentum of
the critical curve
γ.
The characteristic polynomial of the momentum 𝔐 is
-x^3 -6κτ^2 x +54κτ -27κ^2 τ^2 +2τ^3 -3τ'^2 -27 .
The conservation of the momentum along γ
yields the two conservation laws
κτ^2 = c_1 ,
- 18 κτ + 9 κ^2 τ^2 -2/3τ^3 + τ'^2 = C_2 -9 ,
for real constants c_1 and C_2. We let c_2 := C_2-9.
Using this notation,
the (opposite of the) characteristic polynomial of the momentum is
Q(x)=x^3+6c_1x+(27+3c_2).
If c_1≠ 0, the twist and the bending are never zero and the conservation laws can be rewritten as
κ=c_1τ^-2, 3/2τ^2τ'^2=τ^5+3/2c_2τ^2+27c_1τ-27/2c_1^2.
If c_1=0, it can be easily proved that κ=0 and the second conservation law takes the form
τ'^2 =2/3τ^3+c_2.
The pair of real constants c=(c_1, c_2) is called the
modulus of the critical curve
γ.
For the application of Griffiths' approach to
other geometric variational problems, the reader is referred to
<cit.>.
§ THE CR TWIST OF A CRITICAL CURVE
§.§ Phase types
For c=(c_1,c_2)∈^2, we denote by P_ c
the quintic polynomial in principal form given by
P_ c(x)=x^5+3/2c_2x^2+27c_1x-27/2c_1^2
and by Q_ c the cubic polynomial given by
Q_ c(x)=x^3+6c_1x+(27+3c_2).
Excluding the case c=0, P_ c possesses at least a pair of complex conjugate roots.
We adopt the following terminology.
* c∈^2 is of phase type 𝒜 if P_ c has four complex roots a_j± ib_j, j=1,2, 0<b_1<b_2,
and a simple real root e_1;
* c∈^2 is of phase type ℬ if P_ c has two complex roots a ± ib,
b>0, and three simple real roots
e_1<e_2<e_3;
* c∈^2 is of phase type 𝒞 if P_ c has a multiple real root.
In the latter case, two possibilities may occur: (1) P_ c has a double real root and a simple real root; or
(2) P_ c has a real root of multiplicity 5.
By the same letters, we also denote the corresponding sets of moduli of
phase types 𝒜, ℬ, and 𝒞, respectively.
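In practice, the phase type of a given modulus can be read off numerically from the roots of P_ c. The following minimal sketch (Python with NumPy; the computations in this paper are carried out in Mathematica, so this is only an illustrative alternative, and the tolerance tol is an arbitrary numerical choice, not part of the definition) classifies a modulus accordingly.

```python
import numpy as np

def phase_type(c1, c2, tol=1e-9):
    """Classify c=(c1,c2) from the roots of
    P_c(x) = x^5 + (3/2) c2 x^2 + 27 c1 x - (27/2) c1^2."""
    roots = np.roots([1.0, 0.0, 0.0, 1.5 * c2, 27.0 * c1, -13.5 * c1**2])
    real = np.sort(roots[np.abs(roots.imag) < tol].real)
    if len(real) >= 2 and np.any(np.diff(real) < tol):
        return 'C'           # multiple real root (within tolerance)
    if len(real) == 1:
        return 'A'           # one simple real root, four complex roots
    if len(real) == 3:
        return 'B'           # three simple real roots, two complex roots
    return 'C'               # remaining degenerate cases (e.g. c = 0)
```

For instance, for the modulus c≈(-0.8284,-8.3494) used later in the examples, which lies in ℬ_1⊂ℬ, the function should return 'B'.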
Next, we give a more detailed description of the sets 𝒜, ℬ, and 𝒞.
To this end, we start by defining the separatrix curve. Let
(m,n) be the homogeneous coordinates of ℝℙ^1 and let [(m_*,n_*)] be the point of ℝℙ^1 such that 3m_*^3+6m_*^2n_*+4m_*n_*^2+2n_*^3=0 (i.e., m_*=1 and n_*≈-0.72212).
The separatrix curve Ξ⊂^2 is the image of the parametrized curve
ξ =(ξ_1,ξ_2) :ℝℙ^1∖{[(m_*,n_*)]}→^2, defined by
ξ_1([(m,n)]) =6√(2)mn^4/3(3m^2+2mn+n^2)^4/3/(3m^3+6m^2n+4mn^2+2n^3)^5/3,
ξ_2([(m,n)]) =-36n(3m^2+2mn+n^2)(4m^3+3m^2n+2mn^2+n^3)/(3m^3+6m^2n+4mn^2 + 2n^3)^2.
The map ξ is injective and Ξ has a cusp at
ξ([(1,1)])=(4/5(6/5)^2/3,-48/5). It is regular elsewhere. In addition,
Ξ has a horizontal inflection point at ξ([(0,1)])=(0,-9). Let J_ξ be the interval
(arctan(n_*),arctan(n_*)+π)≈ (-0.625418,2.51617). Then,
ξ:t∈ J_ξ→ξ(cos(t),sin(t))∈Ξ is another parametrization of Ξ.
The inflection point is ξ(π/2). The “negative part" Ξ_-=Ξ∩{ c∈^2 | c_1<0} of Ξ
is parametrized by the restriction of ξ to Ĵ_ξ=(π/2,π +arctan(n_*)).
The left picture of Figure <ref> reproduces the separatrix curve (in black); the negative part of the separatrix
curve is highlighted in dashed-yellow.
The cusp is the red point and the horizontal inflection point is coloured in green.
The (open) upper and lower domains bounded by the separatrix curve Ξ are denoted by ℳ_±.
In Figure <ref>,
the upper domain ℳ_+ is coloured in three orange tones: orange, dark-orange and light-orange;
the lower domain ℳ_- is coloured in two brown tones: light-brown and brown.
The polynomial P_ c has multiple roots if and only if c∈Ξ∪ Oy, has four complex roots
if and only if c∈ℳ_-∖ (Oy∩ℳ_- ), and has three distinct real roots if and only if
c∈ℳ_+∖ (Oy∩ℳ_+ ). Equivalently,
𝒜=ℳ_-∖ (Oy∩ℳ_- ), ℬ
=ℳ_+∖ (Oy∩ℳ_+ ), 𝒞=Ξ∪ Oy.
First, we prove the following claim.
Claim. P_ c has a double root a_3≠ 0 if and only if c belongs
to the separatrix curve minus the cusp.
Note that c_1≠ 0 (otherwise the double root would be 0). Let a_4 be the other simple real root and b_1+ib_2,
b_1-ib_2, b_2>0, be the two complex conjugate roots.
Since the sum of the roots of P_ c is zero, we have b_1=-1/2(2a_3+a_4).
Since the coefficient of x^3 is zero and b_2>0, we get b_2=√(2a_3^2+a_3a_4+3a_4^2/4).
Expanding (x-a_3)^2(x-a_4)(x-b_1-ib_2)(x-b_1+ib_2)
and comparing the coefficients of the
monomials x^n, n=1,…, 4, with the coefficients of P_ c we may write c_1 and
c_2 as functions of a_3 and a_4,
c_1 =1/27(3a_3^4+6a_3^3a_4+4a_3^2a_4^2+2a_3a_4^3),
c_2 =-2/3(4a_3^3+3a_3^2a_4+2a_3a_4^2+a_4^3).
In addition,
c_1^2=2/27(3a_3^4a_4+2a_3^3a_4^2+a_3^2a_4^3).
Taking into account that a_3≠ 0, it follows that (a_3,a_4) belongs to the algebraic curve
𝙲 (the black curve on the right picture in Figure <ref>) defined by the equation
54y(3x^2+2xy+y^2)-(3x^3+6x^2y+4xy^2+2y^3)^2=0.
Now, consider the line ℓ_m,n through the origin, with homogeneous coordinates (m,n), i.e., the line with parametric
equations p_m,n(t)=(mt,nt).
If (m,n)≠ (1,0) and 3m^3+6m^2n+4mn^2+2n^3≠ 0 (we are excluding the two red lines
on the right picture in Figure <ref>), ℓ_m,n
intersects 𝙲 when t=0 and t=t_m,n,
where
t_m,n=3√(2)√(n(3m^2+2mn+n^2))/√((3m^3+6m^2n+4mn^2+2n^3)^2).
If (m,n)= (1,0)
or 3m^3+6m^2n+4mn^2+2n^3= 0, ℓ_m,n intersects 𝙲 only at the origin
(see the right picture in Figure <ref>). Hence
β : [(m,n)]→ t_m,n· (m,n), [(m,n)]≠ [(1,0)],
3m^3+6m^2n+4mn^2+2n^3≠ 0, is a parametrization of 𝙲∖{(0,0)}.
Thus, using (<ref>), the map
[(m,n)]→ (c_1(β([(m,n])),c_2(β([(m,n]))∈^2
is a parametrization of the set of all c, c_1≠ 0, such that P_ c has multiple roots.
It is now a computational matter to check that
(c_1(β([(m,n])),c_2(β([(m,n]))=ξ([(m,n)]). This proves the claim. It also shows that P_ c
has multiple roots if and only if c∈Ξ∪ Oy.
To prove the other assertions, we begin by observing that the discriminant of the derived polynomial P'_ c
is negative. Hence
P'_ c has two distinct real roots and a pair of complex conjugate roots.
Denote by x'_ c and x”_ c the real roots of P'_ c,
ordered so that
x'_ c<x”_ c. Observe that x'_ c and x”_ c are differentiable functions of c.
Then, P_ c possesses three distinct real roots if and only if x'_ c· x”_ c<0,
one simple real root if and only if x'_ c· x”_ c>0,
and a multiple root if and only if x'_ c· x”_ c=0.
From the first part of the proof, the set of all c∈^2, such that P_ c has only simple roots
is the complement of Ξ∪ Oy. This set has five connected components:
ℳ_+' = { c∈ℳ_+∖ (Oy∩ℳ_+ ) | c_1<0},
ℳ_+” ={ c∈ℳ_+∖ (Oy∩ℳ_+) | c_1>0 and c_2>0},
ℳ_+”' ={ c∈ℳ_+∖ (Oy∩ℳ_+) | c_1>0 and c_2<0},
ℳ_-' ={ c∈ℳ_-∖ (Oy∩ℳ_-) | c_1<0},
ℳ_-” ={ c∈ℳ_-∖ (Oy∩ℳ_-) | c_1>0}.
Referring to
the left picture in Figure <ref>, ℳ_+' is the orange domain, ℳ_+” is the dark-orange domain,
ℳ_+”' is the light-orange domain, ℳ_-' is the light-brown domain, and ℳ_-” is the brown domain.
Consider the following points (the black points in Figure <ref>):
c_1=(-2,1)∈ℳ'_+, c_2=(1/6,8)∈ℳ_+”,
c_3=(1/6,-8)∈ℳ_+”',
c_4=(-6,-9)∈ℳ_-', c_5=(4,-9)∈ℳ_-”.
Using Klein's formulas for the icosahedral solution of a quintic polynomial in principal form (cf. <cit.>),[We used the
Trott and Adamchik code (cf. <cit.>) implementing Klein's formulas in the software Mathematica.] we find that the polynomials
P_ c_j, j=1,2,3, have three distinct real roots and that P_ c_j, j=4,5, have one real root.
The domain ℳ'_+ is connected and the function ℳ'_+ ∋ c↦ x'_ c· x”_ c is differentiable and nowhere zero. Since x'_ c_1· x”_ c_1<0, it follows that
c↦ x'_ c· x”_ c is strictly negative. Then, P_ c has three distinct real roots,
for every c∈ℳ'_+. Similarly, P_ c has three distinct real roots, for every
c∈ℳ”_+∪ℳ”'_+ and a unique real root for every
c∈ℳ'_-∪ℳ”_-. This concludes the proof.
The real roots of P_ c_1 are e_1=-2.44175<e_2=-0.9904<0<e_3=2.87645 and those of
P_ c_2 are e_1=-2.14118<e_2=-0.448099<0<e_3=0.0701938. Instead, the roots of P_ c_3 are
0<e_1=0.12498<e_2=0.250656<e_3=2.15383.
Since the product e_2( c) e_3( c) is a continuous function on the connected components
ℳ_+', ℳ_+”, and ℳ_+”', we deduce that the lowest roots of P_ c
are negative if
c∈ℳ_+'∪ℳ_+” and positive if c∈ℳ_+”'.
§.§ Phase curves and signatures
Let Σ_ c be the real algebraic curve defined by y^2= P_ c(x).
We call Σ_ c the phase curve of c.
If c∈𝒜∪ℬ, Σ_ c is a smooth real cycle of a hyperelliptic curve of genus 2.
If c∈𝒞, and c≠ 0, Σ_ c is a singular real cycle of an elliptic curve.
If c=0, Σ_ c is a singular rational curve.
The following facts can be easily verified:
* if c∈𝒜, Σ_ c is connected, unbounded, and intersects the Ox-axis at (e_1,0)
(see Figure <ref>);
* if c∈ℬ, Σ_ c has two smooth connected components, one is compact and the other
is unbounded. Let Σ'_ c be the compact connected component and Σ”_ c be the noncompact one.
Σ'_ c intersects the Ox-axis at (e_1,0) and (e_2,0), while Σ”_ c intersects
the Ox-axis at (e_3,0) (see Figure <ref>);
* if c∈𝒞 and c_1 ≠ 0 , Σ_ c has a smooth, unbounded connected
component Σ”_ c and an isolated singular point (e_1,0), where e_1=e_2 is the double real root
of P_ c(x).
The unbounded connected component intersects the Ox-axis at (e_3,0), where e_3 is the simple real root
of P_ c(x) (see Figure <ref>). If c_1=0 and c_2≠0, Σ_ c is connected, with
an ordinary double point (see Figure <ref>). If c=0, Σ_ c is connected with a cusp
at the origin (see Figure <ref>).
Let γ be a critical curve with nonconstant twist and modulus c. Let J_γ⊂ be
the maximal interval of definition of γ.
With reference to (<ref>), we adapt to our context the terminology used in <cit.>
and call
σ_γ : J_γ→^2, s↦(τ(s),√(3/2) τ(s)τ'(s))
the signature of γ.
From the Poincaré–Bendixson Theorem, it follows that the twist of γ is periodic if and only if
σ_γ( J_γ) is compact. Observing that σ_γ( J_γ) is
one of the 1-dimensional connected components of Σ_ c, we can conclude that the twist is a periodic function if and only if
c∈ℬ and σ_γ( J_γ)=Σ'_ c.
A critical curve γ with modulus c is said to be of type ℬ' if c∈ℬ
and σ_γ( J_γ)=Σ'_ c; it is said to be of type ℬ” if c∈ℬ
and σ_γ( J_γ)=Σ”_ c.
§.§ The twist of a critical curve
§.§.§ The twist of a critical curve of type 𝒜
Let γ be a critical curve of type 𝒜, i.e., with modulus c∈𝒜.
Then P_ c has a unique real root e_1. The polynomial P_ c(x) is positive if x>e_1
and is negative if x<e_1.
Since P_ c(0)=-27c_1^2/2<0, the root is positive.
Let ω_ c>0 be the improper hyperelliptic integral of the first kind defined by
ω_ c = √(3/2)∫_e_1^+∞τ dτ/√( P_ c(τ))>0.
The incomplete hyperelliptic integral
h_ c(τ) = √(3/2)∫_e_1^τu du/√( P_ c(u)), τ≥ e_1
is a strictly increasing diffeomorphism of [e_1,+∞) onto [0,ω_ c) (see Figure <ref>).
The twist is the unique even function
τ_ c:(-ω_ c,ω_ c)→, such that τ_ c=h_ c^-1 on [0,ω_ c).
The maximal domain of definition is J_ c=(-ω_ c,ω_ c).
τ_ c is strictly positive, with vertical asymptotes as s→∓ω_ c^± (see Figure <ref>).
Note that τ_ c is the solution of the Cauchy problem
τ”=τ^2-9c_1τ^-2(1-c_1τ^-1), τ(0)=e_1, τ'(0)=0.
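For numerical experiments, ω_ c and h_ c can be evaluated by adaptive quadrature. The sketch below (Python/SciPy, offered only as an illustration) assumes that the real root e_1 of P_ c has already been computed, e.g. with numpy.roots, and relies on quad to cope with the integrable square-root singularity at e_1 and with the τ^-3/2 decay of the integrand at infinity; no error control beyond quad's own estimate is attempted.

```python
import numpy as np
from scipy.integrate import quad

def P(t, c1, c2):
    # the quintic P_c in principal form
    return t**5 + 1.5 * c2 * t**2 + 27.0 * c1 * t - 13.5 * c1**2

def omega_A(c1, c2, e1):
    # improper hyperelliptic integral omega_c (phase type A)
    f = lambda t: np.sqrt(1.5) * t / np.sqrt(P(t, c1, c2))
    val, _ = quad(f, e1, np.inf, limit=200)
    return val

def h_A(tau, c1, c2, e1):
    # incomplete integral h_c(tau) for tau >= e1
    f = lambda t: np.sqrt(1.5) * t / np.sqrt(P(t, c1, c2))
    val, _ = quad(f, e1, tau, limit=200)
    return val
```

The twist on [0,ω_ c) can then be obtained either by inverting h_A numerically or, more conveniently, by integrating the Cauchy problem above.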
§.§.§ The twist of a critical curve of type ℬ'
Let e_1<e_2<e_3 be the simple real roots of P_ c. The highest root e_3 is positive.
The lower roots e_1 and e_2 are either both negative or both positive and P_ c is positive on (e_1,e_2).
Let ω_ c>0 be the complete hyperelliptic integral of the first kind
ω_ c = sign(e_1) √(3/2)∫_e_1^e_2τ dτ/√( P_ c(τ))>0.
Let h_ c be the incomplete hyperelliptic integrals of the first kind
h_ c(τ) = {[ √(3/2)∫_e_2^τu du/√( P_ c(u)), τ∈ [e_1,e_2], e_1<e_2<0,; √(3/2)∫_e_1^τu du/√( P_ c(u)), τ∈ [e_1,e_2], 0<e_1<e_2. ].
The function h_ c is a diffeomorphism of [e_1,e_2] onto [0,ω_ c], strictly decreasing if e_1<e_2<0 and strictly
increasing if 0<e_1<e_2 (see Figure <ref>). The twist τ_ c is the even periodic function with least period 2ω_ c, obtained by extending periodically the function
τ(s) = h_ c^-1(s) defined on [0,ω_ c] and on [-ω_ c,0], respectively.
∙ If e_1<e_2<0, then τ_ c is strictly negative with minimum value e_1 and maximum value e_2, attained, respectively, at s≡ω_ c mod 2ω_ c and at s≡ 0 mod 2ω_ c (see Figure <ref>).
∙ If 0<e_1<e_2, then τ_ c is strictly positive, with minimum value e_1 and maximum value e_2, attained, respectively, at s≡ 0 mod 2ω_ c and at s≡ω_ c mod 2ω_ c.
Observe that τ_ c is the solution of the Cauchy problem
(i) τ”=τ^2-9c_1τ^-2(1-c_1τ^-1), τ(0)=e_2, τ'(0)=0, if e_1<e_2<0,
(ii) τ”=τ^2-9c_1τ^-2(1-c_1τ^-1), τ(0)=e_1, τ'(0)=0, if 0<e_1<e_2.
§.§.§ The twist of a critical curve of type ℬ”
The twist of a critical curve of type ℬ” can be constructed as in the case of a critical curve of
type 𝒜.
More precisely, let e_3>0 be the highest real root of P_ c and ω_ c be the
improper hyperelliptic integral of the first kind given by
ω_ c = √(3/2)∫_e_3^+∞τ dτ/√( P_ c(τ))>0.
Let h_ c(τ) be the incomplete hyperelliptic integral
h_ c(τ) = √(3/2)∫_e_3^τu du/√( P_ c(u)), τ≥ e_3.
Then, h_ c is a strictly increasing diffeomorphism of [e_3,+∞) onto [0,ω_ c).
The twist is the unique even function
τ_ c:(-ω_ c,ω_ c)→, such that τ_ c=h_ c^-1 on
[0,ω_ c).
The maximal interval of definition of τ_ c is J_ c=(-ω_ c,ω_ c).
The function τ_ c is positive, with vertical asymptotes as
s→∓ω_ c^±, and
is the solution of the Cauchy problem
τ”=τ^2-9c_1τ^-2(1-c_1τ^-1), τ(0)=e_3, τ'(0)=0.
§.§.§ The twist of a critical curve of type 𝒞 with c_1≠ 0
The twist of a critical curve of type 𝒞,
with c_1≠ 0, can be constructed as for curves of types 𝒜 or ℬ”.
Let e_3>0 be simple real root of
P_ c and ω_ c be the improper elliptic integral of the first kind
ω_ c = √(3/2)∫_e_3^+∞τ dτ/√( P_ c(τ))>0.
Let h_ c(τ) be the incomplete elliptic integral
h_ c(τ) = √(3/2)∫_e_3^τu du/√( P_ c(u)), τ≥ e_3.
Then, h_ c is a strictly increasing diffeomorphism of [e_3,+∞) onto [0,ω_ c).
The twist τ_ c is the unique even function
τ_ c:(-ω_ c,ω_ c)→, such that τ_ c=h_ c^-1 on
[0,ω_ c).
The maximal interval of definition of τ_ c is J_ c=(-ω_ c,ω_ c).
The twist is positive, with vertical asymptotes as s→∓ω_ c^±. Note that τ_ c is the solution of the Cauchy problem
τ”=τ^2-9c_1τ^-2(1-c_1τ^-1), τ(0)=e_3, τ'(0)=0.
§.§.§ The twist of a critical curve with c_1= 0
If c_1=0, the bending vanishes identically and the twist is a solution of the second order ODE τ”-τ^2=0.
Then,
τ(s)=6℘(s+a | 0,g_3), g_3=-1/36 c_2, κ(s)=0,
where a is an unessential constant and ℘(-,g_2,g_3) is the Weierstrass function with invariants g_2, g_3.
§.§ Orbit types and the twelve classes of critical curves with nonconstant twist
The moduli of the critical curves can be classified depending on the properties of the eigenvalues of the momenta.
For 𝐜 = (c_1,c_2) ∈^2, let Δ_1( c)=-27(32c_1^3+9(9+c_2)^2) be the
discriminant of the cubic polynomial Q_ c (cf. (<ref>)).
We say that
c∈^2 is:
* of orbit type 1 (in symbols, c∈ OT_1) if Δ_1( c)>0;
the momentum of a critical curve with modulus c∈ OT_1 has three distinct real eigenvalues:
λ_1=-(λ_2+λ_3)<0<λ_2<λ_3.
*
of orbit type 2 (in symbols, c∈ OT_2) if Δ_1( c)<0;
the momentum of a critical curve
with modulus c∈ OT_2 has a real eigenvalue λ_1 and two complex conjugate roots: λ_2,
with positive imaginary part, and λ_3=λ_2.
* of orbit type 3 (in symbols, c∈ OT_3) if Δ_1( c)=0;
the momentum of a critical curve with modulus c∈ OT_3 has an eigenvalue with algebraic multiplicity
greater than one (>1).
Correspondingly, ^2 is partitioned into nine regions (see Figure <ref>):
𝒜_j=𝒜∩ OT_j, ℬ_j=ℬ∩ OT_j, 𝒞_j=𝒞∩ OT_j, j=1,2,3.
Let γ be a critical curve with modulus c and j∈{1,2,3}.
We say that γ is of type 𝒜_j if c∈𝒜_j;
of type ℬ'_j if c∈ℬ_j
and the image of its signature σ_γ is compact;
of type ℬ”_j if c∈ℬ_j and the image of σ_γ is unbounded;
and
of type 𝒞_j if c∈𝒞_j, j=1,2,3.
The only critical curves with periodic twist are those of the types ℬ'_j, j=1,2,3.
Consequently, critical curves of the other types cannot be closed.
ℬ_1 lies in the half-plane {(c_1,c_2) | c_1<0};
it is bounded below by Ξ'={ c∈Ξ| c_1< 0} and above by
Δ'={ c∈^2|Δ_1( c)=0, c_2> -9}.
The curves Ξ' and Δ' intersect each other tangentially
at c'=(c'_1,c'_2)≈ (-11.339754, 63.004420) (see Figure <ref>).
Thus, ℬ_1 has two connected components:
ℬ_1^-={ c∈ℬ_1 | c_1∈ (c'_1,0) }, ℬ_1^+={ c∈ℬ_1 | c_1 < c'_1 }.
Referring to Remark <ref>, Ξ' is parametrized by the restriction of ξ to the interval
Ĵ_ξ=(π/2,π +arctan(n_*)). Let t' be the point of
Ĵ_ξ such that ξ(t')= c', (t'≈ 2.3008). Put Ĵ_ξ^-=(π/2,t') and
Ĵ_ξ^+=(t',π +arctan(n_*)). The restriction of ξ to Ĵ_ξ^- is a parametrization of
Ξ_-={ c∈Ξ'| c_1∈ (c'_1,0)} and the restriction to Ĵ_ξ^+ is a parametrization of
Ξ_+={ c∈Ξ'| c_1<c'_1}.
Consequently, ℬ_1^± are parametrized by
ψ_± : K_±∋ (t,s)⟼(ξ(t)-p(t))s+p(t),
where K_± are the rectangles Ĵ_ξ^±× (0,1) and
p(t)=(ξ_1(t),1/3(4√(-2 ξ_1(t)^3)-27)).
§ INTEGRABILITY BY QUADRATURES
§.§ Integrability by quadratures of general critical curves
Let Δ_2 be the polynomial
Δ_2( c)=9c_1^3(c_1^3+216)+6c_1^3c_2(c_2+36)+(c_2+9)(c_2+18)^3.
A critical curve γ with modulus c is said to be general if Δ_1( c)Δ_2( c)≠ 0.
Since Δ_1( c)≠ 0, the momentum 𝔐_γ of a general critical curve γ has
three distinct eigenvalues λ_1, λ_2, λ_3, sorted as in Definition <ref>.
Let J be the maximal interval of definition of the twist (it can be computed in terms of the modulus).
Define y_j: J→^1,2, j=1,2,3, by
𝐲_j=^t(τ(3-iτ')-λ_j^2-3c_1, 9-9c_1/τ-λ_jτ-3iτ', i(τ^2-3λ_j)).
Let V : J→𝔤𝔩(3,) be the matrix-valued map with column vectors y_1, y_2 and y_3.
Let D(z_1,z_2,z_3) denote the diagonal matrix with z_j as the jth element on the diagonal.
Recall that, if c_1≠ 0, then τ is nowhere zero.
We can prove the following.
Let γ: J→𝒮 be a general critical curve. The functions det( V) and τ^2-3λ_j,
j=1,2,3, are nowhere zero.
Let r_j be continuous determinations of √(τ^2-3λ_j) and let ϕ_j be the functions
defined by[If c_1≠ 0, the denominator of the integrand is nowhere zero and the ϕ_j are real-analytic.
If c_1=0, the integrand reduces to (3τ-λ_j^2)(3λ_j-τ^2)^-1. Thus, also in this case the functions
ϕ_j are real-analytic.]
ϕ_j(s)=∫_0^s 3c_1λ_j-(4c_1+λ_j^2)τ^2(u)+3τ^3(u)/τ^2(u)(3λ_j-τ^2(u))du.
Then, γ is congruent to
J∋ s ⟼[ M D(r_1e^iϕ_1, r_2e^iϕ_2, r_3e^iϕ_3) V^-1 𝐞_1] ∈𝒮,
where M= V(0) D(r_1(0),r_2(0),r_3(0))^-1.
The proof of Theorem <ref> is organized into three lemmas.
The following statements hold true:
* if the momentum has three distinct real eigenvalues, then ±√(3λ_2) and ±√(3λ_3)
cannot be roots of P_ c;
* if the momentum has two complex conjugate eigenvalues and a positive real eigenvalue λ_1, then ±√(3λ_1) cannot be roots of P_ c.
First, note that the image of the parametrized curve
α(t)=(-t(t^3/3+√(3)),t^3(t^3/3+2√(3))-9)
is contained in the zero locus of Δ_2. This can be proved by a direct computation. Secondly,
from the expression of Q_𝐜, it follows that
c_1 =-1/6(λ_2^2+λ_2λ_3+λ_3^2)=-1/6(λ_1^2+λ_1λ_2+λ_2^2),
c_2 =1/3(λ_2^2λ_3+λ_2λ_3^2-27)=1/3(λ_1^2λ_2+λ_1λ_2^2-27).
(1) Suppose that the momentum has three distinct real eigenvalues.
By contradiction, suppose that √(3λ_2) is a root of P_ c. Then
0=-8/3 P_ c(√(3λ_2)) =λ_3^4+2λ_2λ_3^3-(λ_2^2-12√(3)λ_2^1/2)λ_3^2-2(λ_2^3-6√(3)λ_2^3/2)λ_3+
+(λ_2^4-12√(3)λ_2^5/2+108λ_2).
Solving this equation with respect to λ_3, taking into account that λ_3>0, we obtain
λ_3=1/2(-λ_2+√(5λ_2^2-24√(3λ_2))).
Substituting into (<ref>), we find
c_1=√(3λ_2)-1/3λ_2^2, c_2=-9-2√(3)λ_2^3/2+1/3λ_2^3.
Then, c=α(-√(λ_2)). This implies that c belongs to the zero locus of Δ_2,
which is a contradiction.
By an analogous argument, we prove that also -√(3λ_2) cannot be a root of P_ c.
By interchanging the role of λ_2 and λ_3 and arguing as above, it follows that
also ±√(3λ_3) cannot be roots of P_ c.
(2) Next, suppose that the momentum has two complex conjugate eigenvalues and a nonnegative real
eigenvalue λ_1.
Recall that the eigenvalues are sorted so that the imaginary part of λ_2 is positive. By contradiction,
suppose that √(3λ_1) is a root of P_ c. Then,
0=-8/3 P_ c(√(3λ_1)) =λ_2^4+2λ_1λ_2^3-(λ_1^2-12√(3)λ_1^1/2)λ_2^2-2(λ_1^3-6√(3)λ_1^3/2)λ_2+
+(λ_1^4-12√(3)λ_1^5/2+108λ_1).
Solving this equation with respect to λ_2, taking into account that the imaginary part of λ_2 is positive,
we find
λ_2=1/2(-λ_1+√(5λ_1^2-24√(3λ_1))).
Substituting into (<ref>) yields c=α(-√(λ_1)). Thus, c is a root of Δ_2,
which is a contradiction. An analogous argument shows that -√(3λ_1) cannot be a root of P_ c.
This concludes the proof of the lemma.
det( V)(s)≠ 0, for every s∈ J_γ.
Let 𝕃_j be the 1-dimensional eigenspaces of the momentum
𝔐_γ relative to the eigenvalues λ_j.
Let L be as in (<ref>). By Corollary <ref> of Theorem <ref>,
we have ℱ L ℱ^-1=𝔐,
where ℱ is a
Wilczynski
frame field along γ.
Then,
L(s) and 𝔐 have the same eigenvalues. Next, consider the line bundles
Λ_j={(s, y)∈ J_γ×^1,2|L(s) y=λ_j y}, j=1,2,3.
Note that (s, y)∈Λ_j if and only if ℱ(s) y∈𝕃_j. Let y_j, j=1,2,3,
be as in (<ref>). A direct computation shows that L y_j=λ_j y_j. Thus,
y_j
is a cross section of the eigenbundle Λ_j. Hence, det( V)(s)≠ 0 if and only if
y_j(s)≠0⃗, for every s.
Case I: The eigenvalues of the momentum are real and distinct. Let y_j^i, i=1,2,3, denote
the components of y_j. Since λ_1 is negative, it follows from (<ref>) that y^3_1(s)≠ 0,
for every s, and hence 𝐲_1(s)≠0⃗. We prove that y_2(s)≠0⃗.
Suppose, by contradiction, that y_2(s_*)=0⃗, for some s_*∈ J_γ.
From y^1_2(s_*)= y^2_2(s_*)=0, it follows that τ'(s_*)=0.
Hence e:=τ(s_*) is a root of P_ c. From y^3_2(s_*)=0,
it follows that e=±√(3λ_2), which contradicts Lemma B<ref>. An analogous argument leads to
the conclusion that y_3(s)≠0⃗, for every s∈ J_γ.
Case II: The momentum has a real eigenvalue λ_1 and two complex conjugate eigenvalues
λ_2, λ_3 (λ_2 with positive imaginary part).
Since λ_2 and λ_3 have nonzero imaginary parts and τ is real valued,
y_2^3(s)≠ 0 and y_3^3(s)≠ 0, for every s. If λ_1<0, then y_1^3(s)≠ 0,
for every s.
If λ_1≥ 0, suppose, by contradiction, that y_1(s_*)=0⃗.
From y_1^1(s_*)= y^2_1(s_*)=0,
we infer that τ'(s_*)=0. Hence e=τ(s_*) is a root of P_ c.
From y^3_1(s_*)=0, we have e=±√(3λ_1), which contradicts Lemma B<ref>.
We are now in a position to conclude the proof. For j=1,2,3, let w_j be defined by
w_j=ℱ y_j: J_γ→^1,2.
Then, w_j(s)∈𝕃_j and w_j(s)≠0⃗, for every s.
Thus, there exist smooth functions Φ_j: J_γ→, such that
w'_j=Φ_j w_j. From (<ref>), we have
Φ_j y_j= y'_j+K y_j, j=1,2,3,
where
K= [ ic_1τ^-2 -i τ; 0 -2ic_1τ^-2 1; 1 0 ic_1τ^-2; ].
Then, the third component of y'_j+K y_j is equal to
3τ+3c_1λ_jτ^-2-(λ_j^2+4c_1)+iττ'.
Hence, using (<ref>) we obtain
Φ_j=-ττ'/3λ_j-τ^2
+i3c_1λ_j-(4c_1+λ_j^2)τ^2+3τ^3/τ^2(3λ_j-τ^2).
The functions 3λ_j-τ^2, j=1,2,3, are nowhere zero.
The statement is obvious if λ_j is real and negative or complex, with nonzero imaginary part. If
λ_j is real non-negative,
the smoothness of Φ_j implies that ττ'(3λ_j-τ^2)^-1 is differentiable.
Then (3λ_j-τ^2)(s)≠ 0, for every s, such that τ(s)τ'(s)≠ 0. If τ(s)τ'(s)= 0,
it follows that τ(s) is a root of the polynomial P_ c. Therefore, by Lemma B<ref>, we have
that (3λ_j-τ^2)(s)≠ 0.
From (<ref>) we have
∫_0^s Φ_j du =log(√(τ^2-3λ_j))+iϕ_j+b_j, j=1,2,3,
where b_j is a constant of integration,
√(τ^2-3λ_j)
is a continuous determination of the square root of
τ^2-3λ_j
and
log(√(τ^2-3λ_j))
is a continuous determination of the logarithm of
√(τ^2-3λ_j).
Since w'_j=Φ_j w_j, we obtain
ℱ y_j r_j^-1e^-iϕ_j = m_j, j=1,2,3,
where m_j is a constant vector belonging to the eigenspace 𝕃_j of 𝔐. This implies
ℱ= MD(r_1e^iϕ_1,r_2e^iϕ_2,r_3e^iϕ_3) V^-1,
where M is an invertible matrix such that M^-1 𝔐 M=D(λ_1,λ_2,λ_3).
By possibly replacing γ with a congruent curve, we may suppose that
ℱ(0)=
I_3. Then, since ϕ_j(0)=0, we have
M= V(0) D(r_1(0),r_2(0),r_3(0))^-1. This concludes the proof of Theorem B.
§.§ Integrability by quadratures of general critical curves of type ℬ_1'
We now specialize the above procedure to the case of general critical curves of type ℬ_1' (i.e., general critical
curves with modulus c∈ℬ_1 and with periodic twist). Let ℳ_+' be as in
(<ref>).
Since ℬ_1 is contained in ℳ_+', the lowest roots e_1 and e_2 of P_ c are negative,
for every c∈ℬ_1 (cf. Remark <ref>).
Let γ be a general critical curve of type ℬ_1'. The λ_1-eigenspace of the momentum is spacelike.
Let y_j be as in (<ref>). Then ℱ(s) y_j(s) belongs to the
λ_j-eigenspace of 𝔐, for every s∈.
Using the conservation law 3/2τ^2 (τ')^2= P_ c(τ) (cf. (<ref>))
and taking into account that λ_j^3+6c_1λ_j+3(9+c_2)=0, we compute
⟨ℱ y_j,ℱ y_j⟩
= ⟨ y_j, y_j⟩ =3(τ^2-3λ_j)(2c_1+λ_j^2).
Moreover, since λ_1=-(λ_2+λ_3) and c_1=-(λ_2^2+λ_2λ_3+λ_3^2)/6,
we have
2c_1+λ_1^2=1/3(2λ_2+λ_3)(λ_2+2λ_3)>0,
2c_1+λ_3^2=1/3(λ_3-λ_2)(λ_2+2λ_3)>0,
2c_1+λ_2^2=-1/3(λ_3-λ_2)(2λ_2+λ_3)<0.
From the fact that λ_1<0, it follows that ⟨ y_1, y_1⟩ > 0.
This proves that the λ_1-eigenspace of the momentum is spacelike.
There are two possible cases: either the λ_3-eigenspace of 𝔐 is spacelike, or else is timelike.
In the first case, we say that γ is positively polarized, while in the second case, we say that
γ is negatively polarized.
In view of the above lemma, γ is positively polarized if and only if
e_1^2-3λ_3>0
and is negatively polarized if and only if
e_2^2-3λ_3<0.
It is a linear algebra exercise to prove the existence of A∈G, such that
A^-1𝔐A=𝔐_λ_1,λ_2,λ_3, where
𝔐_λ_1,λ_2,λ_3=[ 1/2(λ_2+λ_3) 0 ε i/2(λ_2-λ_3); 0 λ_1 0; - ε i/2(λ_2-λ_3) 0 1/2(λ_2+λ_3); ],
where
ε=± 1
accounts for the polarization of γ (see below).
It is clear that any critical curve of type ℬ'_1 is congruent to a critical curve whose momentum
is in the canonical form 𝔐_λ_1,λ_2,λ_3.
A critical curve of type ℬ'_1 is said to be in a standard configuration if its momentum is in
the canonical form (<ref>). Two standard configurations with the same twist are congruent
with respect to the left action of the maximal compact abelian subgroup
𝕋^2={ A∈ G| A e_2∧ e_2=0}.
Let c∈ℬ_1, such that Δ_1( c)Δ_2( c)≠ 0.
Let e_1<e_2<e_3 be the real roots of P_ c and let
λ_1=-(λ_2+λ_3)<0<λ_2<λ_3 be the roots of Q_ c.
Let τ be the periodic function defined as in the first case of (<ref>) and ϕ_j, j=1,2,3,
be as in (<ref>).
Let ρ_j be the constants
ρ_1 =1/√((2λ_2+λ_3)(λ_2+2λ_3)),
ρ_2 =1/√(2(λ_3-λ_2)(2λ_2+λ_3)),
ρ_3 =1/√(2(λ_3-λ_2)(λ_2+2λ_3))
and z_j be the functions
z_1=ρ_1√(3(λ_2+λ_3)+τ^2) e^iϕ_1,
z_2=ρ_2√(3λ_2-τ^2) e^iϕ_2,
z_3=ρ_3√(3λ_3-τ^2) e^iϕ_3.
Let ε =- sign(e_2^2-3λ_3).
We can state the following.
A general critical curve of type ℬ'_1 with modulus c is congruent to
γ: ℝ∋ s ⟼[^t(z_2+z_3,ε iz_1,-ε i(z_2-z_3))]∈𝒮.
In addition, γ is in a standard configuration.
Let γ be a critical curve of type ℬ'_1 with modulus c. Let ℱ be
a
Wilczynski frame along γ. Suppose ε =1 (i.e., ⟨ y_3, y_3⟩<0).
Let u_j be the maps defined by
u_1 =1/√(3)√(2c_1+λ_1^2)√(τ^2-3λ_1) y_1,
u_2 =1/√(3)√(-(2c_1+λ_2^2))√(τ^2-3λ_2) y_2,
u_3 =1/√(3)√(2c_1+λ_3^2)√(τ^2-3λ_3) y_3.
Consider the map
U=( u_3, u_2, u_1):→ GL(3,).
From Theorem <ref>
and Lemma <ref>, we have
* ⟨ u_1, u_1 ⟩= ⟨ u_2, u_2 ⟩=-⟨ u_3, u_3 ⟩
=1, and ⟨ u_i, u_j⟩ = 0, for i≠ j, that is, U(s) is a pseudo-unitary
basis of ^1,2, for every s∈;
*
U^-1LU=D(λ_3,λ_2,λ_1).
Using again Theorem <ref>, we obtain
ℱU D(e^-iϕ_3, e^-iϕ_2 , e^-iϕ_1)=
MD(√(3(2c_1+λ_3^2)), √(-3(2c_1+λ_2^2)), √(3(2c_1+λ_1^2)))^-1,
where the matrix M∈ GL(3,) diagonalizes the momentum of γ, that is,
M^-1𝔐 M =
D(λ_3,λ_2,λ_1).
In particular, the column vectors of the right hand side of (<ref>), denoted by B,
constitute a pseudo-unitary basis. Let ϵ be the inverse of a cube root of det( B).
Then, the column vectors of ϵ B constitute a unimodular pseudo-unitary basis.
Therefore there exists a unique A∈G, such that
ϵA B= B, where
B=[ 1/√(2) -1/√(2) 0; 0 0 i; i/√(2) i/√(2) 0; ].
Then
Aℱ=ϵ^-1 B D(e^iϕ_3, e^iϕ_2, e^iϕ_1)U^-1=
ϵ^-1 BD(e^iϕ_3, e^iϕ_2, e^iϕ_1)D(-1,1,1) ^tU̅ h.
It is now a computational matter to check that the first column vector of the right hand side of (<ref>) is
ϵ^-1 ^t(z_2+z_3, iz_1,- i(z_2-z_3)).
This implies γ=A^-1γ (i.e., γ and γ are congruent to each other).
Taking into account that U^-1LU=D(λ_3,λ_2,λ_1) and using (<ref>), the momentum
of γ is
𝔐= BD(λ_3,λ_2,λ_1) B^-1.
Therefore, the momentum of γ is
𝔐=A BD(λ_3,λ_2,λ_1) B^-1A^-1=
BD(λ_3,λ_2,λ_1) B^-1=𝔐_λ_1,λ_2,λ_3.
This proves that γ is in standard configuration.
If ε =-1 (i.e., ⟨ y_3, y_3⟩>0), considering U=( u_2, u_3, u_1)
and arguing as above, we get the same conclusion.
Theorem <ref> implies that a standard configuration γ
does not pass through the pole
[𝐞_3]
of the Heisenberg projection π_H. Thus γ̌:=π_H∘γ is a transversal curve of ^3,
which does not
intersect the Oz-axis.
Breaking the integrands into partial fractions, the integrals
f_j(τ) =
√(3/2)∫_e_2^τ3c_1λ_j-(4c_1+λ_j^2)u^2+3u^3/u(3λ_j-u^2)√( P_ c(u)) du, j=1,2,3,
can be written as linear combinations of standard hyperelliptic integrals of the first and third kind.
Then ϕ_j is the odd quasi-periodic function with quasi-period 2ω such that
ϕ_j(s)=f_j[τ(s)].
In practice, we compute τ and ϕ_j, j=1,2,3, by numerically solving the following system of ODE,
τ” =τ^2-9c_1τ^-2(1-c_1τ^-1),
ϕ_j' =3c_1λ_j-(4c_1+λ_j^2)τ^2+3τ^3/τ^2(3λ_j-τ^2), j =1,2,3,
with initial conditions
τ(0)=e_2, τ'(0)=0, ϕ_j(0)=0, j=1,2,3.
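A minimal implementation of this step is sketched below in Python with SciPy (the computations in the paper are performed in Mathematica, so this is only an illustrative alternative); c1, the eigenvalues lam=(λ_1,λ_2,λ_3) of the momentum and the root e_2 are assumed to have been computed beforehand, and the tolerances are indicative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def twist_and_phases(c1, lam, e2, s_max, rtol=1e-10, atol=1e-12):
    """Integrate tau'' = tau^2 - 9 c1 tau^(-2) (1 - c1/tau) together with
    phi_j' = (3 c1 lam_j - (4 c1 + lam_j^2) tau^2 + 3 tau^3)
             / (tau^2 (3 lam_j - tau^2)),  j = 1, 2, 3,
    with tau(0) = e2, tau'(0) = 0, phi_j(0) = 0."""
    def rhs(s, y):
        tau, dtau = y[0], y[1]
        ddtau = tau**2 - 9.0 * c1 * tau**-2 * (1.0 - c1 / tau)
        dphi = [(3.0 * c1 * l - (4.0 * c1 + l**2) * tau**2 + 3.0 * tau**3)
                / (tau**2 * (3.0 * l - tau**2)) for l in lam]
        return [dtau, ddtau] + dphi
    y0 = [e2, 0.0, 0.0, 0.0, 0.0]
    return solve_ivp(rhs, (0.0, s_max), y0, rtol=rtol, atol=atol,
                     dense_output=True)
```

With the solution at hand, the complete integrals of the next subsection are recovered as 𝔓_j≈ϕ_j(2ω)/2π.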
§.§ Closing conditions
From Theorem <ref>, it follows that a critical curve of type ℬ'_1
is closed if and only if
𝔓_j=1/2πϕ_j(2ω)∈, j=1,2,3.
On the other hand,
1/2πϕ_j(2ω)=1/π∫_e_2^e_1√(3)(3c_1λ_j-(4c_1+λ_j^2)τ^2+3τ^3)/√(2)τ(3λ_j-τ^2)√( P_ c(τ))dτ.
Thus, γ is closed if and only if the complete hyperelliptic integrals on the right hand side of (<ref>) are rational.
For a closed critical curve γ, we put 𝔓_j= q_j=m_j/n_j, where n_j>0 and (m_j,n_j)=1.
We call q_j, the quantum numbers of γ.
By construction, e^i2π𝔓_1, e^i2π𝔓_2 and e^i2π𝔓_3 are the eigenvalues of the monodromy 𝙼_γ=ℱ(2ω)ℱ(0)^-1 of γ.
Since det(𝙼_γ)=1, we have
∑_j=1^3𝔓_j≡ 0, mod ℤ.
Then, γ is closed if and only if two among the integrals 𝔓_j, j=1,2,3, are rational.
The closing conditions can be rephrased as follows. Consider the even quasi-periodic functions ϕ_1, ϕ_3.
Then, the critical curve is closed if and only if the jumps ϕ_j|_0^2ω, j=1,3, are rational.
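Since the jumps are only known in floating-point arithmetic, rationality can only be tested up to a tolerance. One simple heuristic, sketched below in Python (the bound on the denominator and the tolerance are arbitrary choices), is to look for a nearby fraction with small denominator.

```python
from fractions import Fraction

def nearest_quantum_number(P_j, max_den=200, tol=1e-6):
    """Return a rational approximation m/n of the numerically computed
    value P_j = phi_j(2*omega)/(2*pi), or None if no fraction with
    denominator <= max_den lies within tol of it."""
    q = Fraction(P_j).limit_denominator(max_den)
    return q if abs(float(q) - P_j) < tol else None
```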
We now consider an example, which will be taken up again in the last section. Choose
c≈ (-0.8284243304411575,-8.349417691746162)∈ℬ_1^-.
The real roots of the quintic polynomial are
e_1≈ -0.931924<e_2≈ -0.678034<0<e_3≈ 2.79051
and the eigenvalues of the momentum are
λ_1≈ -2.40462<0<λ_2≈ 0.40614<λ_3≈ 1.99848.
The half-period of the twist is computed by numerically evaluating the hyperelliptic integral (<ref>).
We evaluate
τ, ϕ_1, ϕ_2, ϕ_3 by solving numerically
the system (<ref>), with initial conditions (<ref>) on the interval [-4ω,4ω].
Figure <ref> reproduces the graphs of ϕ_1 and ϕ_3 on the interval
[-4ω,4ω] (the graph of the twist was depicted in Figure <ref>). The red point on the Ox-axis is
2ω and the length of the arrows are the jumps ϕ_1|_0^2ω, ϕ_3|_0^2ω.
In this example,
|-2/15-1/2πϕ_1|_0^2ω|=1.6151· 10^-8, |-10/21-1/2πϕ_3|_0^2ω|=4.46887· 10^-8.
So, modulo negligible numerical errors, the corresponding critical curves are closed, with quantum number
q_1=-2/15 and q_3=-10/21.
In the last section we will explain how we computed the modulus. A standard configuration of a curve with modulus
c is represented in Figure <ref>.
§.§ Discrete global invariants of a closed critical curve
Consider a closed general critical curve γ of type ℬ_1',
with modulus c
and quantum numbers q_1=m_1/n_1, q_2=m_2/n_2, q_3=m_3/n_3, q_1+q_2+q_3≡ 0.
The half-period ω of the twist is given by the complete hyperelliptic integral (<ref>).
Let
𝙼_γ=ℱ(2ω)ℱ(0)^-1 be the monodromy of γ.
The monodromy does not depend on the choice of the canonical lift.
It is a diagonalizable element of G with eigenvalues e^2π i q_1, e^2π i q_2, and e^2π i q_3.
Thus, 𝙼_γ has finite order n=lcm(n_1,n_3).
The momentum 𝔐_γ has three distinct real eigenvalues,
so its stabilizer is a maximal compact abelian subgroup 𝕋^2_γ≅ S^1× S^1 of G
(if γ is a standard configuration, 𝕋^2_γ=𝕋^2).
Since [𝙼_γ,𝔐_γ]=0, 𝙼_γ∈𝕋^2_γ.
Let s_1, s_3 be the integers defined by n=s_1n_1=s_3n_3. The CR spin of γ is 1/3 if and only if
n≡ 0 mod 3 and m_1s_1≡ m_3s_3≢0 mod 3.
The wave number 𝐧_γ of γ is n if the spin is 1 and n/3 if the spin is 1/3.
Let |[γ]| denote the trajectory of γ.
The stabilizer Ĝ_γ={[A]∈ [G]| [A]· |[γ]| = |[γ]|} is spanned by
[𝔐_γ] and is a cyclic group of order n_γ.
Geometrically, Ĝ_γ is the symmetry group of the critical curve γ.
The CR turning number w_γ is the degree of the map
ℝ/2nωℤ ∋ s↦ F_1-i F_3 ∈ℂ^* := ℂ∖{0},
where the F_j's
are the components of a Wilczynski
frame
along γ.
Without loss of generality, we may suppose that γ is in a standard configuration. From
(<ref>), it follows that w_γ is the degree of
ℝ/2nωℤ∋ s↦ z_3∈ℂ^*, if ε_γ = 1,
and is the degree of
ℝ/2nωℤ∋ s↦ z_2∈ℂ^*, if ε_γ = -1.
Therefore,
w_γ=
s_3m_3, if ε_γ=1,
s_2m_2, if ε_γ=-1.
A closed critical curve γ has an additional discrete CR invariant, denoted by tr_*(γ),
the trace of γ with respect to
the spacelike λ_1-eigenspace of the momentum.
To clarify the geometrical meaning of the trace, it is convenient to consider a standard configuration.
In this case, 𝕃_1 is spanned by 𝐞_2∈^1,2
and the corresponding chain is the intersection of with the projective line z_2=0. The Heisenberg projection
of this chain is the upward oriented Oz-axis. Thus, tr_*(γ) is the linking number Lk(γ̌,Oz^↑) of the Heisenberg projection of γ with the upward oriented Oz-axis.
Let γ be as above. Then
tr_*(γ)= (q_1-q_3) n_γ, if ε_γ=1,
(q_1-q_2) n_γ, if ε_γ=-1.
Without loss of generality, we may assume that γ is in standard configuration.
The Heisenberg projection of γ is
γ̌=^t(Re(ε iz_1/(z_2+z_3)),
Im(ε iz_1/(z_2+z_3)), Re(-ε i(z_2-z_3)/(z_2+z_3)) ).
Since γ̌ does not intersect the Oz-axis, the linking number
Lk(γ̌,Oz^↑) is the degree of
ℝ/2 n_γωℤ∋ s ⟼ z_1/(z_2+z_3)∈ℂ^*.
From (<ref>) it follows that this degree is the degree of
f: ℝ/2 n_γωℤ∋ s⟼ρ_1√(3(λ_2 + λ_3)+τ^2(s)) e^iϕ_1/(ρ_2√(3λ_2-τ^2(s)) e^iϕ_2+ ρ_3√(3λ_3-τ^2(s)) e^i ϕ_3).
Suppose that γ is
negatively
polarized. Then, τ^2-3λ_3<τ^2-3λ_2<0 and 0<τ^2<3λ_2. Therefore,
0<ρ_2√(3λ_2-τ^2)/ρ_3√(3λ_3-τ^2)=
√((3λ_2-τ^2)(λ_2+2λ_3)/(3λ_3-τ^2)(2λ_2+λ_3))≤√(λ_2(λ_2+2λ_3)/λ_3(2λ_2+λ_3))<1.
Thus
f= ρ_1√(3(λ_2 + λ_3)+τ^2)/(ρ_3√(3λ_3-τ^2)) · e^i(ϕ_1-ϕ_3)/(1+h e^i(ϕ_2-ϕ_3)),
where
h = ρ_2√(3λ_2-τ^2)/(ρ_3√(3λ_3-τ^2)).
Since 0<h<1, the image of 1+he^i(ϕ_2-ϕ_3)
is a curve contained in a disk of radius <1 centered at (1,0). Hence 1+he^i(ϕ_2-ϕ_3) is null-homotopic in ℂ^*. This implies
deg(f)=1/2π(ϕ_1-ϕ_3)|_0^2 n_γω = n_γ(q_1-q_3).
Suppose that γ is
positively
polarized. Then, τ^2-3λ_2>τ^2-3λ_3>0. In particular, τ^2>3λ_3>0 and
0<ρ_3√(τ^2-3λ_3)/ρ_2√(τ^2-3λ_2)
=√((τ^2-3λ_3)(2λ_2+λ_3)/(τ^2-3λ_2)(λ_2+2λ_3))
< √(2λ_2+λ_3/λ_2+2λ_3)<1.
Then
f= -i ρ_1√(3(λ_2 + λ_3)+τ^2)/(ρ_2√(τ^2-3λ_2)) · e^i(ϕ_1-ϕ_2)/(1+h̃ e^i(ϕ_3-ϕ_2)),
where
h̃ =ρ_3√(τ^2-3λ_3)/(ρ_2√(τ^2-3λ_2)).
Since 0<h̃<1, the image of 1+h̃e^i(ϕ_3-ϕ_2)
is a curve contained in a disk of radius <1 centered at (1,0).
Hence 1+h̃e^i(ϕ_3-ϕ_2)
is null-homotopic in ℂ^*.
This implies
deg(f)=1/2π (ϕ_1-ϕ_2)|_0^2 n_γω = n_γ(q_1-q_2).
Summarizing: the quantum numbers of a closed critical curve are determined by the wave number,
the CR spin, the CR turning number, and the trace.
§ EXPERIMENTAL EVIDENCE OF THE EXISTENCE OF COUNTABLY MANY CLOSED
CRITICAL CURVES OF TYPE ℬ'_1 AND EXAMPLES
This section is of an experimental nature. We use numerical tools, implemented in the software Mathematica 13.3,
to support the claim that there exist
countably many closed critical curves of type ℬ_1', with moduli belonging to the connected component
ℬ_1^- of ℬ_1 (cf. Remark <ref>).
The same reasoning applies, as well, if the modulus belongs to the other connected component
ℬ_1^+ of ℬ_1.
We parametrize ℬ_1^- by the map ψ_-:K_-→ℬ_1^-, defined in (<ref>),
where K_- is the rectangle Ĵ_ξ^-× (0,1), Ĵ_ξ^-=(π/2, 2.3008).
We take p=(p_1,p_2)∈ K_- as the fundamental parameters.
The modulus 𝐜=(c_1,c_2), the roots e_1<e_2<0<e_3 of the quintic polynomial, and the eigenvalues
λ_1=-(λ_2+λ_3)<0<λ_2<λ_3 of the momentum are explicit functions of the
parameters (p_1,p_2). Let K_-^* be the open set of the general parameters, that is,
K_-^* = { p∈ K_- |Δ_1( ψ_-( p))Δ_2( ψ_-( p))≠ 0}.
The complete hyperelliptic integrals 𝔓_j can be evaluated numerically as functions
of p∈ K_-^*.
Consider the real analytic map
𝔓=(𝔓_1, 𝔓_3): K_-^*→^2.[Actually, 𝔓 is real-analytic on all K_-. Instead, 𝔓_2 has a jump discontinuity at the exceptional locus.]
Choose p_*=(2,1/2)∈ K_-^* and plot the graphs of the functions
f_11(p_1)=𝔓_1(p_1,1/2), f_12(p_2)=𝔓_1(2,p_2),
f_31(p_1)=𝔓_3(p_1,1/2),
and f_32(p_2)=
𝔓_3(2,p_2) (see Figures <ref> and <ref>).
The function f_11 is strictly increasing, while the other three functions are strictly decreasing.
This implies that 𝔓 has maximal rank at p_*.
Thus 𝒫_-=𝔓(K_-) is a set with nonempty interior.
In particular 𝒫_-^r:=𝒫_-∩ℚ^2 is an infinite countable set
and, for every q=(q_1,q_3)∈𝒫_-^r, there exists a closed critical curve of
type ℬ'_1^- with quantum numbers q_1 and q_3.
Figure <ref> reproduces the plot of the map 𝔓, an open convex set.
The mesh supports a stronger conclusion: the map 𝔓 is 1-1. Therefore,
one can assume
that, for every rational point (q_1,q_3)∈𝒫_-, there exists a
unique congruence class of closed critical curves with quantum numbers q_1 and q_3.
The construction of a standard configuration of a critical curve associated to a rational point
q∈𝒫_- can be done in three steps.
Step 1. Choose a rational point q=(q_1,q_3)=(m_1/n_1,m_3/n_3)∈𝒫_-.
To find the parameter p∈ K_-, such that
𝔓( p)= q, we may proceed as follows: plot the level curves 𝚇_q_1=𝔓_1^-1(q_1)
and 𝚈_q_3=𝔓_3^-1(q_3) and choose a small rectangle R⊂ K_-
containing 𝚇_q_1∩𝚈_q_3 (see Figure <ref>). Then we minimize numerically the function
δ_ q: R∋ p⟼√((𝔓_1( p)-q_1)^2+(𝔓_3( p)-q_3)^2).
We use the stochastic minimization method named “differential evolution" <cit.> implemented in Mathematica.
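The same minimization can be reproduced, for instance, with SciPy's implementation of differential evolution. In the sketch below the callables P1_of_p and P3_of_p — hypothetical names for user-supplied routines evaluating 𝔓_1 and 𝔓_3 at p=(p_1,p_2) through the chain ψ_-, the roots of P_ c, the eigenvalues of Q_ c and the quadratures described above — are assumed to be available, and the rectangle R is passed as bounds.

```python
import numpy as np
from scipy.optimize import differential_evolution

def find_parameter(q1, q3, bounds, P1_of_p, P3_of_p):
    """Minimise delta_q(p) = sqrt((P1(p)-q1)^2 + (P3(p)-q3)^2) over the
    rectangle bounds = [(p1_min, p1_max), (p2_min, p2_max)]."""
    def delta(p):
        return np.hypot(P1_of_p(p) - q1, P3_of_p(p) - q3)
    result = differential_evolution(delta, bounds, tol=1e-12, seed=0,
                                    polish=True)
    return result.x, result.fun
```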
Let us revisit Example <ref>.
Choose q=(-2/15,-10/21)∈𝒫_-. The plot of the level curves 𝚇_q_1 and 𝚈_q_3
is depicted in Figure <ref>. Minimizing δ_ q on the rectangle R=[1.83,1.86]× [0.65,0.75]
(depicted on the right picture in Figure <ref>) we obtain p = (1.84438,0.719473)
and δ_ q( p)=3.26867· 10^-9.
So, up to negligible numerical errors, we may assume
p=𝔓^-1( q). Computing ψ_-( p), we find the modulus c=(c_1,c_2)
of the curve, where c_1=-0.828424 and c_2= -8.349418. With the modulus at hand, we compute
the lowest real roots of the quintic polynomial, e_1≈ -0.931924<e_2≈ -0.678034, and the
eigenvalues of the momentum, namely
λ_1≈ -2.40462<0<λ_2≈ 0.40614<λ_3≈ 1.99848.
Step 2.
We evaluate numerically the integral (<ref>) and we get the half-period ω of the twist of the critical curve.
In our example ω≈ 0.732307. The next step is to evaluate the twist τ.
This can be done by solving numerically the Cauchy problem (<ref>) on the interval [0,2nω],
n=lcm(n_1,n_3).
The bending is given by κ=c_1/τ^2. Next, we solve the Frenet type linear system (<ref>),
with initial condition ℱ(0)=I_3. Then, γ: [0,2nω]∋ s⟼ [F_1(s)]∈ is a critical
curve with quantum numbers q_1 and q_3 and ℱ is a
Wilczynski frame
field along
γ. However, γ is not in a standard configuration.
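For completeness, the frame integration of Step 2 can be carried out with any standard one-step scheme. The sketch below (Python/NumPy) is written under the assumption — consistent with the relation Φ_j y_j= y'_j+K y_j used in the proof of Theorem <ref> — that along a critical curve a Wilczynski frame satisfies ℱ'=ℱ K, where K is the matrix displayed there and the bending is κ=c_1/τ^2; tau_of_s is a callable returning the twist, e.g. the dense output of the numerical integration of (<ref>).

```python
import numpy as np

def wilczynski_frame(tau_of_s, c1, s_grid):
    """Integrate F' = F K(s), F(0) = I_3, with a classical RK4 sweep over
    the (sufficiently fine) grid s_grid; returns the frames F(s_k)."""
    def K(s):
        t = tau_of_s(s)
        a = 1j * c1 / t**2
        return np.array([[a, -1j, t],
                         [0.0, -2.0 * a, 1.0],
                         [1.0, 0.0, a]], dtype=complex)
    F = np.eye(3, dtype=complex)
    frames = [F.copy()]
    for s0, s1 in zip(s_grid[:-1], s_grid[1:]):
        h = s1 - s0
        k1 = F @ K(s0)
        k2 = (F + 0.5 * h * k1) @ K(s0 + 0.5 * h)
        k3 = (F + 0.5 * h * k2) @ K(s0 + 0.5 * h)
        k4 = (F + h * k3) @ K(s1)
        F = F + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        frames.append(F.copy())
    return np.array(frames)
```

The curve is then read off from the first column F_1(s), and the change of frame of Step 3 below can be applied to the resulting matrices.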
Step 3.
The last step consists in building the standard configuration. The momentum 𝔐
of γ is L(0), where L is as in (<ref>). Taking into account that τ(0)=e_2,
τ'(0)=0, and that κ(0)=c_1/e_2^2,
we get
𝔐=[ 0 3(1 - c_1/e_2) 2 i e_2; e_2 0 3i(1 - c_1/e_2); 3i -i e_2 0; ].
The eigenspace of the highest eigenvalue is timelike (i.e., these critical curves are
negatively
polarized).
We compute the eigenvectors and we build a unimodular pseudo-unitary basis A=(A_1,A_2, A_3),
such that A_1 is an eigenvector of λ_3, A_2 is an eigenvector of λ_2, and A_3
is an eigenvector of λ_1.
Let B be as in (<ref>). Consider M= BA^-1∈G.
Then, γ= Mγ is a standard configuration of a critical curve
with quantum numbers q_1 and q_3.
The curve γ does not pass through the pole of the Heisenberg projection π_H. So,
γ=π_H∘γ is a closed transversal curve of ^3 which does not intersect the Oz-axis
and tr_*(γ)=
Lk(γ,Oz).
Applying Step 2 and Step 3 to Example <ref> and computing the Heisenberg projection,
we obtain the transversal curve depicted in Figure <ref>, a non-trivial transversal knot.
The quantum numbers are q_1=-2/15 and q_3=-10/21. Recalling what has been said about the discrete
invariants of a critical curve (cf. Section <ref>), the spin is 1/3, the wave number is n=35, the
CR turning number is -50, and the trace is 12.
Figures <ref> and <ref> reproduce the Heisenberg projections of the
standard configurations of critical curves
of type ℬ'_1^- with quantum numbers (-3/10, -9/25), (-1/5,-3/7), (5/49, -4/7), and (-7/36,-23/54),
respectively. All of them have spin 1. The first is a trivial torus knot with wave number n=50, tr_*=3, and
CR turning number w=-18; the second example is a nontrivial transversal knot with n=35, tr_*=8, and
w=-15. The third example is a “tangled" transversal curve with n=49, tr_*=33, and w=-28.
The last example is a nontrivial transversal torus knot with wave number 108, tr_*=25, and w=-46.
It is clear that, being numerical approximations, the parametrizations obtained with this procedure are
only approximately periodic.
amsalpha
AA
Benn1983
D. Bennequin,
Entrelacements et équations de Pfaff,
in Third Schnepfenried geometry conference, Vol. 1 (Schnepfenried, 1982), 87–161,
Astérisque, 107-108, Soc. Math. France, Paris, 1983.
Bryant1987
R. L. Bryant,
On notions of equivalence of
variational problems with one independent variable,
Contemp. Math. 68(1987), 65–76.
CI
A Calini and T. Ivey,
Integrable geometric flows for curves in pseudoconformal S^3,
J. Geom. Phys. 166 (2021), Paper No. 104249, 17 pp.
Cartan1932
E. Cartan,
Sur la géométrie pseudo-conforme des hypersurfaces de deux variables complexes, I,
Ann. Math. Pura Appl. (4) 11 (1932), 17–90 (or Oeuvres II, 2, 1931-1304).
Cartan1932-2
E. Cartan,
Sur la géométrie pseudo-conforme des hypersurfaces de deux variables complexes, II,
Ann. Scuola Norm. Sup. Pisa (2) 1 (1932), 333–354
(or Oeuvres III, 2, 1217-1238).
ChMo1974
S. S. Chern and J. K. Moser,
Real hypersurfaces in complex manifolds,
Acta Math. 133 (1974), 219–271.
COST
E. Calabi, P. J. Olver, C. Shakiban, A. Tannenbaum, and S. Haker,
Differential and numerically invariant signature curves applied to object recognition,
Int. J. Comput. Vis., 26 (1998), 107–135.
DMN
A. Dzhalilov, E. Musso, and L. Nicolodi,
Conformal geometry of timelike curves in the (1+2)-Einstein universe,
Nonlinear Anal. 143 (2016), 224–255.
Eliash1993
Y. Eliashberg,
Legendrian and transversal knots in tight contact 3-manifolds,
in Topological Methods in Modern Mathematics (Stony Brook, NY, 1991), 171–193,
Publish or Perish, Houston, TX, 1993.
EMN-JMAA
O. Eshkobilov, E. Musso, and L. Nicolodi,
The geometry of conformal timelike geodesics in the Einstein universe
J. Math. Anal. Appl. 495 (2021), no. 2, Paper No. 124730, 32 pp.
Etn1999
J. B. Etnyre,
Transversal torus knots,
Geom. Topol. 3 (1999), 253–268.
EtHo
J. B. Etnyre and K. Honda,
Knots and contact geometry I: torus knots and the eight knot,
J. Symplectic Geom. 1 (2001), 63–120.
Et2
J. B. Etnyre,
Legendrian and transveral knots,
in Hanbook of Knot Theory, 105–185, W. Menasco & M. Thistlethwaite (Eds.), Elsevier B. V., Amsterdam, 2005.
ArXiv version: .
Et3
J. B. Etnyre, Introductory Lectures on Contact Geometry, .
FelsOlver1
M. Fels and P. J. Olver
Moving coframes. I. A practical algorithm,
Acta Appl. Math. 51 (1998), 161–213.
FelsOlver2
M. Fels and P. J. Olver
Moving coframes. II. Regularization and theoretical foundations,
Acta Appl. Math. 55 (1999), 127–208.
FuTa1997
D. Fuchs and S. Tabachnikov,
Invariants of Legendrian and transverse knots in the standard contact space,
Topology 36 (1997), no. 5, 1025–1053.
GM
J. D. Grant and E. Musso,
Coisotropic variational problems,
J. Geom. Phys. 50 (2004), 303–338.
Gr
P. A. Griffiths,
Exterior differential systems and the calculus of variations,
Progress in Mathematics, 25, Birkhäuser, Boston, 1982.
Ho
W. C. Hoffman,
The visual cortex is a contact bundle. Mathematical biology,
Appl. Math. Comput. 32 (1989), no. 2-3, 137–167.
Hsu
L. Hsu,
Calculus of variations via the Griffiths formalism,
J. Differential Geom. 36 (1992), 551–589.
Jacobo1985
H. Jacobowitz,
Chains in CR geometry,
J. Differential Geom. 21 (1985), no. 2, 163–194.
K
F. Klein,
Vorlesungen über das Ikosaeder und die Auflösung der Gleichungen vom fünften Grade,
Teubner, Leipzig, 1884.
KRV
I. A. Kogan, M. Ruddy, and C. Vinzant,
Differential signatures of algebraic curves,
SIAM J. Appl. Algebra Geom. 4 (2020), no. 1, 185–226.
M
E. Musso,
Liouville integrability of a variational problem for Legendrian curves in the three-dimensional sphere,
Quaderni di Matematica, Ser. Ed. by Dip. Matem. II Università di Napoli (Caserta), 9 (2002).
MN-CQG
E. Musso and L. Nicolodi,
Closed trajectories of a particle model on null curves in anti-de Sitter 3-space,
Classical Quantum Gravity 24 (2007), no. 22, 5401–5411.
MN-SIAM
E. Musso and L. Nicolodi,
Reduction for constrained variational problems on 3-dimensional null curves,
SIAM J. Control Optim. 47 (2008), no. 3, 1399–1414.
MNJMIV
E. Musso and L. Nicolodi,
Invariant signatures of closed planar curves,
J. Math. Imaging Vision 35 (2009), 68–85.
MN-CAG
E. Musso and L. Nicolodi,
Quantization of the conformal arclength functional on space curves,
Comm. Anal. Geom. 25 (2017), no. 1, 209–242.
MNS-Kharkiv
E. Musso, L. Nicolodi, and F. Salis,
On the Cauchy-Riemann geometry of transversal curves in the 3-sphere,
Zh. Mat. Fiz. Anal. Geom. 16 (2020), no. 3, 312–363.
MS
E. Musso and F. Salis,
The Cauchy–Riemann strain functional for Legendrian curves in the 3-sphere,
Ann. Mat. Pura Appl. (4) 199 (2020), 2395–2434.
Na
O. Nash,
On Klein's icosahedral solution of the quintic,
Expo. Math. 32 (2014), no. 2, 99–120.
Olver-book1
P. J. Olver,
Applications of Lie Groups to Differential Equations,
Second Edition, Graduate Texts in Mathematics, vol. 107, Springer-Verlag, New York, 1993.
Olver-book2
P. J. Olver,
Equivalence, Invariants, and Symmetry,
Cambridge University Press, Cambridge, UK, 1995.
Pe
J. Petitot,
Elements of Neurogeometry, Lecture Notes in Morphogenesis,
Springer International Publishing, 2017.
SP
R. Storn and K. Price,
Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,
J. Global Optim. 11 (1997), no. 4, 341–359.
Tr
M. Trott and V. Adamchik,
Solving the quintic with Mathematica.
Available on the Wolfram Library Archive: ] |
http://arxiv.org/abs/2307.05227v1 | 20230711124812 | Geometrically Parametrised Reduced Order Models for the Study of Hysteresis of the Coanda Effect in Finite-elements-based Incompressible Fluid Dynamics | [
"J. R. Bravo",
"G. Stabile",
"M. Hess",
"J. A. Hernandez",
"R. Rossi",
"G. Rozza"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
upc,cimne]J.R. Bravocor1
[email protected]
urbino]G. Stabile
sissa]M. Hess
upc-terrasa,cimne]J.A. Hernandez
upc,cimne]R. Rossi
sissa]G. Rozza
[cor1]Corresponding author
[upc]Universitat Politècnica de Catalunya, Department of Civil and Environmental Engineering (DECA), Barcelona, Spain
[sissa]International School for Advanced Studies, SISSA Mathematics Area (mathLab), Trieste, Italy
[urbino]University of Urbino, Department of Pure and Applied Sciences (DISPEA), Urbino, Italy
[cimne]Centre Internacional de Mètodes Numèrics en Enginyeria (CIMNE), Barcelona, Spain
[upc-terrasa]Universitat Politècnica de Catalunya, E.S. d'Enginyeries Industrial, Aeroespacial i Audiovisual de Terrassa (ESEIAAT), Terrassa, Spain
This article presents a general reduced order model (ROM) framework for addressing fluid dynamics problems involving time-dependent geometric parametrisations. The framework integrates Proper Orthogonal Decomposition (POD) and Empirical Cubature Method (ECM) hyper-reduction techniques to effectively approximate incompressible computational fluid dynamics simulations. To demonstrate the applicability of this framework, we investigate the behavior of a planar contraction-expansion channel geometry exhibiting bifurcating solutions known as the Coanda effect. By introducing time-dependent deformations to the channel geometry, we observe hysteresis phenomena in the solution.
The paper provides a detailed formulation of the framework, including the stabilised finite elements full order model (FOM) and ROM, with a particular focus on the considerations related to geometric parametrisation. Subsequently, we present the results obtained from the simulations, analysing the solution behavior in a phase-space for the fluid velocity at a probe point, considered as the Quantity of Interest (QoI). Through qualitative and quantitative evaluations of the ROMs and hyper-reduced order models (HROMs), we demonstrate their ability to accurately reproduce the complete solution field and the QoI.
While HROMs offer significant computational speedup, enabling efficient simulations, they do exhibit some errors, particularly for testing trajectories. However, their value lies in applications where the detection of the Coanda effect holds paramount importance, even if the selected bifurcation branch is incorrect. Alternatively, for more precise results, HROMs with lower speedups can be employed.
Coanda Effect, Hysteresis, Geometric Parametrisation, ECM Hyper-reduction, Proper Orthogonal Decomposition
§ INTRODUCTION
This study focuses on projection-based reduced order models for fluid dynamics problems with time-dependent geometric parametrisations. We solve the Navier-Stokes equations on a parametrised domain denoted as Ω(μ)⊂ℝ^d , with a time span [0,T]. Later on, in section <ref>, the specific form of the equations is shown. The geometric parametrisation is defined via a mapping φ(μ) : Ω_0 ×𝒫→Ω, such that, given a parameter μ∈𝒫⊂ℝ^p, each point in the original configuration is mapped onto a corresponding point in the deformed one as in Fig. <ref>.
The particular case we are focusing on in this study involves a planar contraction-expansion channel. This simple geometry has been widely used in both experimental and numerical investigations <cit.>. This case was chosen due to its tendency to exhibit complex dynamics, even at relatively low Reynolds numbers. A noteworthy characteristic of the model is that it leads to the emergence of the so-called Coanda effect <cit.>, which is characterised by an asymmetric jet adhering to an adjacent wall.
The inherent nonlinearity of the Navier-Stokes equations can give rise to a bifurcation phenomenon in the solution. Within a certain range of values for the driving parameter, typically the Reynolds number in most computational fluid dynamics (CFD) problems, a unique solution exists. However, once a critical point is surpassed, it is possible for multiple solutions to coexist for the same Reynolds number value <cit.>.
In this investigation, the variation in geometry acts as the sole driving factor for the Reynolds number. The viscosity and density of the fluid remain constant throughout the study, and once the fluid is initialised, the boundary conditions remain unchanged. In this setting, as the width of the channel narrows, the Reynolds number increases.
The fluid development in a contraction-expansion channel can be described as follows:
* For sufficiently small values of the Reynolds number, there exists a single perfectly symmetric solution, exhibiting symmetry in both the horizontal and vertical directions.
* As the Reynolds number increases, the horizontal symmetry of the solution is lost, but the solution remains symmetric about the vertical axis. This results in the formation of a symmetric jet.
* At a critical Reynolds number value, denoted as Re_ SB (see Fig. <ref>), the symmetry of the jet is disrupted due to the Coanda Effect. In this scenario, the jet attaches itself to either the upper or lower wall. While the symmetric jet solution is still mathematically possible, it becomes unstable. Numerical simulations or experiments typically yield one of the non-symmetric solutions unless advanced techniques are employed to extract the unstable solution, for instance, when constructing the complete bifurcation diagram <cit.>.
* With further increases in the Reynolds number, the presence of eddies becomes prominent. At a specific critical Reynolds number, denoted as Re_ H, a Hopf bifurcation occurs, resulting in the emergence of an oscillating solution. However, the discussion of the Hopf bifurcation lies beyond the scope of this work. Interested readers are referred e.g. to <cit.> for further exploration of Hopf bifurcations.
The contraction-expansion channel has previously been utilised in research on mitral valve regurgitation disease, as demonstrated in studies such as <cit.>. This disease is characterised by the improper closure of the mitral valve, as depicted in Fig <ref>. As a result of this faulty closure, blood is able to flow through a narrow opening, giving rise to the wall-hugging behavior mentioned earlier.
Previous numerical studies on the contraction-expansion channel have utilised an affine mapping φ(μ), in conjunction with the spectral element method (SEM)<cit.>. The affine nature of the geometric mapping in that study facilitated a complete online-offline decoupling of the reduced-order models, eliminating the need for additional interpolation and achieving significant speedup factors for the reduced-order models.
In a subsequent study <cit.>, a nonlinear geometric mapping was introduced to account for walls with varying curvature. To effectively evaluate the reduced-order models in this scenario, the discrete empirical interpolation (DEIM) method was employed <cit.>.
More recently, the fluid-structure interaction problem has been investigated by incorporating (hyper)elastic walls <cit.>. In that case, the deformation of the walls was computed from their interaction with the flow, rather than being imposed. For the present work, we will adhere to the approach of explicitly imposing the walls deformation.
Furthermore, in <cit.> a non-intrusive neural network framework, known as POD-NN, was employed for efficiently approximating the bifurcating phenomena in a contraction-expansion channel, as well as in other geometries.
In the aforementioned studies, the main objective was to construct the bifurcation diagram. To achieve this goal, steady-state solutions were computed for each parameter variation. This means that even when the geometry was parametrised, the geometric parametrisation remained unchanged over time. The focus was on capturing the steady-state behavior of the system and analyzing the bifurcation phenomena, rather than considering time-dependent variations in the geometric parametrisation.
Time-dependent geometry variations for studying wall-hugging phenomena in flow jets have been investigated, for example, in <cit.>. In that work, both numerical and experimental setups were applied to study the Coanda effect on jets interacting with inclined planes, the varied parameter being the angle of the plate at the exit of the jet. It was observed that, as the inclination angle of the planes was varied, the jet eventually attached, detached, or re-attached to either one of the walls, depending on whether the inclination angle was being increased or decreased, thus resulting in a hysteresis loop. Such hysteresis phenomena in fluid problems exhibiting bifurcations had already been reported in classical works <cit.>, and more recently in <cit.>.
Our main contributions in this study expand on the existing literature by considering a contraction-expansion channel geometry that undergoes time-dependent deformations. By incorporating this dynamically morphed geometry, we are able to observe hysteresis phenomena in the solution, which has been previously documented in works such as <cit.>.
Moreover, the framework presented in this work offers broad applicability to a range of geometrically parametrised ROMs. This versatility allows for the framework to be employed in various scenarios. Unlike previous approaches that rely on assumptions about the nature of the mapping φ(μ) to achieve efficient offline-online decoupling, as seen in works like <cit.>, we achieve efficient offline-online decoupling through the use of the empirical cubature method (ECM) <cit.>. ECM is a mesh-sampling and weighting hyper-reduction technique that has been successfully employed in several studies to attain high computational efficiency.
This paper is structured as follows. In Chapter <ref>, we present the formulation of the general framework for the fluid problem, including the formulations for the reduced and hyper-reduced order models. Section <ref> provides a detailed description of the finite element model utilised in this study, along with the two different geometric mappings employed. One mapping involves an affine transformation with straight walls, while the other employs a nonlinear mapping with curved walls. The obtained results are presented in Section <ref>. Finally, in Section <ref>, we discuss the conclusions drawn from this study and provide insights into future research directions.
§ FORMULATION
In order to maintain this paper as self-contained as possible, this section presents a concise overview of the stabilised finite element discretisation of the governing equations employed in this study. We then introduce the Galerkin Proper Orthogonal Decomposition (POD-Galerkin) and the Empirical Cubature Method (ECM) hyper-reduction techniques, highlighting the specific considerations related to the geometrically parametrised problem under investigation. We conclude the chapter by providing a summary of the simulation workflow for all the models employed in this study.
§.§ Governing Equations
As mentioned earlier in the introduction, we now proceed to explicitly state the fluid problem. The governing equations considered are the standard Differential-Algebraic incompressible Navier-Stokes equations in an Arbitrary Lagrangian Eulerian (ALE) frame of reference
{[ ∂u/∂ t+ ( c·∇) u - b - ∇·σ = 0 in Ω(μ) × (0,T] ,; ∇·u = 0 in Ω(μ) × (0,T] , ].
where u is the velocity, b are the body forces, σ = -p I + 2 ν∇^s u is the Cauchy stress tensor, p is the pressure, ν is the kinematic viscosity, c = u - u_m is the convective velocity, and u_m is the so-called mesh velocity, which results from the deformation of the domain and is a datum to the fluid problem. The governing equations, taking into account the provided expressions, can be written as follows:
{[ ∂u/∂ t+ ∇· ( c⊗u) - b + ∇ p - ∇·(2 ν∇^s u ) = 0 in Ω(μ) × (0,T] ,; ∇·u = 0 in Ω(μ) × (0,T] . ].
The initial conditions g_0, and Dirichlet and Neumann boundary conditions g_D and g_N are case-specific and are specified by the corresponding functions, as:
[ (u, p) = g_0(u, p, x) in Ω(μ) ×{0} ,; ( 2 ν∇^s u -p I)n = g_N(u, p, x) on ∂Ω_N(μ)×(0,T] ,; (u, p) = g_D(u, p, x) on ∂Ω_D(μ)×(0,T] , ]
with ∂Ω_N(μ)∪∂Ω_D(μ) = ∂Ω(μ) and ∂Ω(μ)_N ∩∂Ω(μ)_D = ∅.
§.§ Full Order Model FOM
Let us introduce the functional spaces to pose the weak formulation for the space discretisation as
𝒱 := {v∈ H^1 (Ω) |v = 0 on ∂Ω_D } , 𝒬 := L^2(Ω) .
In what follows, we will use the standard L^2(Ω) inner product notation ( · , · )_Ω for brevity. The weak form of the governing equations in Eq. <ref> is then given by:
{[ (∂u/∂ t , v)_Ω - ( c⊗u , ∇v)_Ω + 2 ν( ∇^s u , ∇^s v)_Ω - ( p , ∇·v)_Ω - ( b , v)_Ω = 0 ∀v∈𝒱 ,; ( q , ∇·u)_Ω = 0 ∀ q ∈𝒬 . ].
We define the finite element spaces 𝒱^h ⊂𝒱 and 𝒬^h ⊂𝒬. These spaces are spanned respectively by a set of basis functions {ψ_i}_i=1^N_v and {ψ̂_j}_j=1^N_p, being dim𝒱^h = N_v, and dim𝒬^h = N_p. The finite elements weak form of the governing equations can be expressed as follows:
{[ (∂u^h/∂ t , v^h )_Ω - ( c^h ⊗u^h , ∇v^h)_Ω + 2 ν( ∇^s u^h , ∇^s v^h)_Ω - ( p^h , ∇·v^h )_Ω - ( b , v^h )_Ω = 0 ∀v^h ∈𝒱^h ,; ( q^h , ∇·u^h )_Ω = 0 ∀ q^h ∈𝒬^h , ].
where c^h = u^h - u_m^h is the finite element convective velocity, u_m^h being the discrete mesh velocity.
To ensure the well-posedness of the problem, the selected spaces for the approximation of pressure and velocity should be compatible and satisfy the Ladyzhenskaya–Babuška–Brezzi (LBB) condition <cit.>. The LBB condition can be stated as follows:
inf_q^h ∈𝒬^h sup_v^h ∈𝒱^h ( q^h, ∇·v^h )/( ‖ q^h ‖_𝒬^h‖v^h ‖_𝒱^h) ≥α > 0 .
LBB compatible spaces necessarily comply with
dim𝒬^h ≤dim𝒱^h ,
however, in this work, we aim to use the same order of interpolation for the finite element approximation of both variables. To overcome the limitation imposed by the LBB condition, we employ the Variational Multiscale (VMS) stabilisation technique <cit.>.
§.§.§ Variational Multiscale VMS Stabilisation
The VMS approach introduces subgrid spaces 𝒱' and 𝒬', defined as:
𝒱 = 𝒱^h ⊕𝒱' , 𝒬 = 𝒬^h ⊕𝒬' ,
where ⊕ denotes the direct sum. The complementary velocity and pressure functions (u', p') are such that
u = u^h + u' , u^h ∈𝒱^h , u ' ∈𝒱' ,
p = p^h + p' , p^h ∈𝒬^h , p ' ∈𝒬' ,
u' = 0 on ∂Ω_D(μ) , u^h = g_D^h on ∂Ω_D(μ) .
In VMS, the complementary functions are modelled as
u' = - τ_M r_M (u^h, p^h) ,
p' = - τ_C r_C (u^h) ,
where τ_M and τ_C are the stabilisation parameters, and r_M and r_C denote, respectively, the momentum and mass conservation residuals, defined as
[ r_M (u^h, p^h) = ∂u^h/∂ t + ∇· (c^h ⊗u^h) - ∇· (2 ν∇^s u^h) + ∇ p^h - b ,; r_C (u^h) = ∇·u^h . ]
The modelling of the complementary functions with respect to the residuals ensures the consistency of the method, moreover the selection of τ_M and τ_C depends on the specific type of VMS to apply. For example, considering τ_M, τ_C = 0 leads to the Galerkin method. In the case of the quasi-static variational multiscale (QSVMS), which we employ in this study, the time evolution of the complementary functions is neglected <cit.>.
To proceed with an algebraic formulation, we consider a spatial tessellation of the domain Ω into N_el finite elements, such that ⋃_e=1^N_elΩ_e = Ω; moreover we specify standard finite elements shape functions with compact support and equal degree of interpolation. The velocity and pressure algebraic vectors are obtained by collecting the nodal solutions for both variables as 𝐮 = (u_1 , u_2 , …, u_dN_n)^T and 𝐩 = (p_1 , p_2 , …, p_N_n)^T, where N_n is the number of nodes, and d is the number of spatial dimensions. The QSVMS semi-discrete system can then be expressed as follows:
{[ Md 𝐮/d t + A𝐮 + C(𝐜) 𝐮 + D(𝐮,𝐩) + B^T 𝐩 = F ,; B𝐮 = E(𝐮, 𝐩) , ].
where the standard Galerkin finite element terms are given by
[ M = _^e M^e M_i j^e = ( ψ_j, ψ_i )_Ω^e the mass matrix ,; A = _^e A^e A_i j^e = ( ∇^s ψ_j, ∇^s ψ_i )_Ω^e the diffusion matrix ,; C(𝐜) = _^e C^e C_i j^e = - ( ψ_j, c^h ·∇ψ_i )_Ω^e the convection matrix ,; B = _^e B^e B_i j^e = - ( ψ̂_j , ∇·ψ_i )_Ω^e the pressure/divergence matrix ,; F = _^e F^e F_i^e =( b, ψ_i )_Ω^e the forcing term , ]
where _^e is the finite elements assembly operator, 1≤ i,j≤ n_en, and n_en is the number of nodes per element. The remaining terms are related to the VMS stabilisation and are defined as follows:
[ D(𝐮, 𝐩) = _^e D^e D_i = ( ∇ (ψ_i) u^h, τ_M r_M (u^h, p^h) )_Ω^e; + ( (∇ψ_i)^T u^h , τ_M r_M (u^h, p^h) )_Ω^e; - ( ψ_i , τ_M r_M (u^h, p^h) ⊗τ_M r_M (u^h, p^h) )_Ω^e; + ( τ_C r_C (u^h) , ∇·ψ_i )_Ω^e ,; E(𝐮, 𝐩) = _^e E^e E_i = ( ∇ψ̂_i , τ_M r_M (u^h, p^h) )_Ω^e . ]
§.§.§ Time Discretisation
We employ a generalised-α time integration scheme <cit.>. In these schemes, a set of parameters γ, β, α_m, α_f are defined and used to obtain a linear combination of the terms of the system in Eq. <ref> with respect to current and past time steps. Specifically, the Bossak scheme, which is second-order accurate and unconditionally stable, is used in this work. The application of the time integration scheme leads to the fully discrete system:
K_eff(d_n+ℓ) d_n+1 = F_eff ,
where K_eff(d_n+ℓ) ∈ℝ^𝒩×𝒩 is the fully discrete system matrix, 𝒩 is the total number of degrees of freedom, d_n+1∈ℝ^𝒩 is the finite elements solution vector at time step n+1, that is
d_n+1 =
[ 𝐮_n+1; 𝐩_n+1 ] .
The dependence of the matrix K_eff(d_n+ℓ) on the solution is due to the nonlinearity in the convective term; the subscript n+ℓ depends on the type of linearisation used, which we describe next.
§.§.§ Linearisation of the Discrete System
Given a solution at a time step n, the solution corresponding to time step n+1 is written in incremental form as
d_n+1 = d_n + Δd_n+1 ,
where Δ d∈ℝ^ 𝒩 is a solution increment. We define the residual R: ℝ^𝒩×𝒫→ℝ^𝒩 (highlighting the dependence of all terms of the residual on the geometric parameter μ∈𝒫), as
R(d_n+1 ; μ) := F_eff - K_eff(d_n+ ℓ) d_n+1 ,
together with a fixed-point iteration method of the form
- J ( d_n+1^k) δd = R (d_n+1^k) ,
Δd_n+1^k+1 = Δd_n+1^k + δd ,
where the Jacobian matrix J = ∂R / ∂d, and k is the current iterate index. We use a Picard method <cit.>, which amounts to choosing ℓ = 0 for the matrix K_eff(d_n+ ℓ) in Eq. <ref>. A Newton-Raphson method would amount to choosing the subindex ℓ = 1, and therefore taking the tangent in the current iterate.
§.§.§ Quantity of Interest QoI
It is not uncommon that, rather than the complete solution field d, a quantity of interest is computed through a given operator Γ on the solution field as:
z = Γ(d), z∈ℝ^α⊆ℝ^𝒩 .
Examples of this QoI are the lift and drag coefficients. For the case at hand, such a QoI will be given by the vertical component of the velocity at a selected position indicating the presence of the Coanda effect.
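As a concrete illustration of the operator Γ, the following sketch extracts the vertical velocity at the mesh node closest to the probe point; the dof ordering assumed in the comments is purely illustrative:

import numpy as np

def probe_vertical_velocity(d, node_coords, probe_xy):
    # d           : solution vector, assumed ordered as [u_1, v_1, ..., u_Nn, v_Nn, p_1, ...]
    # node_coords : (N_n, 2) array of nodal coordinates
    # probe_xy    : (2,) coordinates of the probe point p*
    i = int(np.argmin(np.linalg.norm(node_coords - probe_xy, axis=1)))
    return d[2 * i + 1]   # y-component of the velocity at the nearest node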
§.§ Reduced Order Model ROM
Projection-based reduced order models involve constructing an optimal basis that spans a subspace onto which a high-dimensional system can be projected and solved. Various techniques in the literature fall under this classification; for our research, we have chosen a Galerkin proper orthogonal decomposition (POD-Galerkin) technique.
POD, like other projection-based reduced order modeling techniques, comprises two stages:
* Offline stage: In this stage, a set of simulations is performed using the computationally expensive FOM, and the resulting solutions are stored in matrix form. These matrices are then analysed to obtain the aforementioned basis. Ideally, the produced ROMs should not require the evaluation of full dimensional variables. We accomplish this decoupling through a hyper-reduction training.
* Online stage: With the basis and additional hyper-reduction data available, the hyper-reduced order models (HROMs) can be efficiently launched for unexplored parameters at a fraction of the cost associated with the FOMs.
In the subsequent sections, we will delve into the details primarily concerning the offline stage of ROMs and HROMs.
§.§.§ Proper Orthogonal Decomposition POD-Galerkin
We define the discrete solution manifold as the set of solution vectors 𝐝 for all possible values of the parameters vector, that is
ℳ^h = {d(μ)|μ∈𝒫 , t ∈ [0,T] } ⊂ ℝ^𝒩 ,
We employ a linear approximation that, given a reduced order model solution at a time step n, d̃_n ∈ℝ^𝒩, the solution corresponding to the step n+1 is
d̃_n+1 = d̃_n + ΦΔ q_n+1 ,
where Δ q∈ℝ^ N is the reduced solution increment, with N ≤𝒩 (usually N ≪𝒩 is expected) , and Φ = [ϕ_1, …, ϕ_ N ] ∈ℝ^𝒩× N, is the reduced basis matrix, obtained by employing the proper orthogonal decomposition POD method <cit.>.
The procedure consists in taking m samples (FOM solutions) of the discrete solution manifold and storing them in a snapshots matrix S = [d_1, ⋯, d_m] ∈ℝ^𝒩× m. Here, each of the m samples corresponds to a time step. For this, we consider a function
f_μ: ℝ_+ →ℝ^p ,
t ↦μ ,
defining a trajectory over the discrete solution manifold with m cases corresponding to the pairs (t, f_μ(t)). The specific trajectories used in our investigation are shown later in section <ref>.
Having the snapshots matrix S at one's disposal, we apply the truncated singular value decomposition with a truncation tolerance 0≤ϵ_ SOL≤ 1, as S = U_N Σ_N V_N^T + E , where
U_N∈ℝ^𝒩× N , Σ_N = diag(σ_1, σ_2, …, σ_N) ∈ℝ^N × N , V_N^T∈ℝ^ N × m , ‖E‖≤ϵ_ SOL‖S‖ .
The optimal N-dimensional basis <cit.> is obtained as the truncated matrix of left singular vectors Φ:= U_N.
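In practice, the truncation rank N is taken as the smallest value for which the discarded singular values satisfy the prescribed tolerance (measured here in the Frobenius norm); a small NumPy sketch, given only for illustration:

import numpy as np

def pod_basis(S, eps_sol=1e-4):
    # Returns Phi = U_N such that ||S - Phi Phi^T S||_F <= eps_sol * ||S||_F.
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    energy = sigma**2
    # tail[k] = sum of sigma_i^2 for i >= k (energy discarded if the first k modes are kept)
    tail = np.append(np.cumsum(energy[::-1])[::-1], 0.0)
    N = max(1, int(np.argmax(tail <= eps_sol**2 * tail[0])))
    return U[:, :N]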
Substitution of Eq. <ref> into Eq. <ref>, and subsequent projection of the over-determined system of equations onto Φ (Galerkin projection), results in
Φ^T R(d̃_n+1; μ ) = 0 .
The fixed-point method for solving for the reduced increment Δ q∈ℝ^N is given as
- Φ^T J(d̃_n+1^k) Φδq = Φ^T R (d̃_n+1^k) ,
Δq^k+1_n+1 = Δq_n+1^k + δq .
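In matrix terms, each reduced iteration only adds two projections to the full-order assembly; an illustrative sketch (the assembly routines are again hypothetical placeholders, as in the full-order sketch above):

import numpy as np

def rom_time_step(d_prev, Phi, mu, assemble_residual, assemble_jacobian,
                  tol=1e-8, max_iter=25):
    # POD-Galerkin counterpart of the Picard step: the N x N system
    # Phi^T J Phi dq = -Phi^T R is solved instead of the full one.
    q = np.zeros(Phi.shape[1])                 # reduced increment Delta q
    for _ in range(max_iter):
        d = d_prev + Phi @ q
        Rr = Phi.T @ assemble_residual(d, mu)  # projected residual
        if np.linalg.norm(Rr) < tol:
            break
        Jr = Phi.T @ (assemble_jacobian(d, mu) @ Phi)
        q += np.linalg.solve(Jr, -Rr)
    return d_prev + Phi @ q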
The same operator [Although for a ROM the same operator can be employed, for an HROM not containing all elemental variables, an equivalent operator might be required] defining the QoI for the FOM, can be used for the reduced order model solution vector as
z̃ = Γ(d̃), z̃∈ℝ^α⊆ℝ^𝒩 .
§.§.§ Hyper-Reduction via Empirical Cubature
In order to reduce the cost of assembling the system in Eq. <ref> for each iteration, we employ the Empirical Cubature Method (ECM), first proposed in <cit.> and later refined in <cit.>.
Taking into account the finite elements discretisation employed, Eq. <ref> can also be represented as
Φ^T R(d̃; μ ) = ∑_e=1^N_elΦ^T L_e^T R_e (L_e d̃; μ ) = ∑_e=1^N_elΦ_e^T R_e (d̃_e; μ ) ,
where L_e ∈{0,1}^e_dof×𝒩 is the Boolean operator localising the high dimensional vector of dimension 𝒩 to the degrees of freedom associated to element e. Consequently, Φ_e and d̃_e are respectively the entries of the basis and reduced solution vector associated to element e.
One can approximate Eq. <ref> looping over the elements contained in a subset 𝔼⊂{ 1, 2, … , N_el} and multiplying every elemental contribution by a corresponding weight ω_e as
∑_e∈𝔼Φ_e^T R_e (d̃_e; μ ) ω_e = 0 .
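Once the pair (𝔼, ω) is available, the projected residual of the expression above is assembled by a loop restricted to the selected elements; a schematic version, in which the elemental residual routine and the dof-localisation arrays are assumptions, reads:

import numpy as np

def hrom_projected_residual(d_tilde, mu, selected, weights, elem_dofs, Phi, elem_residual):
    # selected      : element indices in E
    # weights       : corresponding positive weights omega_e
    # elem_dofs[e]  : dof indices of element e (the action of L_e)
    # elem_residual : callable (e, d_e, mu) -> R_e, the elemental residual (assumed)
    Rr = np.zeros(Phi.shape[1])
    for e, w in zip(selected, weights):
        dofs = elem_dofs[e]
        Rr += w * (Phi[dofs, :].T @ elem_residual(e, d_tilde[dofs], mu))
    return Rr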
The optimisation problem for finding 𝔼 and ω requires storing the elemental contributions in Eq. <ref> for each of the m studied cases.
Let the projected residual for element e for parameter case i (here we write μ_i to mean t = i, μ = f_μ(i), see Eq. <ref>) be defined as
ℝ^N ∋ℛ_ie = Φ_e^T R_e (d̃_e; μ_i ) .
We construct then the matrix of projected residuals for all N_el elements and m studied parameters, as
ℝ^N · m × N_el∋S_r =
[ ℛ_11 … ℛ_1N_el; ⋮ ⋱ ⋮; ℛ_m1 … ℛ_mN_el ] .
The exact assembly of the residuals, for the m studied parameters variations, is given as
ℝ^N · m∋d = S_r 1 ,
where 1 := {1}^N_el.
Let us consider the sparse vector of reduced weights ζ∈ℝ_+^N_el, with non-zero values ω at indices 𝔼. Then, the optimisation problem to solve is
(𝔼, ω) = arg min ‖ζ‖_0
s.t. ‖d - S_rζ‖_2 < ϵ‖d‖_2
ζ≽0 ,
where ‖·‖_0 represents the zero pseudo-norm (counting the number of non-zero entries of its argument), ϵ is a user-defined tolerance, and the squiggly inequality symbol ·≽· represents inequality with respect to the non-negative orthant, i.e.
ζ≽0ζ_i ≥ 0, i=1,…, N_el .
It is well known <cit.> that the optimisation problem in Eq. <ref> is computationally intractable (NP-hard) and therefore recourse must be made to either a suboptimal greedy heuristic or a convexification. The Empirical Cubature Method <cit.> tackles the problem by first computing a basis for the matrix S_r via a truncated SVD as,
S_r = U_βΣ_βG + E , ‖E‖≤ϵ_ RES‖S_r‖ .
We use the truncated matrix of right singular vectors G to define a vector b containing the assembly of the modes of the residual as
ℝ^β∋b = G 1 .
The optimisation problem solved in a greedy fashion by the ECM is
(𝔼, ω) = arg min ‖b - Gζ‖_2
s.t. ζ≽0 .
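For illustration, a much simplified greedy strategy in the spirit of the ECM is sketched below, using SciPy's non-negative least squares solver; it is not the algorithm of <cit.>, only a schematic variant that selects one column of G at a time and refits the weights:

import numpy as np
from scipy.optimize import nnls

def greedy_cubature(G, tol=1e-6):
    # G : (beta, N_el) basis of the projected-residual matrix; b = G @ 1.
    b = G @ np.ones(G.shape[1])
    selected, w, r = [], np.zeros(0), b.copy()
    while np.linalg.norm(r) > tol * np.linalg.norm(b) and len(selected) < G.shape[1]:
        scores = G.T @ r
        scores[selected] = -np.inf                  # never pick an element twice
        selected.append(int(np.argmax(scores)))
        w, _ = nnls(G[:, selected], b)              # non-negative weights
        r = b - G[:, selected] @ w
    keep = w > 0
    return np.array(selected, dtype=int)[keep], w[keep]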
§.§ Global Workflow
We now present the general workflow followed for running the μ-geometrically parametrised simulation. In Algorithm <ref>, the steps to be followed by either a FOM, ROM or HROM are delineated. The main difference among these occurs in point <ref>.
Algorithm <ref> has been applied first to a FOM, for a given function f_μ defining a training trajectory. The resulting snapshots matrix S was used to obtain a basis Φ for a ROM following the POD-Galerkin procedure presented in Section <ref>. Then, a ROM was run following again algorithm <ref>, while storing the projected residuals as outlined in Section <ref>.
It is worth noting that this way of performing the HROM training can be expensive (in particular in terms of memory). In other works, the authors have chosen to project the readily available snapshots matrices onto the column space of the basis as 𝐝_ projected = ΦΦ^T 𝐝, to finally obtain the residuals with respect to these projected snapshots. Our decision to run the complete workflow to obtain the residuals projected is completely consistent with the theory presented in section <ref>.
Finally, using the selected HROM elements and weights (𝔼, ω), the workflow was run once more for the training trajectory. For other trajectories f_μ, the same basis Φ, together with the selected elements and weights (𝔼, ω) are to be used to cheaply obtain an approximation to the solution field and/or QoI.
§ MODEL DESCRIPTION
The plane contraction-expansion channel used in this study is shown in Fig. <ref>. This geometry serves as the base or reference configuration. To obtain a deformed configuration for a given solution step, two different types of geometric mappings were applied, each of them depending on a single scalar geometric parameter μ∈ℝ. These mappings are an affine mapping denoted as φ_ AFFINE and a nonlinear mapping that combines Free Form Deformation (FFD) and Radial Basis Functions (RBF) denoted as φ_ FFD + RBF.
The affine mapping φ_ AFFINE changes the narrowing width w_c in a symmetric way while keeping the walls straight. On the other hand, the nonlinear mapping φ_ FFD + RBF yields deformed configurations that undergo more complex transformations and lead to curved walls of the narrowing.
§.§ Affine mapping
For the affine mapping, the only parameter that dictates the shape of the geometry is the narrowing width μ = w_c ∈ [0.1, 2.9]. Following <cit.>, the affine mapping
φ_ AFFINE: Ω_0 × [0.1, 2.9] →Ω ,
(x_0 , μ ) ↦x ,
is defined by decomposing the original domain Ω_0 into N_dom non-overlapping subdomains as,
Ω_0 = ⋃_i=1^N_domΩ_0^i , with Ω_0^i ∩Ω_0^j = ∅ ∀ i≠ j ,
and defining for each subdomain
x_0 = T_i(μ) (x -g_i ) + g_i , with x_0 ∈Ω_0 , x∈Ω , T_i ∈ℝ^2 × 2 and g_i ∈ℝ^2 .
For the geometry at hand, five distinct regions can be identified, as shown in Fig. <ref>. For each of them, a particular Jacobian allows to map the point in the original configuration to the deformed configuration, using as parameter the narrowing width w_c. Eq. <ref> shows the Jacobians corresponding to each of the coloured regions.
T_brown =
[ 1 0; 0 1 ] T_yellow =
[ 1 0; 0 2/(3 - w_c) ] T_green =
[ 1 0; 0 1/w_c ] T_grey =
[ 1 0; (1 - w_c)/2 1 ] T_blue =
[ 1 0; (w_c - 1)/2 1 ] .
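Since the relation above defines the mapping implicitly through its inverse, the deformed nodal coordinates are recovered as x = g_i + T_i^-1(x_0 - g_i); a small sketch follows, in which the per-node region assignment and the anchor points g_i are regarded as given data and are not specified here:

import numpy as np

def apply_affine_mapping(X0, region_id, T, g):
    # X0        : (N_n, 2) reference nodal coordinates
    # region_id : (N_n,) subdomain index of each node (brown, yellow, ...)
    # T, g      : dictionaries region -> T_i(mu) (2x2) and g_i (2,), assumed known
    X = np.empty_like(X0)
    for i, Ti in T.items():
        mask = region_id == i
        # row-wise x = g_i + T_i^{-1} (x0 - g_i)
        X[mask] = g[i] + (X0[mask] - g[i]) @ np.linalg.inv(Ti).T
    return X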
§.§ Nonlinear Mapping
We perform the nonlinear mapping in two stages. In the first stage, a Free Form Deformation (FFD) technique is applied to the boundary points of the 2D geometry. The second stage involves moving the points in the interior of the geometry. This is accomplished using a Radial Basis Function (RBF) interpolator.
In FFD, the geometry to morph is surrounded by a box of control points, as can be seen in Fig. <ref>. Let P_0 represent the set containing the original positions of the control points defining the bounding box. For our particular case, the set of deformed control points' positions P(μ) depends on a single scalar μ∈ [0,1]. The deformed position x∈∂Ω of a point x_0 ∈∂Ω_0 is then given as,
x = FFD(x_0, P_0, P(μ)) , ∀ x_0 ∈∂Ω_0 , x∈∂Ω .
For information on the specific expression of the mappings and blending functions used, the interested reader is directed to <cit.>. We employ the implementation of FFD from the Python library PyGeM <cit.>.
After the deformed boundary points have been computed, the deformed position of the points in the interior of the geometry x∈Ω can be obtained by applying an RBF interpolator.
Let x̂_i represent the i-th undeformed boundary mesh point (whose deformed position is obtained via the FFD mapping). The deformed position x∈Ω of a point x_0 ∈Ω_0 is then given as,
x = RBF(x_0) = ∑_i=1^N_bβ_i ϕ( x_0 - x̂_i ) + q(x_0) ,
where N_b is the number of points in the deformed boundary, ϕ(·) is a given basis function depending on the distance from the desired point in the interior to a mesh point in the boundary, and q(x_0) is a polynomial. The coefficients β_i and the polynomial q(x_0), are obtained as a function of the deformed boundary points by imposing the interpolation conditions
RBF( x̂_i) = FFD(x̂_i, P_0, P(μ)) , i=1, …, N_b .
Further details about Radial Basis Functions interpolation can be found in <cit.>. Successive application of the FFD mapping for all mesh points on the boundary of the geometry, followed by RBF parameters computation and application for all mesh points in the interior, completely defines the mapping
φ_ FFD+RBF: Ω_0 × [0,1] →Ω ,
(x_0 , μ ) ↦x .
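Assuming the deformed boundary positions have already been computed (e.g. with PyGeM's FFD), the interior update can be sketched with SciPy's radial basis function interpolator, which fits the coefficients β_i and the polynomial term of Eq. <ref> from the boundary data; the thin-plate-spline kernel used here is an assumption, as the paper's choice of basis function may differ:

import numpy as np
from scipy.interpolate import RBFInterpolator

def morph_interior(boundary_ref, boundary_def, interior_ref):
    # boundary_ref : (N_b, 2) reference boundary points x_hat_i
    # boundary_def : (N_b, 2) their FFD-deformed positions
    # interior_ref : (N_i, 2) reference interior points to be moved
    rbf = RBFInterpolator(boundary_ref, boundary_def - boundary_ref,
                          kernel="thin_plate_spline")
    return interior_ref + rbf(interior_ref)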
§.§ Mesh
The two geometric mappings require different control surfaces, each with a corresponding label, to be defined. For example, the affine mapping requires clearly identifying the coloured regions in Fig. <ref> in order to apply the respective transformations on them. Therefore, a conforming mesh fitted to each of them was generated. On the other hand, the FFD+RBF mapping only required identifying the upper and lower rectangles defining the narrowing. For this second case, the geometry could be meshed by considering the whole geometry as a single region, while only labelling correctly the required walls. We have used unstructured meshes for both cases because they are known to work better for the specific example under consideration <cit.>.
The bijective nature of the affine mapping preserves the positive Jacobians of the isoparametric finite elements under large deformations. On the other hand, using the FFD+RBF mapping can lead to distorted finite elements which intersect, or completely swap, whenever deformations are too large. The mesh used with the FFD+RBF mapping was chosen to be fine enough to observe the desired bifurcating phenomena under a relatively large deformation, while maintaining positive Jacobians of the isoparametric transformation for all elements. For comparison purposes, the same mesh size used for the FFD+RBF mapping was also employed for the affine one.
The general purpose software KratosMultiphysics <cit.> was used for launching the FOM, ROM and HROM simulations, by taking advantage of the KratosRomApplication. In Kratos, the finite elements discretisation is closely linked to the physics to be simulated. For the physics in these simulations, Navier-Stokes in an Arbitrary Lagrangian Eulerian framework, the only available option was to use P1-P1 triangular elements.
The information on the meshes used[ In KratosMultiphysics, the mesh entities are separated into 2D triangles and 1D boundary elements known in Kratos as Conditions. Given this distinction, there are 4916 Elements + 231 Conditions for the affine mapping, and 5152 Elements + 260 Conditions for the FFD+RBF mapping] can be seen in Table <ref>, while Figures <ref> and <ref> show the generated meshes.
In both figures, the presence of the probe point p^* makes it possible to monitor the evolution of the QoI, and therefore the occurrence of the Coanda effect. As the geometry morphs in time following a circular trajectory, a hysteresis plot can be created. The position of the probe point is selected to be located at a node in the corresponding finite element mesh that is close to the y-direction centre line, and slightly behind the narrowing.
The generation of the meshes was performed using the pre- and post-processor GiD <cit.>.
§.§ Boundary and Initial Conditions
Both models contemplate the initial and boundary conditions shown next
u = 0 , p = 0 , for x = ( x, y ) ∈Ω_0 and t = 0 ,
u = ( u, v ) = ( y(3-y) sin( t π/2) , 0 ) , at x = 0 for t ∈ [0,1] ,
u = ( u, v ) = ( y(3-y) , 0 ) , at x = 0 for t ∈ (1,T] ,
p = 0 , at x = 8 for t ∈ [0,T] .
As can be seen, the maximum horizontal velocity at the inlet is set to 3. Moreover, the inlet velocity has been allowed to undergo an initialisation period for one second following a smooth path.
§.§ Material Properties
The values of the viscosity and density for the fluid have been selected so that, given the distortion allowed by the finite element discretisations under the respective geometric mappings, the results were qualitatively comparable to the ones reported in similar works, e.g. <cit.>. Moreover, the material properties are held constant for all simulations.
For both models, we used the values of dynamic viscosity and density reported in the following table.
§ RESULTS
In this section, we begin by providing a qualitative overview of the full order model (FOM). We present the different geometrical configurations and showcase the observed Coanda effect along with their corresponding hysteresis loops. Additionally, we offer insights into the training process of the reduced order model (ROM) and hyper-reduced order model (HROM). We discuss the testing trajectory and present the results obtained from all models. To conclude this section, we provide a quantitative comparison including error and speedup factors among the models.
The models' files will be made available via the Examples repository of KratosMultiphysics<cit.> at <https://github.com/KratosMultiphysics/Examples/tree/master/rom_application/ContractionExpansionChannel>.
§.§ FOM Results
Before delving into the study of the hysteresis of the Coanda effect for the contraction-expansion channel under scrutiny, we first examine the behavior of the full order model (FOM) and how variations in the geometry impact the fluid solution.
Fig. <ref> illustrates the flow field for extreme values of the geometric parameter μ for both the affine mapping (ranging from 0.1 to 2.9) and the FFD+RBF mapping (ranging from 0 to 1) in the contraction-expansion channel.
For the case where the narrowing width w_c is set to 1 (Figures <ref> and <ref>), both mappings yield qualitatively similar solutions, characterized by a horizontally symmetric jet. However, as the narrowing width decreases, the Coanda effect becomes present, and small eddies appear on the asymmetric jets (Figures <ref> and <ref>).
Fig. <ref> exhibits a complex flow pattern. Notice the relatively small value of the narrowing width made possible by the affine mapping thanks to its bijectivity. On the other hand, in Fig. <ref>, the maximum distortion allowed by employing the FFD+RBF mapping produces a narrowing width of w_c = 0.422. Although this value is larger than the minimum narrowing width for the affine mapping, it still produces flow patterns relevant to the current investigation.
In Fig. <ref>, we show the possible solutions obtained for the same values of the geometric parameter μ for both mappings. Advanced methods can be used for obtaining one or the other of the stable solutions, e.g. <cit.>. For our specific case, we have observed that given a specific trajectory (earlier mentioned to be defined by f_μ), that is, given an initial value of the parameter μ, and then deforming the geometry in a closed loop, one of the stable solutions is obtained consistently, and a hysteresis loop can be observed.
§.§.§ Trajectories
Here, a trajectory is the ordered set of values of the geometric parameter defining a geometric morphing from an initial geometry Ω_0 to a maximally deformed geometry and back to the initial configuration. For this study, two trajectories have been used. The training trajectory was used for generating the snapshots for the basis matrix Φ, as well as the hyper-reduced set of elements and weights (𝔼, ω). Moreover, a testing trajectory was used for measuring the capacity of the generated ROMs to capture different geometric transformations.
In Fig. <ref>, the training trajectories for both mappings are shown. We keep the initial geometry static for a period of time to allow the flow to develop. Then, the morphing of the trajectory progressively makes the narrowing width w_c reach a minimum value. For the case of the FFD+RBF mapping, we keep the minimum value of the narrowing width long enough for the Coanda effect to develop. For the case of the affine mapping, the asymmetric jet is already strongly present in the solution by the time the maximum deformation is reached; therefore, for this mapping we immediately start the progressive deformation back to the initial configuration.
To evaluate the performance of the ROMs, we designed a specific testing trajectory for each of the considered mappings, as shown in Fig. <ref>. In contrast to the training trajectories, the initial and final states of the testing trajectory are swapped. This deliberate modification consistently triggers the opposite branch of the bifurcation compared to the training trajectory.
By employing this testing trajectory, we can assess the ability of the ROMs to accurately approximate simulation trajectories where the opposite branch of the bifurcation should be taken. It is expected that ROMs whose modes favor one of the branches in the bifurcation will face challenges in accurately capturing the behavior of the opposite branch. Therefore, this testing approach provides valuable insights into the performance and limitations of the ROMs in capturing the full range of system behavior.
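A trajectory of this kind can be encoded as a simple piecewise-linear function of time (hold, ramp to the maximally deformed configuration, hold, ramp back); the durations and parameter bounds in the sketch below are illustrative placeholders rather than the exact values used in the simulations:

import numpy as np

def make_trajectory(mu_ref, mu_deformed, t_hold0, t_ramp, t_hold1, dt, t_end):
    # Piecewise-linear f_mu(t): hold at mu_ref, ramp to mu_deformed,
    # hold there for t_hold1 (> 0), then ramp back to mu_ref.
    knots_t = [0.0, t_hold0, t_hold0 + t_ramp,
               t_hold0 + t_ramp + t_hold1,
               t_hold0 + 2.0 * t_ramp + t_hold1, t_end]
    knots_mu = [mu_ref, mu_ref, mu_deformed, mu_deformed, mu_ref, mu_ref]
    t = np.arange(0.0, t_end + dt, dt)
    return t, np.interp(t, knots_t, knots_mu)

# e.g. a training-like profile for the FFD+RBF mapping (all values assumed):
# t, mu = make_trajectory(0.0, 1.0, t_hold0=5.0, t_ramp=20.0, t_hold1=10.0, dt=0.1, t_end=60.0)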
§.§.§ FOM Training Hysteresis
By monitoring the velocity at the probe point p^* (see Fig. <ref> and Fig. <ref>), we can observe the hysteresis of the velocity that these training trajectories produce, as shown in Fig. <ref>.
As can be seen, for the training trajectories, the flow attaches to the lower wall in the case of the affine mapping, while a jet attaching to the upper wall is obtained when using the FFD+RBF mapping. These plots are to be approximated by the reduced order models, and for that purpose, can be considered the ground truth.
§.§ ROM Results
The POD procedure introduced in section <ref> is applied to the snapshots matrices corresponding to the training trajectories. The singular value decay of such snapshots matrices is shown in Fig. <ref>. It is observed that the decay profiles of the singular values for both cases, affine mapping and nonlinear mapping, are similar. This similarity can be attributed to the fact that the flow properties and the employed meshes are comparable in both cases.
The larger number of singular values for the affine mapping compared to the nonlinear mapping arises from the smaller time step required by the model with the affine mapping, particularly when the narrowing width w_c was at its minimum.
In this study we have explored four different truncation tolerances for the SVD of the snapshots matrices (we use a subscript SOL referring to the solution vector, in order to differentiate it from the snapshots of the projected residual used during the hyper-reduction). In particular ϵ_ SOL = { 1e-3, 1e-4, 1e-5, 1e-6 }, such that
‖S - ΦΦ^T S‖_F ≤ϵ_ SOL‖S‖_F ,
where S is the snapshots matrix and ‖·‖_F is the Frobenius norm. The leading velocity and pressure modes for the affine mapping are shown in Fig. <ref>, while the leading modes for the FFD+RBF mapping are shown in Fig. <ref>.
§.§.§ Selected Modes
The modes obtained for the affine mapping, in terms of velocity and pressure, exhibit a tendency for the asymmetric jet to bend towards the lower wall. This behavior is reflected in the dominant flow patterns captured by these modes.
On the other hand, the modes obtained for the FFD+RBF mapping exhibit a preference for the upper wall. These modes capture the flow patterns where the jet tends to attach to the upper wall of the channel.
The different modes obtained for each mapping highlight the distinct flow characteristics and behaviors induced by the geometric deformations in the training trajectories.
§.§.§ FOM vs ROM hysteresis for training trajectory
The Reduced Order Model of the training trajectory was launched taking the number of modes required to comply with the truncation tolerance ϵ_ SOL. The results in the phase-space for the quantity of interest v_y^* are shown in Fig. <ref>.
The behavior of the Coanda effect is accurately captured when reconstructing the training trajectory using the reduced order model (ROM) for all the considered truncation tolerances. However, it is observed that the ROM solution with a truncation tolerance of ϵ_ SOL = 1e-3 deviates considerably from the solutions of the other models, in particular for the affine mapping, and less so for the FFD+RBF mapping.
This discrepancy suggests that a higher truncation tolerance leads to a loss of accuracy in reproducing the Coanda effect phenomenon. Therefore, it is important to choose an appropriate truncation tolerance to ensure reliable and accurate ROM predictions of the Coanda effect behavior.
§.§.§ FOM vs ROM hysteresis for testing trajectory for both mappings
Fig. <ref> presents the testing trajectory plots, illustrating a comparison between the full order model (FOM) solution and the reduced order model (ROM) solutions for each truncation tolerance and both mappings.
In the case of the affine mapping, none of the truncation tolerances yield a basis capable of accurately reproducing the FOM trend in the Coanda effect. The ROMs constructed with the affine mapping fail to capture the correct branch of the bifurcation.
On the other hand, for the FFD+RBF mapping, only the ROM constructed with a tolerance of ϵ_ RES=1e-3 deviates to take the "wrong" branch of the bifurcation. The remaining ROMs show an improved performance compared to the affine mapping case. Although none of the ROMs perfectly match the FOM solution, they progressively approach it.
These results highlight the challenges of accurately capturing the Coanda effect and its hysteresis behavior using reduced order models, especially when dealing with testing trajectories that require capturing the behavior of the opposite branch of the bifurcation.
§.§ HROM
In order to obtain a more efficient reduced order model, we developed a hyper-reduced order model (HROM) by projecting the residual onto the corresponding solution basis as described in Section <ref>. The process involved taking the SVD of the matrix of projected residuals, denoted as S_r. The truncation tolerances chosen for S_r were ϵ_ RES = { 1e-3, 1e-4, 1e-5, 1e-6 }, satisfying the condition:
‖S_r - G^TGS_r‖_F ≤ϵ_ RES‖S_r‖_F .
Here, the matrix G(R, Φ) represents a basis for the row space of the projected residuals matrix. The Empirical Cubature Method (ECM) algorithm <cit.> was then applied to obtain the selected hyper-reduced elements and their corresponding positive weights:
(𝔼, ω) = ECM(G) .
Empirical observations have shown that the total number of selected elements is proportional to the number of modes in the POD basis
card( 𝔼 ) ∝rank(Φ) .
Based on experience, a rule of thumb has been developed to estimate the number of selected elements as the square of the number of modes in the solution basis. For some structural mechanics problems, this claim can be shown. For other physical problems, this rule of thumb provides a useful guideline for estimating the number of selected elements in the HROM.
§.§.§ Selected HROM elements
Figures <ref> and <ref> display the finite element meshes for both the affine mapping and FFD+RBF mapping, highlighting the selected HROM elements for all the combinations of the studied tolerances ϵ_ SOL and ϵ_ RES. It can be observed that as the tolerances become smaller, the number of selected elements increases, and they tend to accumulate in regions that match the patterns present in the corresponding modes (as shown in Figures <ref> and <ref>). Specifically, the selected elements concentrate in the narrowing area and either the lower or upper walls, depending on which branch of the bifurcation was preferred during the corresponding training trajectories.
§.§.§ Hysteresis ROM vs HROM training trajectory
The hyper-reduced order models (HROMs) were utilised to reconstruct the training trajectories for both geometries. Fig. <ref> and <ref> depict the phase space reconstruction of the velocity at the probe points. In these plots, the ground truth is represented by the corresponding ROM solution, and the HROM solution serves as an approximation to the ROM. Any attempts to improve the HROM should result in convergence towards the ROM solution.
In the case of the affine mapping shown in Fig. <ref>, the simulations using the HROM constructed with a truncation tolerance of ϵ_ RES = 1e-3 are not presented. These simulations consistently became unstable, yielding no useful data. Therefore, the largest truncation tolerance for the residual that is considered for the affine mapping is ϵ_ RES = 1e-4. However, it can still be observed that the HROM simulation with this tolerance is deficient, as it exhibits visible differences from the ROM solution and still presents instabilities (see Figures <ref>, <ref>, <ref>).
On the other hand, the phase space reconstruction of the training trajectory for the FFD+RBF mapping can be observed in Fig. <ref>. For this mapping, all four values of the truncation tolerance for the residual result in HROMs that produce stable solutions. However, it is noteworthy that using a larger value of ϵ_ RES (e.g., Figures <ref>, <ref>, <ref>) causes the solution to follow the opposite stable branch of the bifurcation. Only in Fig. <ref>, do all four studied values of ϵ_ RES yield HROMs that follow the same bifurcation branch as the corresponding ROM.
§.§.§ Hysteresis ROM vs HROM testing trajectory
Moving on to the reconstruction of the testing trajectory, we examine the results for both mappings. Figures <ref> and <ref> illustrate the ROM (taken as the ground truth) compared to the HROMs constructed using different truncation tolerances.
In the case of the affine mapping, where the FOM solution for the testing trajectory shows the jet attaching to the upper wall (Fig. <ref>), all of the HROMs in Fig. <ref> follow their respective ROMs, which means they take the opposite branch compared to the FOM. Furthermore, the HROMs with a tolerance of ϵ_ RES=1e-3 were unstable, similar to what was observed in the training trajectory.
In Fig. <ref>, the solutions for all combinations of truncation tolerances remain stable for the FFD+RBF mapping. However, the HROM with a truncation tolerance of ϵ_ RES=1e-3 produces solutions that deviate considerably from their corresponding ROMs. Specifically, in Fig. <ref>, the HROM with ϵ_ RES=1e-3 actually chooses the opposite branch of the bifurcation. It is important to recall that for the testing trajectory, the FOM consistently chose the lower branch (Fig. <ref>).
§.§ Error quantification
To quantitatively evaluate the results obtained for the different models, we define a percentage error operator as follows:
e : ( a, b ) ↦‖a - b‖/‖a‖· 100 .
Here, a and b can be either matrices or column vectors. In our case, we will use this operator to compare the snapshots matrices of the solution fields for the FOM, ROM, and HROM, denoted respectively as S_ FOM, S_ ROM, and S_ HROM. We also quantify the error in the reconstruction of our quantity of interest (QoI) at the probe point p^* (as shown in Figures <ref> and <ref>), denoted as v^*_y_ FOM, v^*_y_ ROM, and v^*_y_ HROM for each of the considered models.
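With NumPy, the operator amounts to a one-liner (the Frobenius norm is used for matrices and the Euclidean norm for vectors, which is NumPy's default behaviour):

import numpy as np

def percentage_error(a, b):
    # e(a, b) = ||a - b|| / ||a|| * 100, cf. Eq. <ref>
    return 100.0 * np.linalg.norm(a - b) / np.linalg.norm(a)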
§.§.§ Error training trajectories
Tables <ref> and <ref> provide a summary of the errors for the reconstruction of the training trajectories for both the complete solution fields and the QoI.
For the affine mapping (Table <ref>), it can be observed that using only 11 modes for the ROM leads to a reconstruction error of 6% for the solution field. Increasing the number of modes does not significantly reduce the error, with the lowest error achieved at 1.8% already with 23 modes. In terms of the QoI error, it reaches its minimum value with 43 modes and increases slightly with 70 modes.
Regarding hyper-reduction, it is evident from the analysis in the previous sections and can be observed quantitatively here that using a tolerance of 1e-3 for the HROM selection algorithm results in unstable solutions. The minimum tolerance for the HROM element selection algorithm ϵ_ RES appears to be 1e-4, and using 1e-5 yields better results.
The error results for the nonlinear geometric mapping are shown in Table <ref>. In this case, the minimum number of modes is 19 and the maximum is 110. As in the case of the affine mapping, here the ROM employing 110 modes exhibits a slightly higher error in both the complete solution field and the QoI compared to the ROMs with 67 or 38 modes. Similar to the affine mapping, the tolerance of ϵ_ RES = 1e-3 for the HROM element selection algorithm leads to comparatively large errors. However, in contrast to the affine mapping, all HROMs constructed with this tolerance remain stable.
In terms of the reconstruction of the complete solution field, it is observed that a minimum truncation tolerance of ϵ_ RES = 1e-4 yields satisfactory results, with maximum errors ranging from 9.47% (for ϵ_ SOL = 1e-5, ϵ_ RES = 1e-4) to a minimum of 0.67% (for ϵ_ SOL = 1e-6, ϵ_ RES = 1e-4). However, when assessing the reconstruction of the QoI, it might be advisable to adopt a minimum tolerance of ϵ_ RES = 1e-5 for HROM element selection, as it ensures errors below 1% in all cases studied.
§.§.§ Error testing trajectories
The errors for the reconstruction of the testing trajectories, including the complete solution fields and the QoI, are summarized in Tables <ref> and <ref>.
In Table <ref>, it is observed that the ROM using only 11 modes yields a 10% error in the reconstruction of the solution field compared to the FOM. As the number of modes increases, the error decreases, reaching a minimum of 8.99% with 70 modes. However, the errors in the QoI between the FOM and ROM remain above 145%. This is primarily due to the dominance of the opposite branch in the bifurcation diagram in the training trajectory, which differs from the branch induced by the FOM in this testing trajectory. The conclusions drawn from the training trajectory results regarding the tolerance parameter ϵ_ RES for the HROM algorithm are equally applicable to the testing trajectory. A tolerance of ϵ_ RES = 1e-4 yields potentially acceptable error values for the reconstruction of the solution field in the HROM compared to the corresponding ROM (except for the combination ϵ_ SOL = 1e-3, ϵ_ RES = 1e-4). However, for the reconstruction of the QoI, it is safer to adhere to tolerances of ϵ_ RES = 1e-5 for the HROM selection algorithm.
Table <ref> presents the error in the reconstruction of the FOM solution field using a ROM, ranging from 11% with 19 modes to 4.4% with 110 modes. Moreover, for the same number of modes, the error in the QoI goes from 270% to 16.37%. This expected behavior can be attributed to the ROM's ability to select the same branch in the bifurcation as the FOM for the testing trajectory, despite the training trajectory following the opposite branch.
Regarding the tolerance parameter ϵ_ RES for the HROM element selection algorithm, it can be stated that it delivers acceptable errors in terms of the reconstruction of the solution field. However, these errors can still be significant when considering the QoI. For instance, the combination ϵ_ SOL = 1e-5, ϵ_ RES = 1e-4 results in an error of 11.3% for the ROM vs HROM comparison in the QoI.
§.§ Speedup
In order to quantify the speedup factor, a comparison was conducted between the ROM and the HROM in relation to the FOM. This analysis considered the time required for the construction and solution of the linear system of equations. The time necessary for visualising the complete solution field was not taken into account.
To evaluate the speedup factor, we introduced a speedup operator denoted as s, which operates as follows:
s : T_ a ↦ T_ FOM / T_ a ,
where T_ a represents the time required for either the ROM or the HROM, while T_ FOM corresponds to the time taken by the FOM. The output of this operator indicates the number of times the reduced order model is faster in comparison to the FOM.
The subsequent tables provide an overview of the speed-up factors obtained for both geometrical mappings, and all of the examined values of basis truncation ϵ_ SOL and element selection truncation ϵ_ RES.
Based on the tabulated data, the speedup factors observed for the affine mapping surpass those of the FFD+RBF mapping. This discrepancy can be attributed to the fact that, at the point of maximum domain deformation, the model using the affine mapping necessitates a higher number of iterations by the solver, resulting in larger time savings.
When considering an admissible error threshold, employing the affine mapping with ϵ_ SOL = 1e-4 and ϵ_ RES = 1e-4 introduces a percentage error e(S_ FOM, S_ ROM) of approximately 2% in the training trajectory, while achieving a significant speedup of 50 times. However, this same error measure for the testing trajectory rises as high as 10%. By adopting more stringent tolerance values, such as ϵ_ SOL = 1e-5 and ϵ_ RES = 1e-5, the speedup is reduced to nearly 10 times, but the errors do not decrease substantially.
Conversely, in the case of the FFD+RBF mapping, utilising tolerances of ϵ_ SOL = 1e-4 and ϵ_ RES = 1e-4 yields speedup factors of 11 times, accompanied by errors e(S_ FOM, S_ ROM) of approximately 2% for the training trajectory and 10% for the testing trajectory.
Finally, we must report that we detected a significant increase in assembly time in our implementation as the number of POD modes increased. In scenarios where a large number of modes are involved, we deviated from the element-by-element approach outlined in Section <ref> for assembling the system of equations. Instead, we adopted a "global" approach. This entailed assembling the sparse system matrix in a manner similar to the Full Order Model, followed by a sparse-dense and a subsequent dense-dense matrix product with the complete basis matrix Φ. We observed that the element-by-element formulations consistently outperformed the global approach for cases with truncation tolerances of ϵ_ SOL≥ 1e-5 and ϵ_ RES≥ 1e-5. The speedup factors reported in Tables <ref> and <ref> show the superior formulation between the two options.
§ CONCLUSIONS AND PERSPECTIVES
In this paper, our focus was on investigating a general ROM framework for addressing time-dependent fluid dynamics problems with geometric parametrisations. This framework encompasses the utilisation of two powerful techniques: Proper Orthogonal Decomposition (POD) and Empirical Cubature Method (ECM) hyper-reduction. By employing these techniques, we aimed to effectively capture the intricate fluid behavior inherent in the contraction-expansion channel geometry. While this geometry offers a relatively straightforward setting, it still presents complex fluid dynamics phenomena, such as a bifurcating solution known as the Coanda effect.
By utilising ROMs and hyper-reduced order models (HROMs), we have successfully constructed accurate models capable of capturing both the training trajectories, which represent a specific deformation of the geometry over time, and the testing trajectories, which not only introduce different deformations of the domain, but also trigger the opposite branch of the bifurcation, compared to the training trajectory.
We have analysed the solution behavior in a phase-space, specifically focusing on a Quantity of Interest (QoI), which is the velocity in the y direction at a probe point. This QoI allows for the detection and characterisation of phenomena such as the Coanda effect and its hysteresis. By qualitatively assessing the outputs of the ROMs and HROMs in this phase-space plot, we gain insights into the performance of the models. Additionally, quantitative evaluations have been conducted to assess the accuracy of the complete solution field and the QoI.
As discussed in Sections <ref> and <ref>, there is a trade-off between accuracy and computational speedup. The HROM models exhibit significant speedups, reaching 154× for the affine mapping and 57× for the nonlinear mapping, while still providing physically acceptable and bounded solutions. However, these models incur relatively large errors in reproducing the complete solution field and the QoI of the full order model, particularly for testing trajectories. Despite these errors, these models can still be useful in applications where the detection of the Coanda effect is crucial, even if the selected bifurcation branch is incorrect. For more accurate results, HROMs offering 50× and 11× speedups while maintaining low errors can be employed.
§.§ Future work
There are several promising avenues for further advancing this research.
Firstly, it is important to acknowledge that our study focused on a single parameter variation within a mesh comprising a relatively small number of elements, which was suitable for our academic objectives. However, in scenarios where higher resolution and increased accuracy are required, it becomes imperative to employ a larger number of elements in the base model. Additionally, an effective reduced order model should be trained by exploring a multidimensional parameter space. Addressing these considerations necessitates addressing certain challenges within the framework presented in this paper. Specifically, the launch and analysis of simulations, as well as the management of the generated matrices via singular value decomposition, become computationally demanding on a single machine.
As mentioned in Section <ref>, the size of the snapshots matrices increases with the number of elements and the number of POD modes. To alleviate this challenge, we are actively engaged in the development of parallelisation techniques for the entire workflow. This includes parallel simulation orchestration, efficient data management strategies, and the implementation of parallel algorithms for computing the singular value decomposition. These parallelisation efforts aim to significantly enhance the computational efficiency and scalability of the training process, enabling the exploration of larger parameter spaces and higher fidelity models.
Furthermore, simulations involving qualitatively distinct solutions, such as the ones demonstrated in this paper, often require a large number of modes to capture the intricate behavior of the solutions. To address this challenge, we are exploring alternative strategies. One approach involves utilising multiple piece-wise linear bases that effectively capture the specific behavior in the vicinity of a particular region in the parametric space, as demonstrated in <cit.>. Additionally, we are investigating the utilisation of nonlinear manifolds, particularly quadratic approximations as proposed in <cit.>, to mitigate the requirement of a high number of modes. Lastly, we are actively exploring the application of autoencoder neural networks as a form of generic manifold Galerkin approximation <cit.>. We anticipate that the results of these advancements will be reported in subsequent papers, expanding upon the findings presented in this study.
§ ACKNOWLEDGEMENTS
The authors acknowledge financial support from
the Spanish Ministry of Economy and Competitiveness,
through the “Severo Ochoa Programme for Centres of
Excellence in R&D” (CEX2018-000797-S).
This project has received funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement No 955558. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and Spain, Germany, France, Italy, Poland, Switzerland, Norway.
This publication is part of the R&D project PCI2021-121944, financed by MCIN/AEI/10.13039/501100011033 and by
the “European Union NextGenerationEU/PRTR”.
J.R. Bravo acknowledges the Departament de Recerca i Universitats de la Generalitat de Catalunya for the financial support through the FI-SDUR 2020 scholarship.
J.A. Hernández also thanks the support of MCIN/AEI/10.13039/501100011033 and of "ERDF A way of making Europe" (PID2021-122518OB-I00).
|
http://arxiv.org/abs/2307.03978v2 | 20230708135448 | Separable MV-algebras and lattice-groups | [
"Vincenzo Marra",
"Matías Menni"
] | math.RA | [
"math.RA",
"math.AG",
"math.CT",
"math.LO",
"Primary: 06D35, Secondary: 06F20, 18B50, 12F10"
] |
General theory determines the notion of separable MV-algebra (equivalently, of separable unital lattice-ordered Abelian group). We establish the following structure theorem: An MV-algebra is separable if, and only if, it is a finite product of algebras of rational numbers—i.e., of subalgebras of the MV-algebra [0,1]∩ℚ. Beyond its intrinsic algebraic interest, this research is motivated by the long-term programme of developing the algebraic geometry of the opposite of the category of MV-algebras, in analogy with the classical case of commutative K-algebras over a field K.
§ INTRODUCTION
For any field K, a (commutative) K-algebra is separable if, and only if, it is a finite product of finite separable field extensions of K. See, for example, <cit.>. The aim of the present paper is to establish the analogue of this fact for MV-algebras and lattice-groups. We show as our main result that an MV-algebra is separable exactly when it is a finite product of algebras of rational numbers—the subalgebras of [0,1]∩ (Theorem <ref>). By a well-known theorem of Mundici <cit.>, the category of MV-algebras is equivalent to the category of lattice-ordered Abelian groups with a unit. We frame our treatment in the language of MV-algebras, and postpone to the final Appendix <ref> a synopsis of its translation to lattice-groups.
While the main result of this paper holds independent algebraic interest, it finds its deeper motivation in a broader mathematical landscape on which we offer some comments in this introduction.
As explained in <cit.>, some of Grothendieck’s algebro-geometric constructions may be abstracted to the context of extensive categories <cit.>.
A category with finite coproducts is extensive if the canonical functor
/X ×/Y →/(X + Y)
is an equivalence for every pair of objects X, Y in .
Extensivity attempts to make explicit a most basic property of (finite) coproducts in categories `of spaces'. For instance, the category of topological spaces and continuous functions between them is extensive; the category of groups is not.
Extensive experience indeed confirms that conceiving an extensive category as a category `of spaces' is a useful conceptual guide. Essential to the development of Algebraic Geometry is the fact that , the opposite of the category of (commutative unital) rings, is extensive.
(It easily follows that, for any ring R, the opposite of the category R/ of R-algebras is extensive.)
Extensivity naturally determines a notion of complemented subobject.
So, in an extensive category with finite products, it is also natural to consider the objects with complemented diagonal. These are traditionally called decidable objects, and it is useful to think of them as the `discrete spaces' inside the category `of spaces' where they live. For instance, a topological space is decidable if, and only if, it is discrete. For any ring R, and any R-algebra A, let A be the corresponding object in the extensive category (R/). Then A is decidable if, and only if, A is separable as an R-algebra. In other words, the separable R-algebras are precisely those for which the associated affine scheme is decidable.
Let us say that a category is coextensive if its opposite is extensive. In light of the above comments, an object in a coextensive category is called separable if the corresponding object in is decidable.
The category of MV-algebras is coextensive.
This provides the notion of separable MV-algebra that is the topic of the present paper. Explicitly, the MV-algebra A is separable if, and only if, there is a homomorphism f A + A → A such that the span
A [l]_-∇ A + A [r]^-f A
is a product diagram, where ∇ A+A→ A denotes the codiagonal map.
The geometry of has long been the subject of intensive hands-on study because of its striking connections with several areas of classical mathematics, from piecewise-linear topology to the geometry of numbers.
The characterisation of decidable objects in that we present here was motivated by our ongoing long-term project to study the `gros Zariski' topos determined by the theory of MV-algebras as the domain of a pre-cohesive geometric morphism <cit.>. We postpone the topos-theoretic consequences of separability to further publications; no Topos Theory is required for the proof of the purely algebraic results in the present paper.
The plan of the paper is as follows. In Sections <ref>, <ref>, and <ref> we introduce the necessary material to prove a sufficient condition for an extensive category with finite products to have the property that every decidable object is a finite coproduct of connected subterminals.
In Section <ref> we verify that is coextensive.
In Theorem <ref> we characterise the subterminal objects of the opposite of the category of MV-algebras as, in algebraic terms, the subalgebras of [0,1]∩ℚ.
In order to extend Theorem <ref> to a characterisation of separable MV-algebras we need to introduce the Pierce functor for , an analogue of the standard ring-theoretic functor by the same name.
The key fact is that the Pierce functor preserves coproducts. To prove it, in Section <ref> we develop the required material on the connected-component functor π_0 in . Using the theory of spectra of MV-algebras recalled in Section <ref> along with the topological π_0 functor, we are able to show in Theorem <ref> that the Pierce functor does preserve all coproducts. Theorems <ref> and <ref> are combined in Section <ref> to obtain our main result, the mentioned characterisation of separable MV-algebras. We conclude Section <ref> with a discussion that points to further research aimed at enriching the connected-component functor on to an `arithmetic connected-component functor'; this functor, we submit, arises out of locally finite MV-algebras. Finally, in Appendix <ref> we collect the translation of our main results to lattice-groups.
§ EXTENSIVE CATEGORIES AND CONNECTED OBJECTS
In this section we recall the definition of extensive category and of connected object.
For more details about extensive categories see, for example, <cit.> and references therein.
A category with finite coproducts is called extensive if for every X and Y in the canonical functor /X ×/Y →/(X + Y)
is an equivalence.
Examples of extensive categories are the category of sets and functions, the category of finite sets and functions, any topos, the category of topological spaces, the category of compact Hausdorff spaces and continuous maps, and the category of Stone[By a Stone space we mean a compact Hausdorff zero-dimensional space. Such spaces are often called Boolean in the literature.] spaces and continuous maps. The categories of rings, of Boolean algebras and of distributive lattices[Throughout the paper, with the exception of Appendix <ref>, we assume distributive lattices to have top and bottom elements preserved by homomorphisms.] are coextensive.
See <cit.> and <cit.> for further examples.
In extensive categories coproduct injections are regular monomorphisms,
coproducts of monomorphisms are monomorphisms, and
the initial object is strict in the sense that any map X → 0 is an isomorphism. Also, extensive categories are closed under slicing.
A coproduct in_0 X → X + Y ← Y :in_1 is
* disjoint if the coproduct injections are monic and the commutative square
0 [d] [r] Y [d]^-in_1
X [r]_-in_0 X + Y
is a pullback;
* universal if for every arrow Z → X + Y the two pullback squares below exist
V [d] [r] Z [d] [l]W[d]
X [r]_-in_0 X + Y [l]^-in_1 Y
and the top cospan is a coproduct diagram.
The following result is essentially <cit.>.
A category with finite coproducts is extensive if, and only if,
coproducts are universal and disjoint.
Assume from now on that is an extensive category.
A monomorphism u U → X in is called complemented if there is a v V → X such that the cospan
u U → X ← V :v is a coproduct diagram. In this case, v is the complement of u. Notice that complemented monomorphisms are regular monomorphisms because they are coproduct injections.
In the next definition, and throughout, we identify monomorphisms and subobjects whenever convenient.
An object X in is connected if it has exactly two complemented subobjects.
In the category of topological spaces, or in that of Stone spaces, an object is connected if and only if it has exactly two clopens.
A ring A, viewed as an object of the opposite of the category of rings, is connected if and only if A has exactly two idempotents.
We remark that, in general, connected objects are not closed under finite products.
For each X in we let X denote the poset of complemented subobjects of X.
We stress that if u U → X and v V → X are two complemented monomorphisms in and f U → V is such that v f = u then f is complemented <cit.>. So for any two complemented subobjects u, v of X, there is no ambiguity in writing u ≤ v since it means the same for u, v considered as subobjects, or as complemented subobjects.
Extensivity easily implies that the poset X has finite infima, a bottom element, and an involution.
This structure may be used to prove that X is actually a Boolean algebra which interacts well with pullbacks in the sense that, for any map f X → Y in , pulling back along f determines a Boolean algebra homomorphism Y → X.
So, assuming that is well-powered, the assignment X ↦ X extends to a functor → between extensive categories that preserves finite coproducts.
We will use the following simple equivalences.
For any object X in the following are equivalent.
* X is connected.
* X is not initial and, for every complemented subobject u U → X, U is initial or u is an isomorphism.
* X is not initial and, for every coproduct diagram U → X ← V, U is initial or V is initial.
§ FINITE-COPRODUCT PRESERVING FUNCTORS
Let and be extensive categories, and let L → preserve finite coproducts. Such a functor preserves complemented monomorphisms so, for any X in , L induces a function X →(L X) which is actually a map in , natural in X. (It is relevant to remark such a functor also preserves pullbacks along coproduct injections. See <cit.>.)
We will say that L is injective surjective/bijective on complemented subobjects if and only if X →(L X) has the corresponding property for every X in .
The functor L → is injective on complemented subobjects if and only if it reflects 0. In this case, L also reflects connected objects.
Assume first that L is injective on complemented subobjects and let X in be such that L X = 0.
Then (L X) is the terminal Boolean algebra and, as X →(L X) is injective by hypothesis, X is also trivial.
For the converse notice that if L reflects 0 then the map X →(L X) in has trivial kernel for every X in .
To prove the second part of the statement assume that X in is such that L X is connected in .
If X were initial then so would L X because L preserves finite coproducts and, in particular, the initial object. So X is not initial.
Now assume that U → X ← V is a coproduct diagram.
Then so is L U → L X ← L V. Since L X is connected, either L U or L V is initial by Lemma <ref>.
As L reflects 0, either U or V is initial, so X is connected by the same lemma. (Alternatively, if X →(L X) is injective and its codomain is the initial Boolean algebra then so is the domain.)
We will be particularly interested in extensive categories wherein every object is a finite coproduct of connected objects.
For example, the category of finite sets satisfies this property, but neither the category of sets nor that of Stone spaces does.
The opposite of the category of finitely presentable K-algebras, for K a field, also satisfies this property.
If L → is bijective on complemented subobjects then the following hold.
* The functor L preserves connected objects.
* For any object X in , if L X is a finite coproduct of connected objects then so is X.
* If every object in is a finite coproduct of connected objects then so is the case in .
* Assume that and have finite products and that L preserves them. If is such that finite products of connected objects are connected then so is the case in .
To prove the first item just notice that, by hypothesis, X →(L X) is an isomorphism for each X in . Hence if X has exactly two complemented subobjects then so does L X.
Before proving the second item we establish an auxiliary fact. Let X be in and let u U → L X be a complemented subobject in with connected U.
Then, as L is surjective on complemented objects by hypothesis, there exists a complemented subobject v V → X in such that L v = u as subobjects of L X. Then L V ≅ U is connected, so V is connected by Lemma <ref>.
Thus, we have lifted the `connected component' u of L X to one of X.
To prove the second item let (u_i | i ∈ I) be a finite family of pairwise-disjoint complemented subobjects of L X with connected domain whose join is the whole of L X.
For each i∈ I, let v_i be the complemented subobject of X induced by u_i as in the previous paragraph.
As L reflects 0, the family (v_i | i∈ I) is pairwise disjoint.
Also, L ⋁_i∈ I v_i = ⋁_i ∈ I L v_i = ⋁_i∈ I u_i is the whole of LX.
As L is injective on complemented subobjects, ⋁_i∈ I v_i must be the whole of X.
In summary, we have lifted the finite coproduct decomposition of L X to one of X.
The third item follows at once from the second.
For the fourth item, let X be the product of a finite family (X_i | i ∈ I) of connected objects in .
Then L X is the product of (L X_i | i ∈ I) because L preserves finite products.
Each L X_i is connected because L preserves connected objects by the first item, so L X is connected by our hypothesis on .
Hence X is connected by Lemma <ref>.
We next prove a sufficient condition for a functor L as above to be bijective on complemented subobjects.
If L → has a finite-coproduct preserving right adjoint, then L is bijective on complemented subobjects.
Let R be the right adjoint to L and let σ and τ be the unit and counit of L ⊣ R.
We show that L is both injective and surjective on complemented subobjects.
To prove injectivity it is enough to show that L reflects 0 (Lemma <ref>).
So let X be an object in such that L X is initial.
Then we may transpose the isomorphism L X → 0 in to a map X → R 0, but R 0 = 0 because R is assumed to preserve finite coproducts.
Since the initial object is strict, X is initial.
We next show that L is surjective on complemented subobjects.
Let u U → L X be a complemented monomorphism.
Then R u is complemented so the left pullback square below exists
V [d]_-v [r] R U [d]^-R u L V [d]_-L v[r] L(R U) [d]^-L(R u)[r]^-τ U [d]^-u
X [r]_-σ R (L X) L X [r]_-Lσ L(R (L X)) [r]_-τ L X
by extensivity of . Then the two squares on the right above obviously commute, and the bottom composite is the identity. Moreover, <cit.> implies that both squares are pullbacks, so u and L v coincide as subobjects of LX.
Combining Lemma <ref> and Proposition <ref> we obtain the following.
Assume that L → has a finite-coproduct preserving right adjoint. If every object in is a finite coproduct of connected objects then so is the case in .
§ DECIDABLE OBJECTS
Let be an extensive category with finite products.
In particular, has a terminal object 1.
An object X is called subterminal if the unique map X → 1 is monic.
For any object X in , the following are equivalent.
* The object X is subterminal.
* The diagonal Δ X → X× X is an isomorphism.
* The projections _0, _1 X× X → X are equal.
The first item implies the second because for any monomorphism X → 1 the following diagram
X [d]_-id[r]^-id X [d]^-!
X [r]_-! 1
is a pullback.
The second item implies the third because any map has at most one inverse.
To prove that the third item implies the first, let f, g Y → X. Then there exists a unique map fg Y → X × X such that _0 fg = f and _1 fg = g.
So f = _0 fg = _1 fg = g.
That is, for any object Y there is a unique map Y → X.
This means that the unique map X → 1 is monic.
We stress that extensivity plays no rôle in Lemma <ref>, which is a general fact about categories with finite products.
An object X in is decidable if the diagonal Δ X → X × X is complemented.
Lemma <ref> shows that subterminal objects in are decidable, and that they may be characterised as those decidable objects X such that the diagonal Δ X → X × X not only is complemented, but is actually an isomorphism.
The full subcategory of decidable objects will be denoted by →.
If is lextensive (i.e. extensive and with finite limits) it follows from <cit.> that is lextensive and that the inclusion → preserves finite limits, finite coproducts and that it is closed under subobjects. Moreover, for any X, Y in , X + Y is decidable if, and only if, both X and Y are decidable.
On the other hand, arbitrary coproducts of decidable objects need not be decidable—consider, for instance, an infinite copower of the terminal object in or .
For any object X in the following are equivalent:
* X is subterminal and connected.
* X is decidable and X × X is connected.
If X is subterminal and connected then Δ X → X× X is an isomorphism by Lemma <ref>.
So X is decidable and X× X is as connected as X.
For the converse assume that X is decidable and that X × X is connected.
Decidability means that the subobject Δ X → X × X is complemented; as X × X is connected, X is initial or Δ X → X × X is an isomorphism by Lemma <ref>. But X is not initial (because X× X is connected) so Δ X → X × X is an isomorphism. Then X is as connected as X× X, and X is subterminal by Lemma <ref>.
Let be another extensive category with finite products and let L → preserve finite products and finite coproducts.
Assume that L reflects 0 and that
1 is connected in . Then the following hold for every X in .
* If L X = 1 then X is connected.
* If X in is decidable and L X = 1 then X is subterminal.
The functor L reflects 0 so it reflects connected objects by Lemma <ref>.
As 1 is connected in by hypothesis, L X = 1 implies X connected.
If L X = 1 then L (X × X) = L X × L X = 1.
So X × X is connected by the first item.
Therefore X is subterminal by Proposition <ref>.
It easily follows from the definition of decidable object that L preserves decidable objects. In more detail, the preservation properties of L imply that the left-bottom composite below
[d] @.>[r] [d]
[r]_-L
factors uniquely through the right inclusion and, moreover, → preserves finite products and finite coproducts.
In fact, → preserves all the finite limits that L preserves (because the subcategories of decidable objects are closed under finite limits).
Additionally assume from now on that L → has a finite-coproduct preserving right adjoint R →.
Notice that under the present hypotheses both L and R preserve finite products and finite coproducts.
It follows that the adjunction L⊣ R restricts to one between and .
If every decidable object in is a finite coproduct of connected objects then so is the case in .
The adjunction L ⊣ R → restricts to one L' ⊣ R' →,
and every object in is a finite coproduct of connected objects by hypothesis.
So we may apply Corollary <ref> to L'→
Because is lextensive, there exists an essentially unique coproduct preserving functor → that also preserves the terminal object.
The functor sends a finite set I to the copower I· 1 in .
The categories , , and other examples have the property that this functor → coincides with →. Notice that if this condition holds then 1 is connected in , because = → is closed under subobjects and preserves 1.
If the canonical functor → coincides with → then every decidable object in is a finite coproduct of connected subterminals.
By Corollary <ref> every decidable object in is a finite coproduct of connected objects. So it is enough to prove that every connected decidable object in is subterminal. For this, let X be connected and decidable.
Then L X is decidable, because L preserves finite products and finite coproducts, and it is connected by Lemma <ref> and Proposition <ref>.
By hypothesis, the canonical → coincides with → so L X = 1.
Hence X is decidable and L X = 1. Therefore X is subterminal by Lemma <ref>.
For a lextensive category we have considered several conditions.
* Every decidable object is a finite coproduct of connected objects.
* Every decidable object is a finite coproduct of connected subterminals.
* The canonical functor → coincides with the inclusion →.
For a field K, (K/) satisfies the first condition but not the second. The categories and satisfy the third condition.
The third condition implies the second which, in turn, implies the first.
Proposition <ref> shows that for certain adjunctions L ⊣ R →, if satisfies the third condition then satisfies the second. This will be used to prove that satisfies the second condition (Theorem <ref>).
§ THE COEXTENSIVE CATEGORY OF MV-ALGEBRAS
For background on MV-algebras we refer to the standard textbooks <cit.>, of which we also follow the notation.
In this section we show that is coextensive by proving that products are codisjoint and couniversal (Proposition <ref>).
Let be a regular category with finite colimits.
If 0 → 1 is a regular epimorphism then products are codisjoint.
Let A be an object in .
As the composite 0 → A → 1 is a regular epimorphism by hypothesis, so is A → 1 by regularity of .
That is, not only 0 → 1 but actually any A → 1 is a regular epimorphism.
As every regular epimorphism is the coequalizer of its kernel pair, A → 1 is the coequalizer of the two projections A × A → A.
Also, as products of regular epimorphisms are epimorphisms, the product of id A → A and B → 1 is a regular epimorphism A × B → A × 1. That is, the projection A × B → A is a regular epimorphism.
To complete the proof we recall a basic fact about colimits:
for a commutative diagram as on the left below
E [d]_-e [r]<+1ex>^-e_0[r]<-1ex>_-e_1 D [d]_-d [r] B [d]
(A× A) × B [d]_-_0[rr]<+1ex>^-_0 × B[rr]<-1ex>_-_1 × B A× B [d]_-_0[r]^-_1 B [d]
F [r]<+1ex>^-f_0[r]<-1ex>_-f_1 A [r] Q
A× A [rr]<+1ex>^-_0[rr]<-1ex>_-_1 A [r] 1
such that d e_i = f_i e for i ∈{0, 1}, the top and bottom forks are coequalizers and e is epic, the inner right square is a pushout. Applying this observation to the diagram on the right above we obtain that the inner right square in that diagram is a pushout.
In particular, if is the category of models for an algebraic theory with at least one constant then the initial object 0 is non-empty and so 0 → 1 is a regular epimorphism. This is the case, of course, for =.
In , couniversality of products is entailed by the intimate relationship between idempotents and product decompositions. The situation for is analogous. An element b of an MV-algebra A is called Boolean if it satisfies one of the following equivalent conditions (see <cit.>):
b⊕ b=b
b⊙ b=b
b ∨ ¬b = 1
b ∧ ¬b = 0.
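By way of illustration (this computational aside is not part of the original text), the equivalence of the four conditions, and the fact that they single out 0 and 1 in a totally ordered algebra while detecting the 0/1-coordinate elements of a product, can be checked directly with exact rational arithmetic; all names in the sketch are ad hoc.

```python
from fractions import Fraction as F

# Lukasiewicz MV-operations, computed coordinatewise on tuples of rationals in [0, 1].
def neg(x):       return tuple(1 - t for t in x)
def oplus(x, y):  return tuple(min(F(1), s + t) for s, t in zip(x, y))
def otimes(x, y): return tuple(max(F(0), s + t - 1) for s, t in zip(x, y))
def join(x, y):   return tuple(max(s, t) for s, t in zip(x, y))
def meet(x, y):   return tuple(min(s, t) for s, t in zip(x, y))

def boolean_conditions(b):
    one, zero = tuple(F(1) for _ in b), tuple(F(0) for _ in b)
    return (oplus(b, b) == b, otimes(b, b) == b,
            join(b, neg(b)) == one, meet(b, neg(b)) == zero)

chain = [(F(k, 4),) for k in range(5)]                                  # the chain {0, 1/4, 1/2, 3/4, 1}
square = [(s, t) for s in (F(0), F(1, 2), F(1)) for t in (F(0), F(1))]  # {0, 1/2, 1} x {0, 1}

for a in chain + square:
    conds = boolean_conditions(a)
    assert all(conds) or not any(conds)        # the four conditions agree on every element tested

print([b for b in chain if all(boolean_conditions(b))])    # only 0 and 1 in a totally ordered algebra
print([b for b in square if all(boolean_conditions(b))])   # the four elements with 0/1 coordinates
```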
For x∈ A we let A → A[x^-1] be the quotient map induced by the congruence on A generated by the pair (x,1).
For any f A → B in the following diagram is a pushout
A [d]_-f[r] A[x^-1] [d]
B [r] B[(f x)^-1]
where the right vertical map is the unique one making the square commute.
Standard, using the universal property of the (horizontal) quotient homomorphisms.
For any MV-algebra A and every Boolean element x∈ A, let ⟨¬x⟩ be the ideal of A generated by {¬x}. Then the quotient q: A→ A/⟨¬x⟩ has the universal property of A → A[x^-1].
If k: A → B is such that k x = 1 then ¬x ∈ ker k, so ⟨¬x⟩ ⊆ ker k. By the universal property of quotients there is exactly one homomorphism c: A/⟨¬x⟩→ B such that cq=k.
In , the diagram
D [l]^-q_0 C [r]_-q_1 E
is a product precisely when there exists a Boolean element x∈ C such that q_0 has the universal property of C → C[(¬x)^-1] and q_1 has the universal property of C → C[x^-1].
When this is the case, the element x∈ C with the foregoing property is unique.
Assume the diagram is a product. Then there is a unique x∈ C such that q_i x=i, i=0,1. This x is Boolean because 0 and 1 are. Hence ¬x is Boolean too, and thus ⊕-idempotent; therefore, ⟨¬x⟩={c∈ C | c ≤ ¬x}. If c≤ ¬x then q_1c≤ q_1(¬x)=0, so q_1c=0 and c∈ ker q_1. If c∈ ker q_1 then q_1c=0≤ q_1(¬x) and q_0c≤ 1=q_0(¬x), so c≤ ¬x by the definition of product order. We conclude ker q_1=⟨¬x⟩. The projection q_1 is surjective so Lemma <ref> entails that q_1 has the universal property of C → C[x^-1].
An entirely similar argument applies to q_0.
Conversely, assume q_0 and q_1 have the universal properties in the statement.
By Lemma <ref> we may identify q_0 with C → C/⟨x⟩ and q_1 with C → C/⟨¬x⟩. So it is enough to show that the canonical map C → C/⟨x⟩ × C/⟨¬x⟩ is bijective.
Injectivity follows because if c ≤ x, ¬x then c ≤ x∧¬x=0, so ⟨x⟩∩⟨¬x⟩ = 0.
To prove surjectivity, let (q_0 c_0 , q_1 c_1) ∈ C/⟨x⟩ × C/⟨¬x⟩ with c_0, c_1 ∈ C and consider
c = (c_0 ∧ ¬x) ∨ (c_1 ∧ x) ∈ C. It is easy to check that C → C/⟨x⟩ × C/⟨¬x⟩ sends c in the domain to (q_0 c_0 , q_1 c_1) in the codomain.
The content of Lemma <ref> is far from new, cf. e.g. <cit.> and <cit.>. However, having expressed that content in the form that is most suitable for the sequel, we have included a proof for the reader's convenience.
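The product decomposition described in the lemma can be made concrete on a small finite example. The following sketch (illustrative only; the algebra, the element x, and all helper names are assumptions of the example) checks that, for the Boolean element x = (0, 1) of C = {0, 1/2, 1} × {0, 1}, the assignment c ↦ (c ∧ ¬x, c ∧ x) is a bijection onto the product of the principal ideals ⟨¬x⟩ and ⟨x⟩, with inverse (u, v) ↦ u ∨ v; these ideals serve as systems of representatives for the two quotients appearing in the lemma.

```python
from fractions import Fraction as F

def neg(c):     return tuple(1 - t for t in c)
def meet(c, d): return tuple(min(s, t) for s, t in zip(c, d))
def join(c, d): return tuple(max(s, t) for s, t in zip(c, d))
def leq(c, d):  return all(s <= t for s, t in zip(c, d))

# C = {0, 1/2, 1} x {0, 1}; the element x = (0, 1) is Boolean, with negation (1, 0).
C = [(s, t) for s in (F(0), F(1, 2), F(1)) for t in (F(0), F(1))]
x = (F(0), F(1))
nx = neg(x)

down_x  = [c for c in C if leq(c, x)]    # the principal ideal <x>     = {c : c <= x}
down_nx = [c for c in C if leq(c, nx)]   # the principal ideal <not x> = {c : c <= not x}

images = set()
for c in C:
    u, v = meet(c, nx), meet(c, x)
    assert u in down_nx and v in down_x
    assert join(u, v) == c               # (u, v) |-> u ∨ v recovers c
    images.add((u, v))
assert len(images) == len(down_nx) * len(down_x) == len(C)
print(len(down_nx), len(down_x), len(C))  # 3 2 6: C decomposes as a product of a 3-chain and a 2-chain
```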
is coextensive.
Any algebraic category is complete and cocomplete, so in particular it has finite products and pushouts.
We appeal to the characterization of extensive categories in Proposition <ref>.
Codisjointness of products follows from Lemma <ref> or from a direct calculation observing that the projections of a product A × B send (0, 1) to 0 and 1 respectively, so 0 = 1 must hold in the pushout.
It remains to show that products are couniversal.
So we consider the pushout of a product diagram as below
A [d]_-h [l]_- pr_0 A× B [d]^-f[r]^- pr_1 B [d]^-k
D [l]^-q_0 C [r]_-q_1 E
and prove that the bottom span is product diagram.
Indeed, observe that the Boolean element (0, 1) ∈ A× B is sent to the Boolean element ¬x, where x = f(1, 0) ∈ C, so, by Lemma <ref>, it is enough to check that q_0 inverts x and q_1 inverts ¬x;
but this follows from Lemma <ref>.
Although it was not necessary to prove the main result of this section, it seems worthwhile to observe that, in the context of algebraic categories, Lemma <ref> may be strengthened to a characterisation.
In any algebraic category, binary products are codisjoint if, and only if, the initial algebra has non-empty underlying set.
If the initial algebra 0 is not empty then the unique map 0 → 1 is a regular epimorphism so we can apply
Lemma <ref>.
For the converse implication notice that the following square
0 × 0 [d] [r] 0 [d]
0[r] 1
is a pushout by hypothesis. As any of the projections 0× 0 → 0 is split epic, its pushout 0 → 1 is a regular epimorphism, so 0 must be non-empty.
§ SUBTERMINALS IN , AND RATIONAL ALGEBRAS
The aim of this section is to characterize subterminal objects in .
Perhaps unexpectedly, the following fact will play an important rôle.
Monomorphisms in are stable under pushout.
It is well known <cit.> that, in algebraic categories, stability of monomorphisms under pushout is equivalent to the conjunction of the Amalgamation Property (AP) and of the Congruence Extension Property (CEP).
Pierce proved the AP for Abelian lattice-groups in <cit.>, and Mundici <cit.> observed that Pierce's result transfers through the functor Γ to MV-algebras. For a different proof of the AP for Abelian lattice-groups and MV-algebras, see <cit.>. The CEP for MV-algebras was proved in <cit.>; for an alternative proof, see <cit.>. For yet another proof in the more general context of residuated lattices, see <cit.>.
Most of the work will be done on the algebraic side, so it is convenient to start with an arbitrary
category with finite coproducts whose initial object is denoted 0.
As suggested above, we concentrate on the objects A such that the unique map 0 → A is epic. Notice that such an object is exactly a subterminal object in , but we prefer to avoid introducing new terminology such as `cosubterminal' or `supra-initial'.
For convenience we state here the dual of Lemma <ref>.
For any object A in , the following are equivalent:
* The map 0 → A is epic.
* The codiagonal ∇ A + A → A is an isomorphism.
* The coproduct injections in_0 , in_1 A → A + A are equal.
We shall also need a simple auxiliary fact.
Let 0→ A be epic and m B → A be a map.
If the coproduct map m + m B + B → A + A is monic then 0 → B is epic.
The following square commutes
B + B [d]_-m + m[r]^-∇ B [d]^-m
A + A [r]_-∇ A
by naturality of the codiagonal. The bottom map is an isomorphism by Lemma <ref>, and the left vertical map is monic by hypothesis. So the top map is also monic, as well as split epic; hence it is an isomorphism, and Lemma <ref> yields that 0 → B is epic.
Assume from now on that has finite colimits and that monomorphisms are stable under pushout. We stress that this stability property is quite restrictive. For instance, it does not hold in . On the other hand, we already know that it holds in by Lemma <ref>.
The map 0 → A is epic
if, and only if, for every monomorphism B → A, 0 → B is epic.
One direction is trivial and does not need stability of monomorphisms.
For the converse observe that, as monomorphisms are stable under pushout, finite coproducts of monomorphisms are monic.
So we can apply Lemma <ref>.
The following is a further auxiliary fact.
For any d A → D and e B → A in , if e is epic and the composite d e B → D is monic then d is an monic.
The right square below is trivially a pushout and, since e B → A is epic, the left square is also a pushout
B [d]_-e[r]^-e A [d]^-id[r]^-d D [d]^-id
A [r]_-id A [r]_-d D
so the rectangle is a pushout too. As the top composite is monic, and these are are stable under pushout by hypothesis, the bottom map is monic.
We emphasise the next particular case of Lemma <ref>.
Let d A → D be a regular epimorphism in .
If 0 → A is epic and 0 → D is monic then d is an isomorphism.
Assume now that our category with finite colimits and stable monomorphisms has a terminal object 1 such that for any object A in the unique A → 1 is a regular epimorphism.
This is common in algebraic categories.
A quotient of A in is an equivalence class of regular epimorphisms with domain A, where two such are equivalent if they are isomorphic as objects of A/.
An object A is simple if it has exactly two quotients, namely, those represented by A → 1 and id A → A.
So, if is an algebraic category, then an object is simple if and only if it has exactly two congruences.
To motivate the hypotheses of the following lemma observe that for every object A in , A is terminal or 0 → A is monic.
Similarly for and for K/ with K a field. In contrast, that is not the case in .
If for every object D of , D is terminal or 0 → D is monic, then for every epic 0 → A the following hold.
* A is simple or terminal.
* If m B → A is monic then B + B is simple or terminal.
To prove the first item let d A → D be a regular epimorphism. Then D is terminal or 0 → D is monic by hypothesis.
If 0 → D is monic then d is an isomorphism by Lemma <ref>.
So the only possible quotients of A are A → 1 or id A → A. So A is terminal or simple.
To prove the second item first recall that epimorphisms are closed under coproduct.
Then recall that, as monomorphisms are stable by hypotheses, they are closed under finite coproducts.
Therefore, m + m B + B → A + A is a monomorphism
and 0 = 0 + 0 → A + A is epic.
So, by Lemma <ref>, 0→ B + B is also epic. The first item implies that B + B is simple or terminal.
The material in this section applies to the case =, so we may now prove our first MV-algebraic result. For the proof we require a standard fact from the theory of MV-algebras and lattice-groups, which will also find further application later in this paper.
An ideal 𝔪 of the MV-algebra A is maximal if it is proper, and inclusion-maximal amongst proper ideals of A; equivalently, the quotient A/𝔪 is a simple algebra.
For every MV-algebra A, and for every maximal ideal 𝔪 of A, there is exactly one homomorphism of MV-algebras
h_𝔪: A/𝔪 ⟶ [0,1],
and this homomorphism is injective.
In connection with the result that follows, let us explicitly recall that the initial object 0 in is the two-element Boolean algebra {0,1}.
For any MV-algebra A the following are equivalent.
* A is a subalgebra of [0,1]∩ℚ.
* A is non-trivial and the unique map 0 → A is epic.
* The unique map 0 → A is monic and epic.
* A is simple and 0 → A is epic.
If A ⊆ [0,1]∩ℚ then A is certainly non-trivial, and <cit.> shows that the coproduct inclusions
in_0, in_1 A → A + A are equal.
So 0 → A is epic by Lemma <ref>.
The second and third items are clearly equivalent, and they imply the fourth by Lemma <ref>.
Finally, assume that A is simple and that 0 → A is epic.
By Hölder's Theorem (Lemma <ref>) together with simplicity, there is exactly one monomorphism A→ [0,1].
Now let r ∈ A and write ι A_r → A for the subalgebra of A generated by r.
As A_r is not trivial (and 0 → A is epic) Lemma <ref> implies that A_r + A_r is simple. Hence, by the computation in <cit.>, r must be rational.
§ THE Π_0 FUNCTOR FOR TOPOLOGICAL SPACES
In this section we show that the full inclusion → of the category of Stone spaces into that of compact Hausdorff spaces has a left adjoint π_0→ that preserves set-indexed products. The result just stated may be concisely referenced as follows. That the inclusion at hand is reflective is well known and flows readily from the universal property of the quotient topology. As shown in <cit.>, the reflection has “stable units”; we need not discuss this property here, except to recall that it easily implies that the left adjoint π_0 preserves finite products. Since Gabriel and Ulmer in <cit.> show that π_0 preserves cofiltered limits, π_0 preserves all products.[We are grateful to Luca Reggio and to Dirk Hofmann for pointing out to us, respectively, the relevance of <cit.> and of <cit.>.]
We give here a different proof that emphasises the key rôle of totally disconnected spaces in the general case. We first obtain a product-preserving left adjoint to the full inclusion of the category of totally disconnected topological spaces into .
We then show how to restrict this left adjoint to the categories of interest to us in the present paper.
We begin by recalling some relevant definitions and facts.
A topological space X is connected if it is so in the sense of Definition <ref>. A subset of a space is clopen if it is both closed and open. Then, a space X is connected if and only if it contains exactly two clopen sets, which are then necessarily ∅ and X. Equivalently <cit.>, X is connected if whenever X=A∪ B with A∩ B=∅ and A and B closed subsets of X, then exactly one of A and B is empty. If X is a space and x∈ X, the component of x in X, written C_x (with X understood), is defined as
C_x ≔ ⋃{C ⊆ X | x ∈ C and C is connected} ⊆ X.
It can be shown that C_x is a connected subspace of X <cit.>, and it therefore is the inclusion-largest such to which x belongs. Also, C_x is closed in X <cit.>. A topological space X is totally disconnected if for each x∈ X we have C_x={x}.
Consider the equivalence relation on X given by
x∼ y if, and only if, C_x=C_y,
and define
π_0X ≔ X/∼.
We equip π_0X with the quotient topology, and call it the space of components of X. We write
q X ⟶π_0X
for the quotient map.
For every continuous map f X→ Y between topological spaces there is exactly one map such that the square below commutes.
X [d]_-f[r] [d]^-π_0fπ_0X
Y[r] π_0Y
We first show that f X→ Y preserves the equivalence relation ∼ in (<ref>). Given x,x' ∈ X, suppose x∼ x', so that C_x=C_x'; call this common component C. Since continuous maps preserve connectedness <cit.>, f[C] is a connected subset of Y that contains both fx and fx'. Hence f[C] ⊆ C_fx∩ C_fx', which entails C_fx=C_fx'. This completes the proof that f preserves ∼. Existence and uniqueness of π_0 f follow from the universal property of the quotient X →π_0 X.
Lemma <ref> implies that the assignment
that sends f to π_0f extends to an endofunctor
π_0⟶.
This endofunctor determines the full subcategory , as we now show.
If C ⊆π_0 X is a connected subspace then so is q^-1 [C] ⊆ X.
Let q^-1[C]=F_1∪ F_2 with F_1 and F_2 disjoint closed subsets of X. For any y ∈ C we can write the fibre q^-1[{y}] as C_x for any x∈ q^-1[{y}]. Further, we can express C_x as the disjoint union
C_x=(F_1∩ C_x)∪ (F_2∩ C_x). And C_x is closed and connected, because it is a component. Hence exactly one of q^-1[{y}]=C_x ⊆ F_1 or q^-1[{y}]=C_x ⊆ F_2 holds, for each y ∈ C. We can then define
S_i ≔ { y ∈ C | q^-1[{y}] ⊆ F_i}, i=1,2,
to the effect that C=S_1∪ S_2 and S_1∩ S_2 =∅. By construction we have F_i=q^-1[S_i], i=1,2. The definition of quotient topology then entails that S_i is closed because F_i is. Since C is connected, exactly one of S_1 and S_2 is empty, and hence so is exactly one of F_1 and F_2.
For any space X, the quotient map q X →π_0X in (<ref>) is universal from
X to the full inclusion →.
We first show that π_0 X is totally disconnected.
Let C_y be the component of y ∈π_0 X, with the intent of showing it is a singleton.
By Lemma <ref>, since C_y is connected in π_0 X, so is q^-1[C_y] connected in X. Therefore q^-1[C_y] is contained in the component C_x of any x∈ X with x∈ q^-1[C_y]; and thus, the direct image q[q^-1[C_y]] is contained in q[C_x]={y}. Since q[q^-1[C_y]]=C_y, because q is surjective, we conclude C_y{y}, as was to be shown.
Let f X→ Y be a continuous map, with Y totally disconnected.
We already know from the proof of Lemma <ref> that f preserves ∼ so,
as Y is totally disconnected, x ∼ x' in X implies f x = f x' in Y.
The universal property of the quotient q X →π_0 X implies the existence of a unique g π_0 X → Y such that g q = f.
We conclude that the full inclusion → has a left adjoint that, with no risk of confusion, will again be denoted by π_0 →.
The functor π_0 → preserves all set-indexed products.
Consider a family (X_s | s ∈ S) of spaces in indexed by a set S and let
γπ_0 ∏_s∈ S X_s ⟶∏_s∈ Sπ_0 X_s
be the unique map such that the triangle below commutes
π_0 ( ∏_s∈ S X_s) [rd]_-π_0 _s[r]^-γ ∏_s∈ Sπ_0 X_s [d]^-_s
π_0 X_s
for every s ∈ S.
In other words,
γ ( C( x_s | s∈ S )) = (C x_s | s∈ S) ∈∏_s∈ Sπ_0 X_s
for any ( x_s | s∈ S ) in ∏_s∈ S X_s.
To prove that γ is injective assume that
γ ( q ( x_s | s∈ S )) =γ ( q ( y_s | s∈ S )) in ∏_s∈ Sπ_0 X_s.
That is, q x_s = q y_s in π_0 X_s for every s ∈ S.
By <cit.> we have
q ( x_s | s∈ S ) = q ( y_s | s∈ S ) in π_0 ( ∏_s∈ S X_s), so γ is injective.
To prove that γ is surjective observe that the following diagram commutes
[ld]_-q∏_s∈ S X_s [d]^-∏_s∈ S q[r]^-_s X_s [d]^-q
π_0 ( ∏_s∈ S X_s) [r]_-γ ∏_s∈ Sπ_0 X_s [r]_-_s π_0 X_s
for every s∈ S, so the inner triangle commutes.
As products of surjections are surjective, the inner vertical map is surjective and hence so is γ, the bottom map of the triangle.
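As a toy sanity check of product preservation (finite spaces only, so this does not substitute for the argument above; the spaces, their encoding as sets of frozensets, and all helper names are ad hoc assumptions of the sketch), one can compute components directly from the definition for two small finite spaces and for their product, and compare the counts.

```python
from itertools import combinations

def subsets(points):
    pts = list(points)
    return [frozenset(c) for r in range(len(pts) + 1) for c in combinations(pts, r)]

def is_connected(S, opens):
    # connected = non-empty, with no clopen subset (in the subspace topology) other than {} and S
    if not S:
        return False
    sub_opens = {O & S for O in opens}
    clopens = [T for T in subsets(S) if T in sub_opens and (S - T) in sub_opens]
    return len(clopens) == 2

def components(points, opens):
    comps = set()
    for x in points:
        connected_through_x = [S for S in subsets(points) if x in S and is_connected(S, opens)]
        comps.add(frozenset().union(*connected_through_x))   # the component of x
    return comps

def product_space(pts_x, opens_x, pts_y, opens_y):
    pts = {(a, b) for a in pts_x for b in pts_y}
    def is_open(S):   # open iff each point of S has an open box around it inside S
        return all(any(a in U and b in V and {(u, v) for u in U for v in V} <= S
                       for U in opens_x for V in opens_y)
                   for (a, b) in S)
    return pts, {S for S in subsets(pts) if is_open(S)}

# X: four points forming two components; Y: the Sierpinski space, which is connected.
X = {0, 1, 2, 3}
opens_X = {frozenset(s) for s in [(), (0,), (0, 1), (2, 3), (0, 2, 3), (0, 1, 2, 3)]}
Y = {"a", "b"}
opens_Y = {frozenset(s) for s in [(), ("a",), ("a", "b")]}

XY, opens_XY = product_space(X, opens_X, Y, opens_Y)
nX, nY, nXY = len(components(X, opens_X)), len(components(Y, opens_Y)), len(components(XY, opens_XY))
print(nX, nY, nXY)       # 2 1 2
assert nXY == nX * nY    # |pi_0(X x Y)| = |pi_0 X| * |pi_0 Y| on this example
```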
We next identify a related construction which will provide a useful alternative description of π_0 when restricted to .
Let us write (X,) for the set of continuous maps from the space X to the discrete two-point space {0,1}. There is a canonical continuous function
E = ⟨ f| f ∈(X,)⟩ X⟶^(X,),
x⟼ ( f x | f∈(X,) ).
For any subset S X, write χ_S X→ for the characteristic function defined by χ_S x=1 if, and only if, x∈ S.
Then S is clopen precisely when χ_S∈(X,). Thus, E in (<ref>)
can equivalently be described as the function that sends each point x ∈ X to the set of clopen subsets of X that contain x.
In order to prove the next lemma recall <cit.> that the quasi-component of x ∈ X is defined as
C̄_x ≔ ⋂{S ⊆ X | S is clopen, and x ∈ S}.
It is clear that the quasi-components of a space X partition X into closed non-empty sets.
The relation between E and quasi-components may be stated as follows.
For any x, x' ∈ X, E x = E x' if and only if C̄_x = C̄_x'.
If E x = E x' then clearly C̄_x = C̄_x'.
For the converse assume that C̄_x = C̄_x' and let S ⊆ X be a clopen containing x. Then x' ∈ C̄_x' = C̄_x ⊆ S.
That is, x' ∈ S.
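For a finite space the map E, and with it the partition into quasi-components, can be computed by simply listing clopens. The sketch below is purely illustrative (the chosen space and all names are assumptions of the example); for this particular space the quasi-components happen to coincide with the components, so the two blocks found are exactly the points of π_0.

```python
from itertools import combinations

# A four-point space whose clopen subsets turn out to be {}, {0,1}, {2,3} and the whole space.
points = [0, 1, 2, 3]
opens = {frozenset(s) for s in [(), (0,), (0, 1), (2, 3), (0, 2, 3), (0, 1, 2, 3)]}

all_subsets = [frozenset(c) for r in range(len(points) + 1) for c in combinations(points, r)]
clopens = [S for S in all_subsets if S in opens and frozenset(points) - S in opens]

def E(x):
    # E sends a point to the set of clopens containing it; its fibres are the quasi-components.
    return frozenset(S for S in clopens if x in S)

blocks = {}
for x in points:
    blocks.setdefault(E(x), []).append(x)

print([sorted(block) for block in blocks.values()])   # [[0, 1], [2, 3]]
print(len(blocks))                                    # 2 blocks, i.e. two points of pi_0 for this space
```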
The reader should beware that the quasi-component C̄_x of x∈ X in general fails to be connected. Indeed, the inclusion C_x ⊆ C̄_x always holds for each x∈ X <cit.>, and may be proper <cit.>. However:
For any X there exists a unique E' π_0 X →^(X,) such that the following diagram
X @(d,l)[rd]_-E[r]^-q π_0 X [d]^-E'
^(X,)
commutes.
Let x, x' ∈ X be such that x ∼ x'; that is, C_x = C_x'.
Then
x ∈ C_x ∩ C_x' ⊆ C̄_x ∩ C̄_x'
so, as quasi-components are equal or disjoint, C̄_x = C̄_x'.
That is, E x = E x' by Lemma <ref>.
Let
X [r]^-D π'_0 X [r]^-m ^(X,)
be the epi/regular-mono factorization of the canonical map E in (<ref>). Then the following square commutes
X [d]_-D[r]^-q [d] π_0X @.>[ld]|-c[d]^-E'
π'_0X[r]_-m ^(X,)
by Lemma <ref> and, as q is regular-epi and m is monic, there is exactly one continuous map cπ_0(X)→π'_0(X) making the inner-triangles commute.
Since D is epic, so is c.
Also, since m is a regular mono, π_0'X carries the subspace topology inherited from the product ^(X,) and, as the latter is a Stone space, π_0'X is Hausdorff.
If X is compact Hausdorff then c π_0 X →π_0' X is an isomorphism and these isomorphic spaces are Stone spaces.
First recall <cit.> that, in any compact Hausdorff space X, the equality C_x = C̄_x holds for each x∈ X.
In other words, in this case, the function π_0 X →π_0' X is bijective.
Also, since X is compact, so is π_0 X because q is surjective.
Hence, as we already know that π_0' X is Hausdorff, the Closed Map Lemma implies that c is an isomorphism.
Similarly, compactness of X implies compactness of π_0' X and hence, the Closed Map Lemma implies that m is closed. Therefore, π_0'X is a closed subspace of the Stone space ^(X,).
It is classical that each Stone space is totally disconnected, so there is a full inclusion → such that the following diagram
[d] [r] [d]
[r]
commutes. Lemma <ref> implies that the composite
[r] [r]^-π_0
factors through the full inclusion →.
The factorization will be conveniently denoted by π_0 →.
The functor π_0→ is left adjoint to the full inclusion →, and preserves all set-indexed products.
Since, as observed above, π_0→ restricts to π_0→, the fact that the former is a left adjoint to → (Lemma <ref>) restricts to the fact that π_0→ is left adjoint to →.
It is standard that products in and in agree with products in (using, in particular, Tychonoff's Theorem that any product of compact spaces is compact), so Proposition <ref> entails that π_0→ preserves all set-indexed products.
§ SPECTRA OF MV-ALGEBRAS
In this section we recall the material about spectra of MV-algebras that is needed in the sequel.
Recall that an ideal 𝔭 of an MV-algebra A is prime if it is proper, and the quotient A/𝔭 is totally ordered. The (prime) spectrum of an MV-algebra A is
Spec A ≔ {𝔭 ⊆ A | 𝔭 is a prime ideal of A}
topologised into the spectral space of A, as follows. For a subset S ⊆ A, define
V(S) ≔ {𝔭 ∈ Spec A | S ⊆ 𝔭},
supp(S) ≔ Spec A ∖ V(S) = {𝔭 ∈ Spec A | S ⊈ 𝔭}.
The set V(S) is called the vanishing locus, or zero set, of S, while supp(S) is called its support. If a ∈ A, write V(a) as a shorthand for V({a}), and similarly for supp(a). Then the collection
{V(I) | I is an ideal of A}
is the set of closed sets for a topology on Spec A that makes the latter a spectral space in the sense of Hochster <cit.>. The collection
{supp(a) | a ∈ A}
is a basis of compact open sets for this topology; see <cit.> and <cit.>. The topology is variously known as the Stone, Zariski, or hull-kernel topology of A.
The assignment A ↦ Spec A extends to a functor, because inverse images of prime ideals along homomorphisms are prime. Although it is common to take the codomain of Spec as the category of spectral spaces and spectral maps, for our purposes in this paper it is expedient to regard Spec as taking values in the category of topological spaces.
The maximal spectrum of an MV-algebra A is
Max A ≔ {𝔪 ⊆ A | 𝔪 is a maximal ideal of A}.
We have Max A ⊆ Spec A, or equivalently, any simple MV-algebra is totally ordered (see e.g. <cit.>).
The maximal spectral space of A is the set Max A equipped with the subspace topology it inherits from Spec A. Then Max A is a compact Hausdorff space <cit.>, and every compact Hausdorff space arises in this manner from some MV-algebra A <cit.>.
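For a finite MV-algebra the prime and maximal spectra can be enumerated by brute force. The following sketch is illustrative only (the chosen algebra and all helper names are assumptions of the example): it lists the ideals of the product of the chains {0, 1/2, 1} and {0, 1}, and tests each proper ideal for primality — via total order of the quotient — and for maximality; here every prime is maximal, and both spectra consist of the kernels of the two coordinate projections.

```python
from fractions import Fraction as F
from itertools import combinations

def oplus(a, b):  return tuple(min(F(1), s + t) for s, t in zip(a, b))
def ominus(a, b): # a "minus" b; a/I <= b/I holds in the quotient A/I iff ominus(a, b) lies in I
    return tuple(max(F(0), s - t) for s, t in zip(a, b))
def leq(a, b):    return all(s <= t for s, t in zip(a, b))

# A is the product of the chains {0, 1/2, 1} and {0, 1}.
A = [(s, t) for s in (F(0), F(1, 2), F(1)) for t in (F(0), F(1))]
zero = (F(0), F(0))

def is_ideal(I):
    return (zero in I
            and all(c in I for i in I for c in A if leq(c, i))    # downward closed
            and all(oplus(i, j) in I for i in I for j in I))      # closed under oplus

def is_prime(I):   # proper, with totally ordered quotient
    return len(I) < len(A) and all(ominus(a, b) in I or ominus(b, a) in I for a in A for b in A)

def is_maximal(I, ideals):
    return len(I) < len(A) and not any(I < J < set(A) for J in ideals)

candidates = [set(c) for r in range(1, len(A) + 1) for c in combinations(A, r)]
ideals = [I for I in candidates if is_ideal(I)]
primes = [I for I in ideals if is_prime(I)]
maximals = [I for I in ideals if is_maximal(I, ideals)]
print(len(ideals), len(primes), len(maximals))   # 4 2 2: here Spec A = Max A has two points,
                                                 # the kernels of the two coordinate projections
```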
The standard example of MV-algebra, the interval
[0,1] equipped with the constant 0 and the operations ⊕, ¬, generalises as follows. If X is any set, the collection [0,1]^X of all functions from X to [0,1] inherits the structure of an MV-algebra upon defining operations pointwise. If, additionally, X is a topological space, since ⊕: [0,1]^2→ [0,1], ¬: [0,1]→[0,1], and 0 are continuous with respect to the Euclidean topology of [0,1], the subset
𝒞(X) ≔ {f: X→ [0,1] | f is continuous}
is a subalgebra of the MV-algebra [0,1]^X. We shall describe a natural MV-homomorphism η_A: A ⟶ 𝒞(Max A), for each MV-algebra A. Its existence descends from Hölder's Theorem (Lemma <ref>), which allows us to define a close analogue to the Gelfand transform in functional analysis. Indeed, in light of that result, to a∈ A and 𝔪 ∈ Max A we associate the real number h_𝔪(a/𝔪) ∈ [0,1], obtaining the function
â: Max A ⟶ [0,1]
𝔪 ⟼ h_𝔪(a/𝔪).
It can be shown <cit.> that the function (<ref>) is continuous with respect to the Stone topology of Max A and the Euclidean topology of [0,1].
We thereby arrive at the announced homomorphism
η_A: A ⟶ 𝒞(Max A)
a ⟼ â
for each MV-algebra A.
For any MV-homomorphism h: A→ B and any 𝔪 ∈ Max B we have h^-1(𝔪) ∈ Max A. Moreover, the inverse-image map h^-1: Max B → Max A is continuous with respect to the Stone topology.
The first assertion is proved in <cit.>. The second assertion is a straightforward verification using the definition of Stone topology.
In light of Lemma <ref> we henceforth regard as a functor:
⟶^ op,
where denotes the category of compact Hausdorff spaces and their continuous maps.
Given a continuous map f X → Y in , it is elementary that the induced function
(f)(Y) ⟶(X),
g∈(Y) ⟼ g∘ f ∈(X)
is a morphism in . We therefore regard as a functor:
^ op⟶.
There is an adjunction
⊣^ op→
known as the Cignoli-Dubuc-Mundici adjunction <cit.>; see <cit.> for further references and details not mentioned below.
Dually to (<ref>), for any space X in there is a continuous map
ϵ_X X ⟶(X)
x ⟼{f∈(X)| f(x)=0},
and it is a standard fact that ϵ_X is a homeomorphism. (Compare <cit.>.)
Writing 𝕀 C for the identity functor on a category C, we can summarise the adjunction as follows.
The functor is left adjoint to the fully faithful , i.e. ⊣^ op→. The unit and the counit of the adjunction are the natural transformations η𝕀→ and ϵ→𝕀^ op whose components are given by (<ref>) and (<ref>), respectively.
§ THE PIERCE FUNCTOR PRESERVES COPRODUCTS
The category of Boolean algebras may be identified with the domain of the full subcategory → determined by the MV-algebras whose operation ⊕ is idempotent. It is then clear that → is a variety so, in particular, it has a left adjoint.
It also has a right adjoint that we now describe.
We write 𝔅A for the collection of all Boolean elements of the MV-algebra A. By <cit.>, 𝔅A is the largest subalgebra of A that is a Boolean algebra. A homomorphism h: A→ B preserves Boolean elements, because the latter are defined by equational conditions. Therefore, h induces by restriction a function 𝔅h: 𝔅A→𝔅B that is evidently a homomorphism of Boolean algebras. We thus obtain a functor 𝔅 from the category of MV-algebras to that of Boolean algebras; we call it the Pierce functor because of the close analogy with the theory developed in <cit.> for rings.
The functor 𝔅 is right adjoint to the inclusion functor of Boolean algebras into MV-algebras.
This is a direct consequence of the fact that 𝔅A is the largest Boolean subalgebra of A, for any MV-algebra A.
The proof of Proposition <ref>—in particular, Lemma <ref>—makes it clear that 𝔅 is essentially the `complemented subobjects' functor determined by the extensive opposite of the category of MV-algebras.
We now embark on the proof of the central fact that 𝔅 preserves coproducts. Our aim is to reduce the problem to a situation where we can apply the topological results in Section <ref>.
For any MV-algebra A and any element a∈ A, a is Boolean if, and only if, for each prime ideal 𝔭 of A, we have a/𝔭 ∈{0,1} ⊆ A/𝔭.
Let C be any totally ordered MV-algebra. For x∈ C, either x≤ x or x ≤ x. If the former holds then x∧ x=x, so that if x is Boolean then x=0. If the latter holds then x∨ x=x, and thus x=1 if x is Boolean. In summary, if x∈ C is Boolean then x∈{0,1}. The converse implication is clear. Summing up, the Boolean elements of C are precisely 0 and 1.
Boolean elements, being definable by equational conditions, are preserved by homomorphisms. Hence if a is Boolean then a/∈ A/ is Boolean, and therefore, since A/ is totally ordered, a/∈{0,1} by the argument in the preceding paragraph. This proves the left-to-right implication in the statement of the lemma.
For the converse implication, we recall that in any MV-algebra A we have ⋂A={0} <cit.>. Hence, the function ι A ⟶∏_∈A A/ defined by a ∈ A ⟼ (a/)_∈∈∏_∈A A/ is an injective homomorphism. Assume that for each ∈A we have a/∈{0,1}. Since operations in ∏_∈A A/ are computed pointwise, we infer ι(a)∨ι(a)= (a/)_∈∨ (a/)_∈=1, and therefore, since ι is an isomorphism onto its range, a∨ a=1. This completes the proof.
Let A be an MV-algebra, and suppose there exist possibly empty closed subsets X_0,X_1 ⊆ Spec A with Spec A=X_0∪ X_1 and X_0∩ X_1=∅. Then there exists exactly one Boolean element b∈ A such that b/𝔭=0 for each 𝔭∈ X_0 and b/𝔭=1 for each 𝔭∈ X_1.
By <cit.>, there is exactly one ideal I_i of A such that V(I_i)=X_i, i=0,1. Consider the elements 0,1∈ A. The fact that Spec A is partitioned into X_0 and X_1 entails I_0∨ I_1=A and I_0∩ I_1={0}, so that the Chinese Remainder Theorem <cit.> applied to 0 and X_0, and to 1 and X_1, yields one element b∈ A such that b/I_0=0 and b/I_1=1. Using the Third Isomorphism Theorem, the latter conditions imply b/𝔭∈{0,1} for each 𝔭∈ Spec A so that by Lemma <ref> we conclude that b is Boolean. If b'∈ A also satisfies b'/𝔭=0 for each 𝔭∈ X_0 and b'/𝔭=1 for each 𝔭∈ X_1, then b/𝔭=b'/𝔭 for 𝔭∈ Spec A, so that b=b' because ⋂ Spec A={0} <cit.>.
We record a corollary that will have further use in the paper. It is the exact analogue for MV-algebras of a standard result for the category of rings, see e.g. <cit.>. In order to state it, let us write Clop X for the Boolean algebra of clopen sets of any topological space X. Let us then observe that the uniqueness assertion about the Boolean element b in Lemma <ref> allows us to define, for any MV-algebra A, a function
χ_A: Clop(Spec A) ⟶ 𝔅A
that assigns to each X_0∈ Clop(Spec A) the unique element b∈ A with the properties stated in that lemma with respect to X_0 and X_1 ≔ Spec A∖ X_0. It is then elementary to verify that χ_A is a homomorphism of Boolean algebras.
For any MV-algebra A, the function
ϕ_A: 𝔅A ⟶ Clop(Spec A)
that sends b∈𝔅A to V(b) ∈ Clop(Spec A) is an isomorphism of Boolean algebras whose inverse is the homomorphism χ_A in (<ref>). In particular, A is indecomposable if, and only if, Spec A is connected.
By Lemma <ref> it is clear that V(b) for each b∈𝔅A is clopen and that ϕ_A is a homomorphism. Let us consider b' ≔ χ_A ϕ_A b. For each 𝔭∈ V(b) we have b/𝔭=0 by definition of V, and b'/𝔭=0 by the defining property of b'. Similarly, for each 𝔭∈ Spec A∖ V(b) we have b/𝔭=b'/𝔭=1. Thus, b and b' agree at each prime and thus b=b' because ⋂ Spec A={0} <cit.>. Conversely, for X_0∈ Clop(Spec A), consider the clopen ϕ_A χ_A X_0. For 𝔭∈ Spec A, by definition of χ_A we have 𝔭∈ X_0 if, and only if, (χ_A X_0)/𝔭=0. Hence
ϕ_A (χ_A X_0)=X_0, and the proof is complete.
The radical of A is the ideal
⋂ Max A,
the intersection of all maximal ideals of A.
In accordance with standard terminology in general algebra, one says A is semisimple precisely when the radical of A is {0}.
We note in passing that, unless A is semisimple, the statement in Lemma <ref> cannot be strengthened to “a is Boolean if, and only if, for each 𝔪∈ Max A we have a/𝔪∈{0,1}⊆ A/𝔪”.
Let A be an MV-algebra, and suppose there exist possibly empty closed subsets X_0,X_1 ⊆ Max A with Max A=X_0∪ X_1 and X_0∩ X_1=∅. Then there exists exactly one Boolean element b∈ A such that b/𝔪=0 for each 𝔪∈ X_0 and b/𝔪=1 for each 𝔪∈ X_1.
By <cit.>, each ∈A is contained in exactly one λ∈A, so that we can define a function
λA ⟶A,
⟼λ.
By <cit.>, this function is continuous, and it is a retraction for the inclusion AA. Therefore, X'_0λ^-1[X_0] and X'_1λ^-1[X_1] are closed subsets of A satisfying A=X'_0∪ X'_1 and X'_0∩ X'_1=∅. Now Lemma <ref> provides a unique Boolean element b such that b/=0 for each ∈ X_0', and b/=1 for each ∈ X_1'. As X_i X_i', i=0,1, b satisfies the condition in the statement. Concerning uniqueness, suppose a is a Boolean element of A such that a/=0 for each ∈ X_0, and a/=1 for each ∈ X_1. We claim a=b. Indeed, let ∈ X_i', i=0,1. Then a/λ=i because λ∈ X_i. The inclusion λ induces a quotient map q A/→ A/λ. By Lemma <ref> we have a/∈{0,1}. Also, A/λ is nontrivial. Therefore since q(a/)=a/λ=i it follows that a/=i. By the uniqueness assertion in Lemma <ref> we conclude a=b.
We observe that the analogue of Lemma <ref> about coproduct decompositions of A being indexed by idempotent elements does not hold in general for rings. Indeed, spectra of MV-algebras always are completely normal—which affords the existence of the map λ used in the proof above—whereas spectra of rings are not, in general.
For more on the important rôle that the continuous retraction λ in (<ref>) plays in the theory of lattice-groups and MV-algebras, see <cit.> and the references therein.
Our next objective is to show that 𝔅 sends the unit η of the adjunction in (<ref>) to an isomorphism.
For any MV-algebra A, the morphism 𝔅η_A: 𝔅A→ 𝔅𝒞(Max A) is an isomorphism.
Let b'∈𝒞(Max A) be Boolean, with the aim of exhibiting b∈𝔅A such that η_A(b)=b'. Evaluating the defining equality b'⊕ b'=b' at each 𝔪∈ Max A we see that b'(𝔪)∈{0,1} holds. Therefore, the two closed subsets X_0 ≔ b'^-1[{0}] and X_1 ≔ b'^-1[{1}] of Max A satisfy the hypotheses of Lemma <ref>. We conclude that there exists one Boolean element b∈ A with b/𝔪=0 for 𝔪∈ X_0 and b/𝔪=1 for 𝔪∈ X_1. By the definition of η_A this entails at once η_A(b)=b', so 𝔅η_A is surjective. By the uniqueness statement in Lemma <ref>, 𝔅η_A is also injective.
Our next step will be to factor into a manner that is useful to our purposes.
Lemma <ref> implies that the functors → in the diagram below
[d]_-[r]^-
^ op[r]_- [u]_-
are naturally isomorphic.
The functor ^ op→ preserves all set-indexed coproducts.
Using Stone duality, it is an exercise to verify that the composite functor ^ op→ induces, by taking opposite categories on each side, a functor naturally isomorphic to the functor π_0→ of Section <ref>. The lemma then follows from Theorem <ref>.
We finally obtain the main result of this section.
The Pierce functor 𝔅 preserves all set-indexed coproducts.
As we saw above, the triangle (<ref>) commutes up to a natural isomorphism. Further, Max preserves arbitrary set-indexed colimits because it is a left adjoint by Theorem <ref>; and
the composite functor 𝔅𝒞 preserves set-indexed coproducts by Lemma <ref>. Hence 𝔅 preserves set-indexed coproducts.
§ MAIN RESULT, AND FINAL REMARKS
Let be a coextensive category.
Recall from the introduction that an object A in is separable if A is decidable as an object in the extensive . Thus, A is separable if, and only if, there is a morphism f A + A → A such that the span
A [l]_-∇ A + A [r]^-f A
is a product diagram.
Separable MV-algebras coincide with finite products of subalgebras of [0,1]∩ℚ.
By Theorem <ref> we have a reflection π_0 ⊣→ such that both adjoints preserve finite products and finite coproducts, so Proposition <ref> implies that every decidable object in is a finite coproduct of subterminal objects. Theorem <ref> completes the proof.
We conclude the paper with some final remarks that point to further research aimed at developing an ‘arithmetic connected-component functor’.
The guiding result from Algebraic Geometry is this: the category of étale schemes over K is reflective as a subcategory of that of locally algebraic schemes over K <cit.>. The left adjoint there is denoted by π_0, and π_0 X is called the k-schéma des composantes connexes de X
in Definition I, 4, 6.6 op. cit. Moreover, it is then proved that π_0 preserves finite coproducts.
In terms of extensive categories, this says that for =, the subcategory → has a finite-product preserving left adjoint.
We announce that the same holds for = _ fp, where _ fp is the category of finitely presentable MV-algebras. The proof will be published elsewhere, but it is appropriate to indicate here the rôle of locally finite MV-algebras in connection with that result.
An MV-algebra A is locally finite if each finitely generated subalgebra of A is finite. Finite MV-algebras are evidently locally finite; [0,1]∩ℚ is an example of a locally finite MV-algebra that is not finite. Locally finite MV-algebras were studied in <cit.>; see also <cit.> for a generalisation of the results in
<cit.>, and <cit.> for further material and <cit.> for recent progress on the topic.
The connection with Theorem <ref> is the following characterisation of rational algebras.
For any MV-algebra A the following are equivalent.
* A is simple and locally finite.
* A is a subalgebra of [0,1]∩ℚ.
(<ref>)⇒(<ref>). By Hölder's Theorem (Lemma <ref>), since A is simple there is exactly one monomorphism A→ [0,1]; let us therefore identify A with a subalgebra of [0,1]. If A contains an irrational number ρ∈ [0,1] then the subalgebra generated by ρ is infinite. Indeed, the Euclidean algorithm of successive subtractions applied to ρ,1∈ℝ does not terminate (because ρ and 1 are incommensurable) and produces an infinite descending sequence of distinct, non-zero elements of A. Thus, A ⊆ [0,1]∩ℚ by local finiteness.
(<ref>)⇒(<ref>). Any subalgebra of [0,1] evidently has no proper non-trivial ideal, by the Archimedean property of the real numbers, and is therefore simple. If, moreover, A⊆ [0,1]∩ℚ, the subgroup of ℝ generated by finitely many a_1,…,a_n∈ A together with 1 is discrete, and therefore by <cit.> the subalgebra generated by a_1,…,a_n is a finite chain. Thus A is locally finite.
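As a concrete illustration of the two implications just proved (this is an aid to intuition in Python, not part of the proof), one can close a rational generator of [0,1]∩ℚ under the MV-operations x⊕y = min(1, x+y) and ¬x = 1−x using exact rational arithmetic; the closure stabilises on a finite chain, whereas an irrational generator would keep producing new elements.

from fractions import Fraction

def mv_subalgebra(generators):
    # Closure of {0, 1} and the generators under x (+) y = min(1, x + y) and
    # (not x) = 1 - x, i.e. the MV-subalgebra of [0,1] ∩ Q they generate.
    elems = {Fraction(0), Fraction(1)} | {Fraction(g) for g in generators}
    while True:
        new = {min(Fraction(1), x + y) for x in elems for y in elems}
        new |= {Fraction(1) - x for x in elems}
        if new <= elems:
            return sorted(elems)
        elems |= new

print(mv_subalgebra(["1/3"]))        # the four-element chain 0 < 1/3 < 2/3 < 1
print(len(mv_subalgebra(["2/7"])))   # 8: the chain 0, 1/7, ..., 6/7, 1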
An MV-algebra A is separable if, and only if, A is locally finite and A is finite.
If A is separable then, by Theorem <ref>, A = ∏_i∈ I A_i with I finite and A_i ⊆ [0,1]∩ℚ for each i ∈ I.
In particular, A is finite.
Also, each A_i is locally finite by Lemma <ref>.
As finite products of locally finite algebras are locally finite, A is locally finite.
Conversely, assume that A is locally finite and A is finite.
Then, A = ∏_i∈ I A_i with I finite and A_i directly indecomposable for each i ∈ I.
As locally finite algebras are closed under quotients, each A_i is locally finite.
Hence, each A_i is locally finite and indecomposable.
But then A must be simple. Indeed, Corollary <ref> entails that A is connected, and A=A by <cit.>. Then the spectral space A is Hausdorff, and thus has a base of clopen sets—hence, being compact, it is a Stone space. Since Stone spaces are totally disconnected, connectedness of A entails that A is a singleton, so A has exactly two ideals, and so is simple.
By Lemma <ref>, A is then a subalgebra of [0,1]∩ℚ. Therefore, A is separable by Theorem <ref>.
Now, let → be the full subcategory determined by locally finite MV-algebras.
Let us prove that this subcategory is coreflective.
An element a of an MV-algebra A is of finite order-rank[The terminology we introduce here is best motivated using lattice-groups—please see Appendix <ref>.] if the subalgebra B it generates in A is finite. If B is terminal, we say the order-rank of a is zero. Otherwise, there exists exactly one n∈{1,2,…} such that B=C_1×⋯× C_n with each C_i directly indecomposable and non-terminal, and we then say the order-rank of a is n. We set
A{a ∈ A | a is of finite order-rank}.
Note that AA, because any Boolean algebra is locally finite.
For any MV-algebra A and subset G A, let us write G for the subalgebra of A generated by G. When G={g} we write g for {g}.
Any homomorphism of MV-algebras sends elements of finite order-rank to elements of finite order-rank.
Let h A→ B be a homomorphism and let a ∈A. Since h commutes with operations, a routine argument in general algebra shows that h[Sa]=(ha); since a is finite, so is (ha).
For any MV-algebra A, A is a locally finite subalgebra of A. Further, A is the inclusion-largest locally finite subalgebra of A.
Let F{a_1,…,a_n} A be a finite subset of elements of finite order-rank, n≥ 0 an integer. We need to show that the subalgebra F of A generated by F is finite. Induction on n. If n=0 then ∅ is either the terminal one-element algebra or the initial two-element algebra. Now suppose G{a_1,…, a_n-1} is such that G is finite. The subalgebra a_n is also finite, because a_n is of finite order-rank by hypothesis. The subalgebra F is the least upper bound of G and of a_n in the lattice of subalgebras of A, and therefore can be written as a quotient of the coproduct G+a_n. In more detail, by the universal property of the coproduct, the inclusion maps GF and a_nF induce a unique homomorphism hG+a_n→ A whose regular-epi/mono factorisation h=m q is such that m S→ A exhibits the subobject of A that is the join of the subobjects G and a_n—in particular, S is isomorphic to F. So F is a quotient of the algebra G+a_n. Since finite coproducts of finite MV-algebras are finite by <cit.>, G+a_n is finite and therefore so is F.
To show that A is a subalgebra of A, first note that clearly 0∈A. If a∈A then a lies in the subalgebra generated by a, which is finite; hence a is of finite order-rank. If a,b ∈A, then a⊕ b lies in the subalgebra generated by {a,b}, which is finite by the argument in the preceding paragraph; hence a⊕ b is of finite order-rank.
For the last assertion in the statement, let B be a locally finite subalgebra of A. Given any b ∈ B, the subalgebra generated by b in A is finite, by our assumption about B; hence b is of finite order-rank, and b∈ A. This completes the proof.
Lemmas <ref> and <ref> allow us to regard as a functor
⟶.
The functor ⟶ is right adjoint to the full inclusion ⟶.
This is an immediate consequence of the fact that A is the largest locally finite subalgebra of the MV-algebra A, as proved in Lemma <ref>.
It is proved in <cit.> that has all set-indexed products. This follows at once from Corollary <ref>: indeed, for any set-indexed family {A_i}_i ∈ I of locally finite MV-algebras the product of {A_i}_i ∈ I in is the coreflection (∏_i ∈ IA_i) of the product ∏_i ∈ IA_i in .
We have been unable to prove that → preserves finite products. However, writing for _ fp, we can show that the functor restricts to a left adjoint π_0 → to the inclusion
→ and, moreover, it preserves finite products.
As mentioned, the proof will appear elsewhere.
§ SEPARABLE UNITAL LATTICE-ORDERED ABELIAN GROUPS
For background on lattice-groups we refer to <cit.>. We recall that a lattice-ordered group, or ℓ-group for short, is a group that is also a lattice[In this appendix, lattices are only required to have binary meets and joins, but not top or bottom elements.] such that the group operation distributes over binary meets and joins. We only consider Abelian ℓ-groups, and thus adopt additive notation. The underlying group of an Abelian ℓ-group is torsion-free, and its underlying lattice is distributive. Write for the category of Abelian ℓ-groups and of lattice-group homomorphisms. An element 1∈ G in an Abelian ℓ-group is a (strong order) unit if for each g∈ G there is a natural number n such that n1≥ g. An Abelian ℓ-group G equipped with a distinguished unit 1 is called unital, and denoted (G,1). Write _1 for the category of unital Abelian ℓ-groups and of unit-preserving lattice-group homomorphisms.
There is a functor Γ_1→ that acts on objects by sending (G,1) to its unit interval [0,1]{x∈ G| 0≤ x ≤ 1}, and on morphisms by restriction; here, [0,1] is regarded as an MV-algebra under the operations x⊕ y (x+y)∧ 1, x 1-x, and 0. This functor has an adjoint Ξ→_1, and Mundici proved in <cit.> that Γ and Ξ constitute an equivalence of categories.
The initial object in _1 is (ℤ,1), and the terminal object is the trivial unital ℓ-group ({0=1}, 0).
In analogy with the relationship between non-unital and unital rings, the category has a zero object and is not coextensive, while the category _1 is. Separable unital Abelian ℓ-groups are defined as for any coextensive category, cf. the beginning of Section <ref>.
An object G of is Archimedean if whenever nx≤ y holds in G for each positive integer n, then x≤ 0; and an object (G,1) of _1 is called Archimedean if G is. The following characterisations hold: (G,1) is Archimedean precisely when Γ(G,1) is semisimple; and (G,1) is totally ordered and Archimedean precisely when Γ(G,1) is simple. Hölder's Theorem for the category _1 may be stated as follows: Any (G,1) that is Archimedean and totally ordered has exactly one morphism to (ℝ,1), and that morphism is monic (equivalently, its underlying function is injective).
Let us say that an object (G,1) of _1 is rational if it is isomorphic to an ordered subgroup of the additive group ℚ containing 1, where the order of G is inherited from the natural order of the rationals. Theorem <ref> may be then formulated for the category _1 as follows.
For any unital Abelian ℓ-group (G,1) the following are equivalent.
* (G,1) is rational.
* (G,1) is non-trivial, and the unique map (ℤ,1) → (G,1) is epic.
* The unique map (ℤ,1) → (G,1) is monic and epic.
* (G,1) is totally ordered and Archimedean, and the unique map (ℤ,1) → (G,1) is epic.
An object (G,1) of _1 is Specker if its unit-interval MV-algebra Γ(G,1) is a Boolean algebra. Write _1 for the full subcategory of _1 on the Specker objects. The inclusion functor _1→_1 has a right adjoint _1→_1, the Pierce functor for _1, and preserves arbitrary coproducts (Theorem <ref>). Our main result, Theorem <ref>, would be proved for the category _1 using this Pierce functor; it can be phrased as follows.
Separable unital Abelian ℓ-groups coincide with finite products of rational unital Abelian ℓ-groups.
Products in the category are Cartesian products, because is a variety of algebras. On the other hand, while _1 is equivalent to a variety by Mundici's cited theorem, its underlying-set functor is not right adjoint. Indeed, products in _1 are not, in general, Cartesian products. However, finite products in _1 are Cartesian—the product of (G,1) and (H,1) is (G× H, (1,1)) with the Cartesian projections.
An Abelian ℓ-group is called a simplicial group if it is isomorphic in to a free Abelian group of finite rank ℤ^r equipped with the coordinatewise order. A unit in such a simplicial group is then any element 1∈ℤ^r each of whose coordinates is strictly positive; the pair (ℤ^r,1) is called a unital simplicial group. These lattice-groups play a key rôle in the representation theory of dimension groups, see e.g. <cit.>.
An object (G,1) in _1 is a unital simplicial group exactly when its unit-interval MV-algebra Γ(G,1) is finite. An object (G,1) is locally simplicial if each sublattice subgroup generated by finitely many elements along with 1 is a unital simplicial group. An object (G,1) in _1 is locally simplicial exactly when its unit-interval MV-algebra Γ(G,1) is locally finite. Then: An object (G,1) of _1 is separable just when it is locally simplicial, and (G,1) has finite ℤ-module rank[In the literature on lattice-groups, the condition that (G,1) has finite rank is expressed in the following traditional manner: the unit of G has finitely many components.] (Corollary <ref>).
Write _1 for the full subcategory of _1 on the locally simplicial objects. The inclusion functor _1→_1 has a right adjoint _1→_1 (Corollary <ref>); that is, every (G,1) has an inclusion-largest locally simplicial unital sublattice subgroup. To prove this in the category _1 one would introduce the notion of element of `finite-order rank' of a unital Abelian ℓ-group. It is this notion that motivates the terminology we adopted in the context of MV-algebras in Section <ref>; by way of conclusion of this appendix, we offer a short discussion.
Let (G,1) be a unital Abelian ℓ-group, let g∈ G, and let H be the sublattice subgroup of G generated by g and by 1. If (H,1) is a unital simplicial group (ℤ^r,1)—equivalently, if the MV-algebra Γ(H,1) is finite—then we call g an element of finite order-rank r. This notion of rank crucially depends on the interplay between the lattice and the group structure, and is not reducible to the linear notion of rank. To explain why, let us preliminarily observe that a simplicial group ℤ^r enjoys the finiteness property that its positive cone (ℤ^r)^+—that is, the monoid of non-negative elements of ℤ^r—is finitely generated as a monoid. Next, let us point out that the underlying group of the Abelian ℓ-group H generated by g and 1 in G is necessarily free: indeed, any finitely generated object of has free underlying group, as was proved in <cit.>. The ℤ-module rank of H is at most countably infinite, because H is countable. But even if we assume the rank of H is finite, the unit-interval Γ(H,1) may be infinite, and in that case the lattice order of ℤ^r≅ H cannot be simplicial—and indeed, one can prove that the monoid H^+ cannot be finitely generated. Hence, the condition that the sublattice subgroup H of G generated by g and 1 is simplicial is strictly stronger than the condition that H has finite ℤ-module rank. To illustrate, consider the subgroup H of ℝ generated by an irrational number ρ∈ℝ together with 1; then H≅ℤ^2 as groups, the total order inherited by ℤ^2 from ℝ is palpably not simplicial, the positive cone H^+ can be shown not to be finitely generated by an easy direct argument, and Γ(H,1) is an infinite simple MV-algebra.
amsplain
|
http://arxiv.org/abs/2307.04000v1 | 20230708155126 | Synthesis of resonant modes in electromagnetics | [
"Antonello Tamburrino",
"Carlo Forestiere",
"Giovanni Miano",
"Guglielmo Rubinacci",
"Salvatore Ventre"
] | physics.optics | [
"physics.optics",
"physics.class-ph"
] |
Department of Electrical and Information Engineering M. Scarano, Università degli Studi di Cassino e del Lazio Meridionale, Via G. Di Biasio n. 43, 03043 Cassino (FR), Italy.
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI-48824, USA.
e-mail: [email protected]
Department of Electrical Engineering and Information Technology, Università degli Studi di Napoli Federico II, via Claudio 21, Napoli, 80125, Italy
Department of Electrical and Engineering Information M. Scarano, Università degli Studi di Cassino e del Lazio Meridionale, Via G. Di Biasio n. 43, 03043 Cassino (FR), Italy.
Resonant modes determine the response of electromagnetic devices, including dielectric and plasmonic resonators. Relying on the degrees of freedom that metamaterials provide, this contribution shows how to design, at will, the resonant modes of a dielectric object placed in an unbounded space. Specifically, the proposed method returns in analytical form the spatial distribution of the dielectric susceptibility tensor for which the object exhibits resonances at prescribed frequencies and spatial distribution of the polarization. Together with the synthesis of the material, two key concepts are introduced: the controlled tunability of the resonant modes and the number of essential modes, i.e. the number of modes that uniquely characterize the spatial distribution of the dielectric susceptibility.
Moreover, this approach can be applied to design the resonant modes of any system where the constitutive relationship is linear and local.
Synthesis of resonant modes in electromagnetics
Salvatore Ventre
August 12, 2023
================================================
Media with a spatially inhomogeneous refractive index have fascinated humankind for millennia, exhibiting counter-intuitive effects such as mirages or fata morgana. Archaeological evidence indicates that humans learned to engineer refractive-index variations to make lenses already in antiquity, several millennia ago. More recently, nano-fabrication techniques, the discovery of materials with tunable permittivity, and the introduction of the metamaterial concept <cit.> have greatly expanded the landscape of feasible permittivity distributions for electromagnetic design.
Using the degrees of freedom in the choice of the materials, it is possible to control the electromagnetic field, as shown by Pendry et al. <cit.> with the introduction of transformation optics <cit.>. They showed that the permittivity and permeability effectively determine a curved spatial geometry for the electromagnetic field. Leveraging this analogy, they showed how anisotropic and inhomogeneous permittivity and permeability profiles can redirect the electromagnetic field in a prescribed way. Recently, several optimization methods have been introduced to design materials to achieve a prescribed electromagnetic response, incorporating at the same time fabrication constraints
<cit.>.
In this manuscript, we take a fresh path to the design of the electromagnetic resonances of a scatterer, which play a central role in electromagnetic devices, e.g. <cit.>. Plasmonic and dielectric nano-resonators are an interesting example. When the resonance condition is met, the near-field and far-field characteristics of the device are dominated by the corresponding resonant mode.
We introduce a theoretical framework that enables the synthesis of the spatial distribution of the permittivity profile of a dielectric object, to design its resonant modes, i.e. its polarization current density distributions. The designer preliminarily specifies, in the spatial domain occupied by the object, one or several modes, together with the corresponding resonant frequencies. Then, the synthesis process returns the possibly inhomogeneous and anisotropic permittivity profile which guarantees that the dielectric object exhibits the prescribed modes at the specified resonance frequencies. It is a direct method: it does not require the use of any optimization approach, but explicitly returns the analytical solution in a single step. The synthesis approach leverages a formulation of the generalized eigenvalue problem in which the contributions of the material and of the electromagnetic field are separated. Yet, this approach is very general: it can be applied to any system where the constitutive relationship is linear and non-spatially dispersive. For instance, it can be used to design the properties of an elastic material to control its vibrational modes.
In addition, the proposed framework allows one to clearly identify the physical feasibility and the limitations inherent to the problem of the design of the modes. The main outcome is that the maximum number of modes (essential modes) that can be prescribed at a given resonance frequency is equal to the dimension of the problem (two for a 2D problem and three for a 3D problem). These are inherent physical limits unveiled by the proposed framework.
Finally, we also address the problem of tunability: by scaling the dielectric susceptibility, we can completely change the resonance properties in a controlled way. This feature enables the design of tunable materials, where one can adapt the response of the material dynamically, according to specific needs.
§ MODES AND EIGENVALUE PROBLEM
We consider a linear, nonmagnetic and non-spatially dispersive dielectric of finite size, shown in Fig. <ref>. We denote the space occupied by the dielectric by Ω, its boundary by ∂Ω, and the unit vector normal to ∂Ω pointing outward by 𝐧.
Under these assumptions, the polarization density 𝐏 is given by 𝐏(𝐫,ω) = ε_0 χ(𝐫,ω)·𝐄(𝐫,ω), where χ is the dielectric susceptibility tensor, ω is the angular frequency (the e^jω t time behavior is assumed), ε_0 is the vacuum permittivity, and · denotes the usual dot product between tensors and vectors.
When the dielectric scatterer is excited by an external electric field 𝐄^i, the total electric field 𝐄 can be written as the sum of 𝐄^i and of the reaction field 𝐄^P due to the presence of the polarization current density jω𝐏. The constitutive relation can be written as
1/ε_0 γ(𝐫,ω)·𝐏(𝐫,ω) - 𝐄^P(𝐫,ω) = 𝐄^i(𝐫,ω) in Ω,
where the tensor γ is the pointwise inverse of χ, i.e. γ(𝐫,ω) = χ^-1(𝐫,ω).
Let ℰ( ω)
be the operator giving the electric field produced by a prescribed polarization
density field 𝐏 radiating in the free space at frequency ω <cit.>:
𝐄^P( 𝐫) =jω∫_Ω𝐆
( 𝐫-𝐫^') 𝐏( 𝐫^') dS^'
where 𝐆 is the proper electric-electric dyadic Green function.
For any prescribed angular frequency ω, the electromagnetic scattering is governed by the integral equation
1/ε_0 γ·𝐏 - ℰ(ω) 𝐏 = 𝐄^i in Ω.
Two particularly significant auxiliary eigenvalue problems can be defined starting from Eq. <ref>, setting the exciting field to zero, and assigning the material tensor γ.
Quasi Normal Modes <cit.> (QNM) are nontrivial solutions ω and 𝐏 of
ℰ(ω) 𝐏 = 1/ε_0 γ·𝐏 in Ω.
QNM are often used to characterize micro- and nano- resonators <cit.>, enabling the calculation of synthetic parameters such as the quality factor, the mode volume <cit.>, and the Purcell factor. QNM are also used to expand the response of micro-nanoresonators by <cit.> highlighting the contribution of the individual modes in the overall scattering response. The eigen-frequencies ω are complex numbers, i.e. ω∈ℂ, and (ω, 𝐏) forms a (generalized)
eigenvalue/eigenvector pair.
Material Modes are nontrivial solution ξ∈ℂ and 𝐏 of
ℰ(ω) 𝐏 = ξ 1/ε_0 γ·𝐏 in Ω,
where the frequency ω∈ℂ is prescribed.
ξ and 𝐏 form a (generalized)
eigenvalue/eigenvector pair.
These modes, for ω∈ℝ and a uniform and isotropic material (χ(𝐫) = χ, a scalar constant in Ω), have already been investigated in <cit.>, and have been used to expand the electromagnetic response of nano-resonators <cit.>, and also to design the scalar permittivity of a homogeneous object to achieve a prescribed scattering response, such as scattering cancellation or maximization <cit.>.
In this work χ may be non-uniform and/or non-isotropic, and ω may be complex. The characteristic feature of the eigenvalue/eigenvector pair for (<ref>) is to be a homogeneous function of γ, i.e. if γ' = αγ then
𝐏' = 𝐏 , 1/ξ' = α 1/ξ
is an eigenvalue/eigenvector pair for γ'. Specifically, the eigenvector 𝐏 is a 0-degree homogeneous function, whereas the reciprocal of the eigenvalue ξ is a 1-degree homogeneous function.
After this property, we term these modes as Homogeneous Material Modes. Homogeneous Material Modes have been successfully introduced in low-frequency electromagnetism for eddy current tomography <cit.>.
A unique feature of Material Modes and, more generally, of Homogeneous Material Modes, is that, since the eigenvalue ξ and the eigenvector are homogeneous functions of the material tensor, it is possible to tune the electromagnetic system to different resonant modes by scaling the susceptibility. This feature, which we call tunability, opens the door to a systematic design of reconfigurable materials and will be discussed in detail in a subsequent Section.
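This scaling behaviour can be checked on a small discretized surrogate of Eq. (<ref>). In the Python sketch below, random matrices are used purely as stand-ins for the discretized operator ℰ(ω) and for the contraction with (1/ε_0)γ: replacing γ by αγ leaves the eigenvectors unchanged and rescales 1/ξ by α, as stated above.

import numpy as np

rng = np.random.default_rng(0)
n = 6
E = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # stand-in for the discretized E(omega)
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # stand-in for the (1/eps0) gamma contraction
alpha = 2.5

xi = np.linalg.eigvals(np.linalg.solve(B, E))                 # material modes: E p = xi B p
xi_scaled = np.linalg.eigvals(np.linalg.solve(alpha * B, E))  # gamma -> alpha * gamma

# 1/xi is a 1-degree homogeneous function of gamma: 1/xi_scaled = alpha * (1/xi),
# i.e. xi_scaled = xi / alpha, while the eigenvectors are unchanged.
print(np.allclose(np.sort_complex(xi_scaled), np.sort_complex(xi / alpha)))   # True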
§ SYNTHESIS OF MODES (SOM)
In this Section, we introduce a theoretical framework enabling the synthesis of the dielectric susceptibility tensor χ = χ(𝐫,ω) of the object, such that it exhibits the set of resonance modes {(ω_k,ξ_k,𝐏_k)}_k=1… N at prescribed frequencies ω_k. Each individual mode is described by the triplet (ω_k,ξ_k,𝐏_k). Hereafter, ω_k is referred to as the frequency eigenvalue, ξ_k as the material eigenvalue, and 𝐏_k as the spatial mode. The problem consists in solving, for a proper γ_k(𝐫) = γ(𝐫,ω_k), the set of equations imposing the modes
ℰ(ω_k) 𝐏_k = ξ_k 1/ε_0 γ_k·𝐏_k in Ω, for k=1, …, N.
The synthesis is carried out in two steps. First, we solve the problem at each prescribed angular frequency ω_k, by evaluating γ_k as the solution of (<ref>). Then, we interpolate in the frequency domain the collection of tensors χ_1, …, χ_N, where χ_k = γ_k^-1.
Hereafter, we consider the scenario where the electromagnetic problem is x_3-invariant and the electric field is transverse to the x_3-axis. This is a 2D case where the tensor χ is of the type χ(𝐫,ω) = ∑_l,m=1^2 χ_lm(𝐫,ω) 𝐞_l 𝐞_m, the electric field is 𝐄(𝐫,ω) = E_1(𝐫,ω) 𝐞_1 + E_2(𝐫,ω) 𝐞_2, 𝐫 = x_1 𝐞_1 + x_2 𝐞_2, and 𝐞_1 and 𝐞_2 are the unit vectors along the x_1 and x_2 directions, respectively. The elements of the Green function are given in Appendix <ref>.
§.§ Synthesis of Modes at a prescribed angular frequency
Given a prescribed angular frequency ω_k, we distinguish two cases:
(i) a single mode is prescribed or (ii) two modes are prescribed. In a 3D setting, one also has to include the third case in which three modes are prescribed. The treatment of this case is nothing but a straightforward extension of the one needed when two modes are prescribed.
Single mode case. Let (ω_k,ξ_k,𝐏_k) be an individual prescribed resonance mode at frequency ω_k, where ω_j ≠ ω_k for j ≠ k. The solution of equation (<ref>) can be expressed in explicit form as
γ_k(𝐫) = ε_0/( ξ_k |𝐏_k(𝐫)|^2 ) 𝐄_k(𝐫) 𝐏_k^∗(𝐫) + α_k(𝐫) 𝐯_k(𝐫) 𝐩_k^∗(𝐫),
where ∗ is the complex conjugate operation, 𝐄_k = ℰ(ω_k) 𝐏_k, 𝐩_k(𝐫) ⊥ 𝐏_k(𝐫) for almost every (a.e.) 𝐫 ∈ Ω [Here 𝐚(𝐫) ⊥ 𝐛(𝐫) means that 𝐚^∗(𝐫)·𝐛(𝐫) = 0.], 𝐯_k is an arbitrary vector field and α_k is an arbitrary scalar field. The solution γ_k given in equation (<ref>) can be easily verified by plugging it into equation (<ref>).
A possible choice for 𝐩_k is 𝐩_k = ℛ𝐏_k^∗, where ℛ is the 90^∘ rotation operator in the counterclockwise direction. We notice that ℛ𝐏_k^∗(𝐫) = 𝐏_k^∗(𝐫)×𝐞_3, where 𝐞_3 is the unit vector along the x_3 direction.
Finally, we highlight that by means of the explicit solution of equation (<ref>) one can easily check whether γ_k is bounded or continuous. Specifically, if 𝐄_k and 𝐏_k are continuous (piecewise continuous) and |𝐄_k|/|𝐏_k| is bounded, then γ_k is continuous (piecewise continuous).
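The verification mentioned above amounts to a pointwise algebraic identity, which can be checked numerically. In the Python sketch below, 𝐄_k(𝐫), 𝐏_k(𝐫), 𝐯_k(𝐫) and α_k(𝐫) at one location are replaced by arbitrary complex stand-ins; the synthesized γ_k then reproduces γ_k·𝐏_k = (ε_0/ξ_k) 𝐄_k, i.e. Eq. (<ref>) evaluated at that point.

import numpy as np

rng = np.random.default_rng(1)
eps0 = 8.8541878128e-12
xi = 1.5 + 0.2j                                              # prescribed material eigenvalue

# pointwise data at one location r (arbitrary stand-ins for the actual fields):
P = rng.standard_normal(2) + 1j * rng.standard_normal(2)    # prescribed mode P_k(r)
E = rng.standard_normal(2) + 1j * rng.standard_normal(2)    # E_k(r) = [E(omega_k) P_k](r)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)    # arbitrary vector field v_k(r)
alpha = 0.7 - 0.3j                                           # arbitrary scalar field alpha_k(r)
p = np.array([np.conj(P[1]), -np.conj(P[0])])                # a choice of p_k with p_k^* . P_k = 0

# gamma_k(r) from the single-mode synthesis formula (dyads written as outer products a b^*)
gamma = eps0 * np.outer(E, np.conj(P)) / (xi * np.vdot(P, P)) + alpha * np.outer(v, np.conj(p))

# it reproduces the prescribed mode: gamma_k . P_k = (eps0 / xi_k) E_k
print(np.allclose(gamma @ P, (eps0 / xi) * E, atol=0.0))     # True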
We conclude this Section with a remark about the scalar case.
When 𝐄_k ∥ 𝐏_k, i.e. 𝐏_k(𝐫) = ε_0 χ_k(𝐫) 𝐄_k(𝐫) with χ_k a scalar field, Eq. (<ref>) returns a scalar susceptibility tensor (homogeneous material):
γ_k = 1/(ξ_k χ_k) ℐ,
where ℐ is the unit dyad. Indeed, Eq. (<ref>) follows from (<ref>) by choosing 𝐩_k(𝐫) = 𝐏_k^∗(𝐫)×𝐞_3, 𝐯(𝐫) = 𝐄_k^∗×𝐞_3, α_k(𝐫) = χ_k^∗(𝐫)/χ_k(𝐫), and by observing that 𝐮𝐮^∗ + (𝐮^∗×𝐞_3)(𝐮×𝐞_3) gives the (2D) unit dyad ℐ when 𝐮 is an arbitrary unit vector. In this case, the prescribed mode is a material independent mode <cit.>.
Two isofrequential modes.
Let ω_1 = ω_2 ≠ ω_j for j > 2, and let (ω_1,ξ_1,𝐏_1) and (ω_2,ξ_2,𝐏_2) be the prescribed resonance modes. Let the solution be expressed as
γ_1(𝐫) = ∑_l,m=1^2 Γ_lm(𝐫) 𝐔_l(𝐫) 𝐏_m^∗(𝐫),
where Γ_lm( 𝐫) ∈ℂ and
𝐔_l = ε_0ℰ (ω_1) 𝐏_l/ξ_l, l=1,2.
To find the unknown coefficients Γ_lm, we observe that by imposing Eq. (<ref>) on the two prescribed resonance modes we have:
𝐔_r ( 𝐫)= γ_1 ( 𝐫) ·𝐏_r ( 𝐫) for a.e. 𝐫∈Ω, and r=1,2.
Then, by left multiplying this expression by 𝐔^∗_s( 𝐫), we have
𝐔_s^∗·𝐔_t=∑_l,m=1^2(
𝐔_s^∗·𝐔_l) Γ_lm(
𝐏_m^∗·𝐏_t) in Ω, s,t=1,2,
which, in matrix form, gives
𝐆_U( 𝐫) = 𝐆_U( 𝐫) Γ( 𝐫) 𝐆_P( 𝐫),
where ( G_U) _st=𝐔_s^∗·𝐔_t,
( G_P) _ik=𝐏_i^∗·𝐏_k and
Γ is the matrix made by the unknown coefficients Γ_lm.
When both 𝐆_U and 𝐆_P are invertible at location 𝐫, the solution of (<ref>) exists, is unique and is given by
Γ( 𝐫) = 𝐆_P^-1( 𝐫).
In the remaining cases, i.e. when 𝐆_P and/or 𝐆_U are not invertible, the solution may not exist or may not be unique.
It is worth noting that matrices 𝐆_U and 𝐆_P are Gram matrices and, therefore, 𝐆_U=𝐆_U^†, 𝐆
_U≥ 0, 𝐆_P=𝐆_P^† and
𝐆_P≥ 0.
Moreover, the inverse of (<ref>) is (when it exists)
χ = ∑_l,m=1^2 Γ^D_ml 𝐏_m 𝐔_l^∗,
where Γ^D = ( 𝐆_U^I 𝐆_P )^-1.
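At a point where 𝐆_P is invertible, the two interpolation conditions can be checked directly. In the Python sketch below, the two prescribed modes 𝐏_1, 𝐏_2 and the fields 𝐔_1, 𝐔_2 at one location are arbitrary complex stand-ins; with Γ = 𝐆_P^-1, the synthesized γ_1 satisfies γ_1·𝐏_r = 𝐔_r for r = 1, 2.

import numpy as np

rng = np.random.default_rng(2)

# pointwise stand-ins at one location r for the two prescribed modes P_l and
# for the fields U_l = eps0 * E(omega_1) P_l / xi_l (arbitrary complex data here)
P = [rng.standard_normal(2) + 1j * rng.standard_normal(2) for _ in range(2)]
U = [rng.standard_normal(2) + 1j * rng.standard_normal(2) for _ in range(2)]

G_P = np.array([[np.vdot(P[i], P[k]) for k in range(2)] for i in range(2)])  # (G_P)_ik = P_i^* . P_k
Gamma = np.linalg.inv(G_P)

# gamma_1(r) = sum_{l,m} Gamma_lm U_l P_m^*
gamma = sum(Gamma[l, m] * np.outer(U[l], np.conj(P[m])) for l in range(2) for m in range(2))

# both interpolation conditions gamma_1 . P_r = U_r are met
print(np.allclose(gamma @ P[0], U[0]), np.allclose(gamma @ P[1], U[1]))   # True True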
§.§ Parameterization of the frequency response
Once the inverse of the susceptibility tensor is found at each each prescribed angular frequency
ω_k, we need to reconstruct the dispersion relation ( 𝐫,ω), which has to satisfy the causality throught the Kramers-Kronig conditions and the Hermitian symmetry, namely ( 𝐫,-ω)=^* ( 𝐫,ω). To this purpose, we parameterize the dispersion relation, as follows
( 𝐫,ω) =∑_m=1^M𝐚
_m( 𝐫) φ_m( ω)
where M is the number of terms, each expansion function φ_m is causal and Hermitian and each tensor field 𝐚_m is real. The φ_ms depend on the actual realization of the artificial material. A possible choice consists in assuming each expansion function φ_m of the
Lorentz-Drude type:
φ_m( ω) =ω_p,m^2/( ω_0,m
^2-ω^2) +jωβ_m,
where causality requires β_m>0.
Tensor fields 𝐚_m can be found by point matching, for instance. Within this approach, we enforce the following constraints ∀ k = 1, …, N:
∑_m=1^M 𝐚_m(𝐫) Re{φ_m(ω_k)} = Re{γ_k^-1(𝐫)} ,
∑_m=1^M 𝐚_m(𝐫) Im{φ_m(ω_k)} = Im{γ_k^-1(𝐫)} ,
where Re{·} and Im{·} denote the real and imaginary parts of their argument, respectively. Moreover, from (<ref>) and (<ref>), it follows that M = 2N is required for existence and uniqueness of the solution in terms of the unknown tensor fields 𝐚_m.
We remark that the parameters ω_p,m, ω_0,m and β_m depend on the actual realization of the artificial material. For instance, ω_0,m does not need to be equal to the resonant (angular) frequency ω_m prescribed for the Synthesis of the Modes. In the remainder of the paper we select the parameters ω_p,m, ω_0,m and β_m so as to avoid the appearance of any resonance due to the expansion functions at the resonant frequencies prescribed for the Synthesis of the Modes.
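At each spatial point (and for each tensor component), the point-matching conditions above form a 2N × 2N real linear system in the coefficients a_m. The Python sketch below assembles and solves such a system for N = 2; the pole positions, the synthesis frequencies and the target values γ_k^-1 are placeholder numbers chosen only for illustration.

import numpy as np

# Lorentz-Drude expansion functions phi_m (illustrative parameters; the actual
# omega_{0,m}, omega_{p,m}, beta_m depend on the realization of the material)
w0 = np.array([1.0, 1.3, 1.6, 1.9]) * 1e15            # rad/s
wp, beta = w0.copy(), 0.1 * w0
phi = lambda w: wp**2 / ((w0**2 - w**2) + 1j * w * beta)   # array of the M functions at frequency w

# N prescribed synthesis frequencies and target values chi_k = gamma_k^{-1}
# (one tensor component at one point; arbitrary complex numbers as stand-ins)
w_syn = np.array([1.1, 1.5]) * 1e15
chi_target = np.array([2.0 - 0.4j, 1.2 - 0.1j])

# point-matching system: real and imaginary parts at each synthesis frequency
A = np.vstack([np.vstack([phi(w).real, phi(w).imag]) for w in w_syn])   # shape (2N, M), M = 2N
b = np.concatenate([[c.real, c.imag] for c in chi_target])
a = np.linalg.solve(A, b)                              # real coefficients a_m

# check: the interpolated chi(omega_k) reproduces the targets
print(np.allclose([a @ phi(w) for w in w_syn], chi_target))   # True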
§ TUNABILITY AND ESSENTIAL MODES
The tunability of the resonance refers to the possibility of changing the properties of a material in a controlled manner. The Synthesis of Modes entails tunability in a natural manner via the material eigenvalues ξ_k.
Indeed, after (<ref>), we have that a material with dielectric susceptibility given by χ/ξ_k, where χ is the result of the synthesis of modes, resonates at the angular frequency ω_k. In other terms, we can control the frequency behaviour of a material (the values of the resonance frequencies and the spatial distributions of the related modes) by simply scaling χ by a proper factor.
From another perspective, the proposed approach to the synthesis of the modes allows one to obtain the resonance frequencies and related spatial modes as a function of an individual parameter: a scaling factor in front of the synthesized χ.
This feature open the door to a systematic design of reconfigurable materials.
The concept of essential modes refers to the maximum number of modes that can be arbitrarily prescribed at a given angular frequency ω_k. Equation (<ref>) provides the values of the Γ_lm giving the sought inverse of the dielectric susceptibility tensor in (<ref>). This equation sheds light on a special and not obvious physical feature of the modes: two modes are capable of uniquely defining the material property of the scatterer at the prescribed angular frequency. In other words, γ(·,ω_k) is in a one-to-one correspondence with two of its modes at ω_k. From another perspective, only two modes can be assigned in a completely independent manner or, equivalently, all the modes depend upon two arbitrarily selected modes, at a prescribed angular frequency.
We term two arbitrary modes in a one-to-one correspondence with χ(·,ω_k) as essential modes.
It is worth noting that the number of essential modes is two in a 2D problem and three in a 3D problem.
§ APPLICATION OF THE THEORY OF SYNTHESIS OF MODES
In this Section, we show the effectiveness of the resonance synthesis method by means of three application examples. We demonstrate (i) the capability of the method to synthesise several modes, each one having a prescribed polarization density distribution at prescribed frequencies, (ii) the tunability of the resonant response by a proper scaling of the dielectric susceptibility tensor, and (iii) the concept of essential modes. In the first two examples, the reference geometry is an indefinite cylinder with square L× L cross-section with L=10 cm under plane-wave illumination. In the third example, the geometry consists of a coated spherical gold nanoparticle.
The numerical model for solving the electromagnetic problem is derived from Ref. <cit.>. The parameters of the Lorentz-Drude expansion functions φ_k, introduced in Eq. (<ref>), are given in Table <ref>. The plot of each individual expansion function is shown in Figure <ref>. The positions of the peaks of the expansion function are uniformly spaced over the bandwidth of interest. We assume ω_p,k=ω_0,k and β_k=0.1ω_0,k. With this latter choice, each expansion function is localized in a neighborhood of its peak position, but does not present a sharp resonance that could hide those arising from the Synthesis of Modes. The amplitude and the shape of the expansion function are briefly discussed in Appendix <ref>.
Synthesis of the modes. In this first application, we prescribe the modes at the three angular frequencies shown in Table <ref>. Specifically, at the angular frequency ω_1 we prescribe two modes: the first one has a polarization density field 𝐏_0, whose shape resembles the number “0" and it is associated with the eigenvalue ξ_A=1; the second mode has a polarization density field 𝐏_1, whose shape resembles the number “1" and it is associated with the eigenvalue ξ_B=2. At the angular frequency ω_2, we prescribe the modes 𝐏_1 and 𝐏_2, where 𝐏_2 has a shape which resembles number 2. Modes 𝐏_1 and 𝐏_2 are associated with eigenvalues ξ_A=1 and ξ_B=2, respectively. Finally, at the angular frequency ω_3 we prescribe modes 𝐏_2 and 𝐏_0, associated with eigenvalues ξ_A=1 and ξ_B=2, respectively. Tables <ref> and <ref> summarize these choices.
The synthesis is carried out in two steps: i) we evaluate γ_i ( 𝐫) at the three prescribed frequencies; ii) we interpolate the corresponding dielectric susceptibility as in Eq. (<ref>), by solving (<ref>) and (<ref>).
In the first step, the theory for the synthesis of two isofrequential modes
is applied at each individual angular frequency using equation (<ref>): (i) for (ω_1, ξ_A, 𝐏_0) and (ω_1, ξ_B, 𝐏_1) at ω_1, (ii) for (ω_2, ξ_A, 𝐏_1) and (ω_2, ξ_B, 𝐏_2) at ω_2 and (iii) for (ω_3, ξ_A, 𝐏_2) and (ω_3, ξ_B, 𝐏_0) at ω_3.
Figures <ref>, <ref>, and <ref> show the real and imaginary part of every element of the relative dielectric permittivity tensor ε_R,k=χ_k+1, at ω_1, ω_2, and ω_3, respectively.
To validate the proposed method, we performed two tests, where the dielectric susceptibility profile is either χ^𝙰 ( 𝐫, ω ) = χ ( 𝐫, ω ) / ξ^𝙰 or χ^𝙱 ( 𝐫, ω ) = χ ( 𝐫, ω ) / ξ^𝙱, where χ ( 𝐫, ω ) is the outcome of the synthesis of modes.
The first test was a direct test and it consisted in i) computing the modes at the three frequencies and in ii) comparing them with the prescribed polarization density field. This test was passed successfully.
As a second test, we evaluate the induced polarization density fields at the three frequencies ω_1, ω_2, and ω_3, when the cylinder is excited by a linearly polarized plane wave propagating along the horizontal axis. These polarization fields are shown in Fig. <ref>(a-c) assuming a susceptibility tensor χ^𝙰(𝐫,ω) and in Fig. <ref>(d-f) for χ^𝙱.
The induced polarization density fields are very close to the prescribed modes. In quantitative terms, Table <ref> shows the 2-norm of the relative difference between the actual 𝐏 and its projection along the subspaces generated by the prescribed modes, at each specific angular frequency:
ρ_k^i = ‖𝐏_i( ·,ω_k) -Π^i_k𝐏_i( ·, ω_k) ‖/‖𝐏_i( ·, ω_k ) ‖
with k=1,2,3 and i=𝙰,𝙱. In (<ref>), 𝐏_𝙰(·,ω_k) and 𝐏_𝙱(·,ω_k) are the polarization vectors at ω_k for materials 𝙰 and 𝙱, and Π^𝙰_k and Π^𝙱_k are the projectors onto the linear spaces spanned by the prescribed modes at the k-th angular frequency ω_k for materials 𝙰 and 𝙱. The details about the projectors Π^𝙰_k and Π^𝙱_k are given in Table <ref>.
We stress that
𝐏_i ( ·, ω_k) is the polarization vector for the
physical system under the prescribed illumination at ω_k.
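Once the polarization field and the prescribed modes are available on a common spatial grid, residuals such as ρ_k^i reduce to a least-squares projection. The Python sketch below uses random complex vectors as stand-ins for the discretized fields, purely to illustrate the computation.

import numpy as np

rng = np.random.default_rng(3)
n = 200                                   # size of the discretized polarization vector (illustrative)
M = rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))   # columns: the two prescribed modes
P = M @ (rng.standard_normal(2) + 1j * rng.standard_normal(2))       # a field lying in their span
P += 1e-3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))   # small out-of-span component

coeff, *_ = np.linalg.lstsq(M, P, rcond=None)    # orthogonal projection onto the span of the modes
rho = np.linalg.norm(P - M @ coeff) / np.linalg.norm(P)
print(rho)                                        # small residual, analogous to the tabulated values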
This example clearly illustrates the concept of tunability of the resonant response: by just uniformly halving the value of the susceptibility distribution (passing from χ^𝙰 to χ^𝙱), the resonant modes at the peaks change from the ordered sequence 0, 1, 2 to 1, 2, 0.
Tunability. In this second application we determine the dielectric susceptibility by synthesizing at the frequency ω_1 the degenerate modes 𝐏_∧ and 𝐏_∨, whose polarization density field distributions resemble the characters ∧ and ∨, respectively, and at ω_2 the degenerate modes 𝐏_- and 𝐏_|, whose prescribed field distributions resemble the characters - and |, respectively. To validate the performed synthesis, we excite the infinite cylinder with a plane wave polarized along (𝐞_1+𝐞_2)/√(2). We show the real and imaginary parts of the induced polarization field distributions at ω_1 in Figures <ref>(c), (d), and at ω_2 in Figures <ref>(g), (h). It is immediately apparent that at ω_1 the induced polarization field is a linear combination of the two prescribed degenerate modes 𝐏_∧ and 𝐏_∨, while at ω_2 the induced polarization field is a linear combination of 𝐏_- and 𝐏_|. From the quantitative perspective, the 2-norm relative difference ρ between the actual 𝐏 and its projection along the subspaces generated by the prescribed degenerate modes is equal to 2.9908 × 10^-2 at ω_1 and 3.5310 × 10^-2 at ω_2. In this case Π_1 projects onto {𝐏_∧, 𝐏_∨}, whereas Π_2 projects onto {𝐏_-, 𝐏_|}.
Essential modes.
This final application case demonstrates a key feature of the Theory of the Synthesis of Modes, i.e. the concept of Essential Modes.
Specifically, given a scatterer operated at a prescribed angular frequency ω_1 and described by the dielectric susceptibility tensor χ(·,ω_1), we compute two resonance modes (ω_1, ξ_A,𝐏_A) and (ω_1, ξ_B,𝐏_B) and then apply our Theory of the Synthesis to these modes. Since the tensor of the dielectric susceptibility is in a one-to-one correspondence with two arbitrary modes, as discussed in a previous Section, we expect that the tensor χ_s(·,ω_1) synthesized by means of (ω_1, ξ_A,𝐏_A) and (ω_1, ξ_B,𝐏_B) via (<ref>) is equal to χ(·,ω_1).
The scatterer of this example consists of a coated (thickness 100 nm) circular (radius 200 nm) gold nanorod operated at f=500 THz (ω_1=π× 10^15 rad/s, free-space wavelength of 600 nm). The relative dielectric permittivity of the gold nanoparticle is -9.44-j 1.51, whereas that of the coating is 4.
Figures <ref> and <ref> show the real and imaginary parts for the selected modes 𝐏_A and 𝐏_B. The synthesized dielectric permittivity tensor is almost equal to that of the prescribed scatterer. As a figure of merit we evaluated the maximum relative error over the scatterer domain Ω:
e=max_𝐫∈Ω||χ(𝐫,ω_1)-χ_s(𝐫,ω_1)||_2/||χ(𝐫,ω_1)||_2,
which, in this case, is equal to 3.3 × 10^-11. In (<ref>) χ is the prescribed tensor of the dielectric susceptibility, whereas χ_s is the tensor of the synthesized dielectric susceptibility.
§ CONCLUSIONS
In this work we introduced a theoretical framework to find the permittivity profile of a dielectric object to synthesize at will its resonant modes. Specifically, we are able to control the spatial distribution of the polarization density field and the resonance frequency of a set of modes. The equations for the synthesis are straightforward and in an explicit form, making them suitable for specific customization. Moreover, we can prescribe the modes at many different frequencies.
The only limit, arising from the underlying physics, consists in the possibility of assigning at most two modes to each individual frequency and eigenvalue (up to three modes in a 3D setting). Indeed, it arises naturally from the theory of the synthesis of modes that, at a prescribed angular frequency, the dielectric susceptibility tensor is in one-to-one correspondence with two of its modes, which we termed essential modes.
We also demonstrated the concept of tunability: the proposed approach enables the design of the permittivity of a dielectric object that not only allows the synthesis at will of its resonant modes, but also allows the resonant modes to be changed in a controlled manner by multiplying the designed permittivity by a proper multiplicative factor. This is relevant from the practical point of view because this operation (multiplication by a constant) is simple to implement.
With this theoretical framework, future developments will be aimed at designing a real-world material approximating the synthesized dielectric susceptibility. Metamaterials are the natural candidates for this purpose.
The method introduced can be transplanted to different linear physical systems, where the constitutive relationship is linear and local, including thermal and mechanical systems.
§ METHODS
All the numerical calculations have been carried out by using the numerical method of <cit.>. All the values of the parameters used for generating the numerical results have been included in the article.
§ DATA AVAILABILITY
All the data supporting the conclusions of this study are included in the
article. Source data are provided with this paper.
§ CODE AVAILABILITY
The computer code and algorithm that support the findings of this
study are available from the corresponding author on request.
§ GREEN FUNCTION
The components of the Green function for this illumination are
G_11(𝐫) = -ζ_0/(4r^3) [ k r x_2^2 H_0(kr) + (x_1^2-x_2^2) H_1(kr) ]
G_12(𝐫) = -ζ_0/(4r^3) x_1 x_2 [ 2 H_1(kr) - k r H_0(kr) ]
G_21(𝐫) = G_12(𝐫)
G_22(𝐫) = -ζ_0/(4r^3) [ k r x_1^2 H_0(kr) + (x_2^2-x_1^2) H_1(kr) ] ,
where ζ_0 is the characteristic impedance of vacuum, k=ω/c_0 the wavenumber, and c_0 the speed of light in vacuum.
§ LORENTZ-DRUDE EXPANSION FUNCTION
The (normalized) amplitude of the elementary Lorentz-Drude expansion function is:
|φ(ω)|/(ω_p/ω_0)^2 = 1/√( [ 1 - (ω/ω_0)^2 ]^2 + (ω/ω_0)^2 (β/ω_0)^2 ) .
Its maximum value is
|φ(ω)|_max/(ω_p/ω_0)^2 = 1/( (β/ω_0) √( 1 - 1/4 (β/ω_0)^2 ) ) ,
and it is achieved at
ω/ω_0 = √( 1 - 1/2 (β/ω_0)^2 ) .
The plot of (<ref>) for different β/ω_0 ratios is shown in Figure <ref>.
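A quick numerical cross-check of the expressions above (a standalone Python sketch, evaluated for β/ω_0 = 0.1, the value used in the examples):

import numpy as np

b = 0.1                                        # beta / omega_0, as in the examples above
x = np.linspace(0.5, 1.5, 2_000_001)           # omega / omega_0
amp = 1.0 / np.sqrt((1.0 - x**2) ** 2 + (x * b) ** 2)   # normalized |phi(omega)|

i = int(np.argmax(amp))
print(x[i], amp[i])                            # ~0.99750 and ~10.013
print(np.sqrt(1 - b**2 / 2), 1 / (b * np.sqrt(1 - b**2 / 4)))   # the closed-form values above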
10
engheta_metamaterials_2006
N. Engheta and R. W. Ziolkowski, Metamaterials: Physics and
Engineering Explorations.
John Wiley & Sons, June 2006.
pendry_controlling_2006
J. B. Pendry, D. Schurig, and D. R. Smith, “Controlling Electromagnetic
Fields,” Science, vol. 312, no. 5781, pp. 1780–1782, 2006.
leonhardt_optical_2006
U. Leonhardt, “Optical Conformal Mapping,” Science, vol. 312,
no. 5781, pp. 1777–1780, 2006.
hughes_adjoint_2018
T. W. Hughes, M. Minkov, I. A. D. Williamson, and S. Fan, “Adjoint Method
and Inverse Design for Nonlinear Nanophotonic Devices,” ACS
Photonics, vol. 5, pp. 4781–4787, Dec. 2018.
Publisher: American Chemical Society.
yao_intelligent_2019
K. Yao, R. Unni, and Y. Zheng, “Intelligent nanophotonics: merging photonics
and artificial intelligence at the nanoscale,” Nanophotonics, vol. 8,
pp. 339–366, Jan. 2019.
lalanne_light_2018
P. Lalanne, W. Yan, K. Vynck, C. Sauvan, and J. . P. Hugonin, “Light
interaction with photonic and plasmonic resonances,” Laser & Photonics
Rev., vol. 12, 2018.
van_bladel_electromagnetic_2007
J. G. Van Bladel, Electromagnetic fields, vol. 19.
John Wiley & Sons, 2007.
kristensen_modes_2013
P. T. Kristensen and S. Hughes, “Modes and mode volumes of leaky optical
cavities and plasmonic nanoresonators,” ACS Photonics, vol. 1, 2013.
muljarov_brillouin-wigner_2010
E. A. Muljarov, W. Langbein, and R. Zimmermann, “Brillouin-Wigner
perturbation theory in open electromagnetic systems,” EPL (Europhysics
Letters), vol. 92, p. 50010, Dec. 2010.
Publisher: IOP Publishing.
lalanne_quasinormal_2019
P. Lalanne, W. Yan, A. Gras, C. Sauvan, J.-P. Hugonin, M. Besbes, G. Demésy,
M. D. Truong, B. Gralak, F. Zolla, A. Nicolet, F. Binkowski, L. Zschiedrich,
S. Burger, J. Zimmerling, R. Remis, P. Urbach, H. T. Liu, and T. Weiss,
“Quasinormal mode solvers for resonators with dispersive materials,” JOSA A, vol. 36, pp. 686–704, Apr. 2019.
kristensen_generalized_2012
P. T. Kristensen, C. V. Vlack, and S. Hughes, “Generalized effective mode
volume for leaky optical cavities,” Optics Letters, vol. 37,
pp. 1649–1651, May 2012.
sauvan_theory_2013
C. Sauvan, J.-P. Hugonin, I. Maksymov, and P. Lalanne, “Theory of the
spontaneous optical emission of nanosize photonic and plasmon resonators,”
Physical Review Letters, vol. 110, no. 23, p. 237401, 2013.
Publisher: APS.
muljarov_exact_2016
E. A. Muljarov and W. Langbein, “Exact mode volume and Purcell factor of
open optical systems,” Physical Review B, vol. 94, p. 235438, Dec.
2016.
Publisher: American Physical Society.
bergman_theory_1980
D. J. Bergman and D. Stroud, “Theory of resonances in the electromagnetic
scattering by macroscopic bodies,” Phys. Rev. B, vol. 22, 1980.
forestiere_material-independent_2016
C. Forestiere and G. Miano, “Material-independent modes for electromagnetic
scattering,” Phys. Rev. B, vol. 94, p. 201406, Nov. 2016.
forestiere_volume_2018
C. Forestiere, G. Miano, G. Rubinacci, A. Tamburrino, R. Tricarico, and
S. Ventre, “Volume Integral Formulation for the Calculation of
Material Independent Modes of Dielectric Scatterers,” IEEE
Transactions on Antennas and Propagation, vol. 66, pp. 2505–2514, May 2018.
pascale_full-wave_2019
M. Pascale, G. Miano, R. Tricarico, and C. Forestiere, “Full-wave
electromagnetic modes and hybridization in nanoparticle dimers,” Scientific Reports, vol. 9, p. 14524, Oct. 2019.
forestiere_nanoparticle_2017
C. Forestiere and G. Miano, “On the nanoparticle resonances in the
full-retarded regime,” Journal of Optics, vol. 19, p. 075601, June
2017.
pascale_spectral_2017
M. Pascale, G. Miano, and C. Forestiere, “Spectral theory of electromagnetic
scattering by a coated sphere,” JOSA B, vol. 34, pp. 1524–1535, July
2017.
forestiere_directional_2019
C. Forestiere, G. Miano, M. Pascale, and R. Tricarico, “Directional scattering
cancellation for an electrically large dielectric sphere,” Optics
Letters, vol. 44, pp. 1972–1975, Apr. 2019.
su_monotonicity_2017
Z. Su, S. Ventre, L. Udpa, and A. Tamburrino, “Monotonicity based imaging
method for time-domain eddy current problems,” Inverse Problems,
vol. 33, p. 125007, Nov. 2017.
tamburrino_monotonicity_2021
A. Tamburrino, G. Piscitelli, and Z. Zhou, “The monotonicity principle for
magnetic induction tomography,” Inverse Problems, vol. 37, p. 095003,
Aug. 2021.
Publisher: IOP Publishing.
Note1
Here 𝐚 ( 𝐫 ) 𝐛 ( 𝐫 ) means that 𝐚^∗ ( 𝐫 ) ·𝐛 ( 𝐫 ) =0.
richmond_te-wave_1966
J. Richmond, “TE-wave scattering by a dielectric cylinder of arbitrary
cross-section shape,” IEEE Transactions on Antennas and Propagation,
vol. 14, pp. 460–464, July 1966.
|
http://arxiv.org/abs/2307.07536v1 | 20230714120857 | Relaxation of experimental parameters in a Quantum-Gravity Induced Entanglement of Masses Protocol using electromagnetic screening | [
"Martine Schut",
"Alexey Grinin",
"Andrew Dana",
"Sougato Bose",
"Andrew Geraci",
"Anupam Mazumdar"
] | quant-ph | [
"quant-ph",
"gr-qc"
] |
Van Swinderen Institute for Particle Physics and Gravity, University of Groningen, 9747AG Groningen, the Netherlands
Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, 9747 AG Groningen, the Netherlands
Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL
Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL
Department of Physics and Astronomy, University College London, London WC1E 6BT, United Kingdom
Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL
Van Swinderen Institute for Particle Physics and Gravity, University of Groningen, 9747AG Groningen, the Netherlands
To test the quantum nature of gravity in a lab requires witnessing the entanglement between the two test masses (nano-crystals) solely due to the gravitational interaction kept at a distance in a spatial superposition. The protocol is known as the quantum gravity-induced entanglement of masses (QGEM).
One of the main backgrounds in the QGEM experiment is electromagnetic (EM) induced entanglement and decoherence. The EM interactions can entangle the two neutral masses via dipole-dipole vacuum-induced interactions, such as the Casimir-Polder interaction. To mitigate the EM-induced interactions between the two nano-crystals, we enclose the two interferometers in a Faraday cage and separate them by a conducting plate. However, any imperfection on the surface of a nano-crystal, such as a permanent dipole moment, will also create an EM background interacting with the conducting plate in the experimental box. These interactions will further generate EM-induced dephasing, which we wish to mitigate. In this paper, we will consider a parallel configuration of the QGEM experiment, where we will estimate the EM-induced dephasing rate and the run-by-run systematic errors which will induce dephasing, and also provide constraints on the size of the superposition in a way that is independent of the model used to create the spatial superposition.
Relaxation of experimental parameters in a Quantum-Gravity Induced Entanglement of Masses Protocol using electromagnetic screening
Anupam Mazumdar
August 12, 2023
=======================================================================================================================================
§ INTRODUCTION
Quantum spatial superposition and entanglement <cit.> are the two key tools to test the quantum nature of gravity in a laboratory <cit.>, see also <cit.> [The results of <cit.> were first reported in a conference talk <cit.>.]. Both tools are inherently quantum in nature and there are no classical counterparts to them. In <cit.>, it was pointed out that two masses of order m∼ 10^-14 kg, if kept in spatial superpositions of order 100 μm and separated by a distance of 450 μm for a time τ∼ 1-2 s, will become gravitationally entangled, sufficiently to be detectable via an entanglement witness <cit.>, if gravity is inherently a quantum entity in which the gravitational interactions are treated locally and the pillars of relativistic quantum field theory are maintained <cit.> [One can introduce a non-local gravitational interaction and compute the entanglement witness, see <cit.>, but the non-local interaction is introduced at the level of the Lagrangian in very specific non-local, infinite-derivative theories of gravity <cit.>; such non-locality can be perceived as arising from string theory, see e.g., <cit.>, or string field theory <cit.>.]. The protocol introduced in <cit.> is known as quantum gravity-induced entanglement of masses (QGEM).
This simple but potent observation made in <cit.> also supports the principle of local operations and classical communication (LOCC) <cit.>, which states that if two quantum systems were pure to begin with, then classical communication would not entangle them at all. One would require a quantum communication/mediator to entangle the two quantum systems <cit.>. By classical communication, we mean that there is a classical probability associated with the local operations, e.g. unitary operations, but no Hilbert space associated with the classical communication. Therefore, irrespective of formal aspects of quantum gravity or its ultraviolet challenges <cit.>, i.e. however we may quantize gravity, if gravity is a quantum entity it would entangle the two spatially superposed masses even at the lowest order in the potential. In fact, the Newtonian potential has no ħ, and despite this, witnessing entanglement between the two superposed masses holds the key to unveiling the quantum nature of the gravitational interaction.
This observation is similar in spirit to Bell's test of quantum nonlocality, where even if ħ→ 0 the quantum correlation does not vanish <cit.>, first observed in the case of large spins violating Bell's inequality, see Refs. <cit.>. The QGEM protocol is precisely based on witnessing this correlation via spin entanglement. The idea is to embed a spin into the quantum system, such as a nitrogen-vacancy (NV) spin in a diamond-type crystal <cit.>, use the spin to create a macroscopic quantum superposition as in the Stern-Gerlach (SG) apparatus <cit.>, and close the one-loop interferometer to measure the interference between the two paths, and hence build spin-spin correlations, e.g., an entanglement witness, between the two adjacent interferometers.
Naturally, witnessing the entanglement will be extremely challenging, there are many sources of noise and one particular noise is indeed induced by the electromagnetic (EM) interactions in the neighborhood of the nanocrystals <cit.>, decoherence due to heating of the crystal, blackbody emission/absorption, scattering of the ambient quanta <cit.> motivated from <cit.>. There are also external jitters due to gas molecules, gravity-induced noise such as gravity gradient noise, and relative acceleration noise <cit.>, and dephasing due to heavy massive objects near the experiment, e.g., cryogenics and vacuum pump <cit.>.
In this paper, we will focus on the EM-induced noise. In particular, we will discuss the situation first proposed in Ref. <cit.>, where a conducting plate is placed between the two test masses to shield them from the vacuum-induced dipole-dipole interaction, e.g., the Casimir-Polder potential <cit.>. However, we consider a different geometrical configuration of the test masses, namely the `parallel' rather than the `linear' configuration. The reason for selecting such a configuration is to maximize the entanglement phase, and hence the entanglement witness, see <cit.>. Here, we will also consider a wider range of relevant effects, especially focusing on the induced electric dipole moment and the dipole moments on the surface of the crystal <cit.>. Besides these, there are also common-mode fluctuations between the two halves of the interferometers. Here also, we will assume that the two halves of the experiment, i.e. the two interferometers, are separated by a conducting plate to minimize the EM-induced interactions between the nano-crystals, as first proposed in <cit.>. We will analyse various sources of dephasing, namely fluctuations in the paths of the interferometer, run-to-run fluctuations in the release position of the nano-crystals, fluctuations in the magnetic field, and fluctuations in the conducting plate. All these fluctuations will manifest as dephasing, in the sense that they will affect the global phase of the density matrix of the two interferometers, which we wish to optimize for the QGEM experiment.
In sec. <ref>, we explain why the parallel configuration and the introduction of a conducting plate could be beneficial in terms of witnessing the entanglement.
Then we add a conducting plate to the setup (sec. <ref>) and discuss the change in the free-fall trajectories of the test masses due to the Casimir- and dipole-interactions between the test masses and the plate (secs. <ref> and <ref>, respectively).
Due to the test masses' trajectories and therefore the accumulated phase being dependent on the initial separation of the test masses to the plate, small fluctuations in the creation of the initial state (such as the distances to the plate, the magnetic gradient, and the dipole orientation) will result in phase fluctuations at the measurement stage. This is discussed in sec. <ref>.
Furthermore, an initial setup that is not perfectly symmetric can cause a deflection in the conducting plate and consequently cause dephasing in the superpositions due to the plate.
An estimation of the coherence time due to dephasing from the deflection of the plate, and thermal fluctuations of the plate, is given in sec. <ref>.
We conclude by showing the relaxed experimental parameters of the new setup needed to witness the entanglement.
§ PARALLEL AND LINEAR SETUPS
We will consider a nano-crystal with a spin; in fact, our discussion is very generic and can be applied to many dopants. As an example, we may consider a diamond-like system with one NV-center; we will assume that the crystal is a sphere and the NV is at its center. For a review of NV-center diamonds, see <cit.>.
We will also assume that the crystal is charge neutral; we will take up the case of surface dipoles separately <cit.>.
We will initiate the spin in a superposition state (see below) and let the crystal pass through the inhomogeneous magnetic field of the SG setup; the spin superposition allows for the creation of a spatial superposition, see <cit.>. There are many schemes to create the spatial superposition, but here we will consider a simple setup where we take into account three steps: (1) acceleration of the crystal for a time τ_a, (2) an intermediate phase when the crystal is not experiencing any SG force, and (3) a last phase in which the trajectories of spin-up and spin-down are recombined.
We distinguish the following two setups that correspond to two different configurations.
* Parallel setup: The direction in which this superposition is created is such that the two superpositions are parallel, as depicted in figure <ref>. It was first considered in <cit.> and further studied in <cit.> including the effects of decoherence.
 * Linear setup: The superpositions are kept adjacent to each other, see figure <ref>.
In this paper we will focus on the parallel setup; the aim of this section is to show that the parallel configuration results in a larger effective entanglement phase within the first ∼5 of the experiment.
We first briefly review the QGEM protocol for the parallel setup; a similar analysis of the linear setup has been done earlier, see <cit.>. We will assume that the initial state of the combined system of the two crystals (labeled system 1 and system 2) is a pure state, represented by a spatial superposition of 0 (spin down) and 1 (spin up):
|Ψ_0⟩ = 1/2⊗_i=1^2( |0⟩_i + |1⟩_i ) .
After creating a spatial superposition, holding it for a time τ and then recombining the superposition states, the two systems become entangled via the quantum nature of gravity, and the final state is given by:
|Ψ(t=τ)⟩ = e^i ϕ/2( |0 0⟩ + e^i Δϕ|0 1⟩ + e^i Δϕ|1 0⟩ + |1 1⟩) ,
with [Actually, since Δ x is time-dependent this should be integrated over time. However, for the purpose of this section, we can consider only the entanglement phase generated in a time τ while Δ x is constant.]
ϕ = G m^2/dτ/ħ , Δϕ = G m^2/√(d^2+(Δ x)^2)τ/ħ - ϕ ,
where m is the mass of the test masses, d and Δ x are as defined in figure <ref>, G is Newton's gravitational constant, ħ is the reduced Planck's constant.
As long as 2Δϕ≠ 2π k (k∈ℤ), the state is non-separable (from Plücker's relation <cit.>, or can be seen explicitly in the context of a perturbation theory in quantum mechanics <cit.>), which means the test masses are entangled, with the maximum entanglement at Δϕ = π/2.
We define therefore the effective entanglement phase as: [
In the linear configuration, this effective entanglement phase is:
Φ_eff = G m^2/d+Δ xτ/ħ + G m^2/d-Δ xτ/ħ - 2 ϕ .
]
Φ_eff=2Δϕ .
Requiring a minimal effective phase, say Φ_eff∼ O(1) determines the experimental parameters such as the mass m and the distance d.
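The dependence of the effective phase on these parameters is straightforward to evaluate numerically. The following is a minimal sketch (not the authors' released code); the example values of d, Δx and τ are purely illustrative:

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.055e-34   # J s

def effective_phase(m, d, dx, tau):
    """Phi_eff = 2|Delta phi| for two masses m at separation d,
    superposition width dx, interacting for a time tau."""
    phi = G * m**2 * tau / (hbar * d)
    dphi = G * m**2 * tau / (hbar * np.sqrt(d**2 + dx**2)) - phi
    return 2 * abs(dphi)

# Illustrative values: m = 1e-14 kg, d = 157 um (the minimal distance derived
# below), dx = 100 um, tau = 1 s -> a phase of order 0.1 rad.
print(effective_phase(1e-14, 157e-6, 100e-6, 1.0))
```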
Minimizing the separation would increase the effective phase, but there is a minimal distance d_min between any two superposition instances required such that the Casimir-Polder-induced entanglement phase is sub-dominant compared to the gravitationally-induced entanglement phase.
The Casimir-Polder potential between the two neutral dielectric masses is <cit.>:
V_CP = - 23ħ c/4πR^6/d^7(ε-1/ε+2)^2 ,
with ε the dielectric constant of the test masses, d the separation between the two masses, and R the radius of the test mass.
Comparing the gravitational and Casimir-Polder interactions, which go as ∼1/d and ∼1/d^7, respectively, the Casimir-Polder interaction dominates at short separations.
From the condition that the gravitational potential is at least one order of magnitude larger than the Casimir-Polder potential, we find a minimal distance (assuming that the test masses are perfect spheres, ρ=3m/4π R^3) <cit.>:
d ≥[ 230/4πħ c/G(3/4πρε-1/ε+2)^2]^1/6≡ d_min≈ 157 μm ,
where ρ=3.5 (the density of diamond), ε=5.1 (the dielectric constant of diamond), and c the speed of light.
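As a quick numerical cross-check (a rough sketch in SI units, not the authors' code), the expression above can be evaluated directly:

```python
import numpy as np

hbar, c, G = 1.055e-34, 3.0e8, 6.674e-11   # SI units
rho, eps = 3500.0, 5.1                      # diamond density (kg/m^3), dielectric constant

d_min = (230 / (4 * np.pi) * hbar * c / G
         * (3 / (4 * np.pi * rho) * (eps - 1) / (eps + 2))**2)**(1 / 6)
print(d_min)   # ~1.5e-4 m, i.e. of the order of the quoted ~157 um
```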
For this minimal distance, we compare the effective entanglement phase for the parallel setup (eq. (<ref>)), with the effective phase from the linear setup (eq. (<ref>)) which was discussed in <cit.>.
The comparison is given in figure <ref>.
The figure shows that for this minimal distance (which is determined by the material properties), the parallel configuration typically generates a larger effective phase within 1 of interaction, independent of mass <cit.>.
Besides the generated entanglement, dephasing via the Casimir interaction also plays a role.
For example, dephasing arises if the two superposition instances experience different Casimir interactions with the conducting plate that will be introduced in the next section.
This specific type of dephasing will be discussed in section <ref>.
The dephasing due to a general effect can be characterized by the dephasing rate γ_d.
The final wavefunction presented in eq. (<ref>) including dephasing is [Assuming some symmetries in the setup, the equation is derived in more detail in section <ref>.]:
|Ψ(t=τ)⟩ = e^i ϕ̃/2( |0 0⟩ + e^i Δϕ - i γ_d τ|0 1⟩ + e^i Δϕ|1 0⟩ + e^-iγ_d τ|1 1⟩) .
§ CONDUCTING PLATE
The previous section shows that a parallel orientation of the superpositions results in a larger entanglement signal than a linear configuration within the first few seconds of the experiment.
However, Ref. <cit.> suggested the placement of a conducting plate between the two superpositions, which shields the two superpositions from each other's Casimir-Polder interaction and electric field and thereby allows for a smaller minimal distance (resulting in a higher entanglement within one second of the total experimental time).
This relaxes the allowed experimental parameters needed to achieve Φ_eff∼ O (1).
Based on figure <ref> we expect that introducing a conducting plate in the parallel configuration will further relax the required experimental parameters.
To analyze the parallel configuration, we introduce a perfectly conducting and reflective plate of thickness W at a distance z from both test masses, see figure <ref>.
The plate is assumed to be grounded and is clamped in the x-direction with the experimental capsule.
We furthermore assume that the Faraday cage which encloses the experiment is free falling and that the plate and test masses are also in free fall.
The conducting plate will screen the electromagnetic interactions (such as the Casimir-Polder or dipole-dipole interaction) between the two test masses, allowing the minimal distance d_min to be smaller than in the absence of this plate. However, the conducting plate will interact with the two test masses individually, and hence the resulting force will alter the trajectory of the superposition states by accelerating them toward the plate.
This will modify the distance to the plate, z(t), over time.
We consider here the Casimir-Polder and the dipole interaction between a dielectric sphere and a conducting plate and show the required initial separation z(0)=d and accumulated effective entanglement phase Φ_acc during the experimental time.
Due to the acceleration a=F/m induced by the Casimir/dipole force between the crystal and the conducting plate, the position of the superposition changes by a small amount within an infinitesimal time period δ t.
Assuming that the acceleration a is constant during an infinitesimal time δ t, the position at z(t+δ t) is:
z(t+δ t) = z(t) - 1/2 a(t) · (δ t)^2 - v_0(t) ·δ t ,
with v_0(t+δ t) = v_0(t) + a(t) ·δ t the velocity, with initial conditions v_0(0)=0, z(0)=d.
The effective phase accumulated during an infinitesimal time period δ t is given by:
Φ_eff(δ t) = 2Gm^2δ t/ħ[ 1/√([2z(t+δ t)+W]^2+(Δ x)^2) - 1/(2z(t+δ t)+W)] .
The total accumulated effective phase can be found by integrating the infinitesimal effective phase over time.
Φ_acc = ∫_0^τ (δΦ_eff/δ t) dt .
Rather than solving this integral analytically, we find the total accumulated phase numerically, by adding the values of eq. (<ref>) over time and updating the position at every time step using eq. (<ref>)
[The python code has been made public on https://github.com/MartineSchut/QGEMShielding.].
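The stepping scheme can be sketched as follows; this is a minimal illustration (not the released code), which keeps Δx fixed and takes the attractive force F(z) towards the plate as an input, to be specified by the Casimir and dipole expressions of the following subsections:

```python
import numpy as np

G = 6.674e-11     # m^3 kg^-1 s^-2
hbar = 1.055e-34  # J s

def accumulate_phase(force, m, d, W, dx, t_total, dt=1e-4):
    """Euler-step z(t) under an attraction force(z) towards the plate and
    accumulate the magnitude of the effective phase increment of eq. above."""
    z, v, phi_acc = d, 0.0, 0.0
    for _ in range(int(t_total / dt)):
        a = force(z) / m                    # acceleration towards the plate
        z = z - 0.5 * a * dt**2 - v * dt    # position update
        v = v + a * dt                      # velocity update
        phi_acc += (2 * G * m**2 * dt / hbar
                    * (1 / (2 * z + W) - 1 / np.sqrt((2 * z + W)**2 + dx**2)))
    return z, phi_acc
```

For simplicity the sketch ignores the time dependence of Δx during the creation and recombination stages discussed below.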
So far we have considered only the time period when we can neglect the SG force on the crystal, but we should also include the accumulated phase and the displacement during the creation and recombination of the spatial superpositions.
The superpositions are created by applying a magnetic gradient ∂_z B, resulting in the acceleration:
a_m = g μ_B ∂_z B ,
with the electronic g-factor of the NV-centre g∼ 2 and μ_B the Bohr magneton. The recombination of the spatial superpositions is achieved by simply reversing the direction of the magnetic gradient.
The direction of the magnetic acceleration a_m is in the x-direction and thus perpendicular to the acceleration towards the plate.
In the parallel configuration, the Casimir and the dipole interactions will not influence the superposition width. The spin-dependent force will be acting along the x-direction only.
The displacement due to the SG force, i.e., due to the magnetic gradient ∂_z B, sets the superposition width, which after a time τ_a is given by:
Δ x = g μ_B ∂_z B/2 mτ_a^2 .
Of course, we are considering a very simple model for creating the superposition; nonetheless, this analysis gives us some idea of how the actual experimental setup can be conceived. A more comprehensive protocol for creating a large superposition size within a short time period is discussed in <cit.>.
The displacement and the infinitesimal phase during the creation/recombination of the spatial superpositions are again given by Eqs. (<ref>) and (<ref>), respectively, but with Δ x time-dependent and given by eq. (<ref>).
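As a rough consistency check of the superposition width: assuming an acceleration time of τ_a = 0.25 s (an assumed value, not stated explicitly here) and the nominal gradient value of 5×10^5 used later in the fluctuation analysis, the width comes out close to the ∼29 μm quoted below:

```python
g_NV = 2.0          # electronic g-factor of the NV-centre
mu_B = 9.274e-24    # Bohr magneton, J/T
m = 1e-14           # kg
dB_dz = 5e5         # magnetic field gradient (assumed to be in T/m)
tau_a = 0.25        # s, assumed acceleration time

dx = g_NV * mu_B * dB_dz / (2 * m) * tau_a**2
print(dx)           # ~2.9e-5 m, i.e. ~29 um
```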
We first analyze the Casimir-Polder interaction between the conducting plate and the nano-crystal, and then study the surface dipole.
§.§ Casimir Force
The Casimir force will act between a diamond-like crystal and the conducting plate. The force between a static plate and a free dielectric sphere is given by [
The potential between a dielectric sphere and a perfectly reflective plate was derived by Casimir and Polder in <cit.>. In the limit where the separation z is much larger than the wavelength of the electromagnetic field <cit.>:
V_CP = - 3 ħ c α/8π z^4
Note that this differs from eq. (<ref>) which is the potential between two dielectric spheres rather than the potential between one dielectric sphere and a perfectly reflective plate of eq. (<ref>).
The force due to this interaction was found in <cit.>, and can be seen to be F_CP = - ∂_z V_CP.
We take the complex polarizability of the dielectric sphere, α=R^3(ε-1)/(ε+2).
Here we have assumed that the dielectric properties of the test masses are independent of the frequency of the electric field and that its imaginary part is negligible at low temperatures (see experimental findings in <cit.>).
]: <cit.>
F_c = - 3 ħ c/2π(ε-1/ε+2) 3 m/4 πρ1/z^5 ,
where z(t) is the distance between the plate and the superposition at a given instant of time. We have assumed the crystal to be perfectly spherical, with ρ the density of the spherical masses (4π/3R^3 ρ = m).
From eq. (<ref>), we can find the position of the dielectric sphere due to the Casimir force-induced displacement via numerical integration. Several methods could be used; we have provided our code on GitHub.
Fig. <ref> shows the trajectory z(t) of a single superposition as a function of time, for different initial values of z(0) = d.
Note that this plot is mass-independent because the mass cancels out in the acceleration.
The closer the initial separation to the plate, the larger the deflection of the trajectory.
At a separation of around z(t)∼15 the trajectory is approximately constant.
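In terms of the integrator sketched earlier, the Casimir contribution can be written as a simple force function (again only a rough sketch with SI constants):

```python
import numpy as np

hbar, c = 1.055e-34, 3.0e8
eps, rho = 5.1, 3500.0
m = 1e-14   # kg

def casimir_force(z):
    """Attractive Casimir force between the dielectric sphere and the plate."""
    return (3 * hbar * c / (2 * np.pi) * (eps - 1) / (eps + 2)
            * 3 * m / (4 * np.pi * rho) / z**5)

# The acceleration F/m is mass-independent: roughly 5e-9 m/s^2 at z = 41 um,
# growing as z^-5, so the trajectory only bends appreciably at small z.
print(casimir_force(41e-6) / m)
```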
§.§ Dipole Force
Either due to an external electric field or due to the impurities on the surface of the diamond, the test masses can have some induced or internal dipole moment, respectively <cit.>.
Using the image method we can find the electric field from the dipole at the surface of the conducting plate. The resulting potential is given by:
V_D = - p⃗_1 ·E⃗_2 = - p⃗_1 ·( 3(p⃗_2·r⃗)r⃗/r^5 - p⃗_2/r^3) 1/4πε_0 ,
with p⃗_1 (p⃗_2) the dipole moment of the test mass (image test mass) and r⃗ the radius vector between the two (image) dipoles.
E⃗_2 is the electric field due to the image dipole p⃗_2 <cit.>.
The force between the plate and the test mass is then: <cit.>
F_D = 1/4πε_03p^2/16z^4[1+cos^2(θ)] ,
where z is the separation to the plate, θ is the angle between the direction of the separation and the dipole moment vector, and p is the dipole moment magnitude.
The dipole moment is given by the sum of the induced and internal dipole moments (p_e and p_i, respectively), p=p_i + p_e.
The induced dipole due to some external electric field E_0 in the dielectric test masses is <cit.>:
p_e = 4πε_0 ( ε-1/ε + 2 R^3 ) E_0 .
Considering an external electric field of E_0∼ 2×10^5 [
As mentioned in section <ref>, large superpositions can be created using a wire.
The ampacity for copper nanotubes is found to be J=10^13 <cit.>.
With an electrical conductivity of σ = 4.6×10^7 at room temperature <cit.>, the electric field found from Ohm's law is E = J/σ∼ 2×10^5.
],
the externally induced dipole moment is of the order p_e∼ 6×10^-4 e for m=10^-15.
The internal dipole moment in the diamond-type crystal is estimated experimentally to be p_i = 10^-2 e (with e the electric charge) for spheres of R∼5 <cit.>.
A diamond sphere of mass 10^-16-10^-14 would correspond to R≈0.2-0.9.
The internal dipole moment of diamond spheres is not exactly known; we therefore take it to be constant at p_i = 10^-2 e, which may be an overestimation.
Since the internal dipole moment for microspheres is two orders of magnitude larger than the induced one, we continue by considering only a total dipole moment of p=10^-2 e (at least for the range of masses considered here).
For an attractive dipole force between the plate and the test mass, given in eq. (<ref>), the position of the test mass (from eq. (<ref>)) can be found using numerical integration.
The trajectory z(t) of a single superposition instance for different z(0)=d are shown in figure <ref>.
The displacement due to the dipole interaction is dominant over the displacement resulting from the Casimir interaction [
This is the worst-case scenario where the dipoles are aligned such that cos(θ)=1, and for the estimated p_i mentioned previously.
] as one can see by comparing figures <ref> and <ref>.
Combining both effects, the initial distance allowed in order for the test mass not to collide with the plate is smaller than the minimal distance required in the absence of the plate.
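A corresponding force function for the image-dipole interaction is sketched below. Note that the quoted dipole moment p = 10^-2 e is converted here under the assumption that it is expressed in units of e·cm (p ≈ 1.6×10^-23 C m); this unit assignment is an assumption of ours, chosen because it reproduces trajectories of the kind shown in the figures, and the numbers should be treated as illustrative.

```python
import numpy as np

eps0 = 8.854e-12      # F/m
e = 1.602e-19         # C
p = 1e-2 * e * 1e-2   # C m, assuming the quoted 1e-2 e is in units of e*cm
m = 1e-14             # kg

def dipole_force(z, theta=0.0):
    """Attractive force between the test-mass dipole and its image in the plate."""
    return 3 * p**2 * (1 + np.cos(theta)**2) / (4 * np.pi * eps0 * 16 * z**4)

# With this assumption the dipole acceleration at z = 41 um is ~3e-5 m/s^2,
# several orders of magnitude above the Casimir value at the same distance.
print(dipole_force(41e-6) / m)
```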
§.§ Accumulated Phase
The time-dependent accumulated phase is given in eq. (<ref>) and plotted in figure <ref> for the displacement due to the dipole and Casimir interaction.
The dipole force is taken to be maximal (θ=0 in eq. (<ref>)).
Even after the recombination of the spatial superposition has begun, the phase still grows significantly, which we attribute to the distance between the superpositions continuing to decrease during this stage of the experiment.
A measurable phase can be achieved by experimentally realizing a superposition width of at least ∼30 and closing it within 1 for a mass of 10^-14.
A mass that is one order of magnitude smaller, although allowing a smaller separation to the plate due to the smaller attractive force towards the plate, also couples less gravitationally and results in an accumulated effective entanglement phase approximately two orders of magnitude smaller for the same superposition size.
§ ENTANGLEMENT PHASE FLUCTUATIONS FROM REPEATED EXPERIMENTS
The entanglement is witnessed by performing repeated measurements on the states of the spin embedded in the NV-centre of the diamond.
During this process, the test masses need to be prepared repeatedly in the same initial state.
A small deviation in the initial state results in a change in the final state and therefore yields a different effective entanglement phase.
We consider here imbalances in the distance to the plate (sec. <ref>), the imbalance in the magnetic field gradient (sec. <ref>) resulting in an imbalance in the superposition size, and an imbalance in the dipole orientation θ (sec. <ref>).
These imbalances are illustrated in Fig. <ref>.
However, first, we give an estimate of the minimal effective entanglement phase necessary for entanglement to be detectable in the presence of noise from imbalances in the initial conditions, in order to fix our experimental parameters.
§.§ Minimal Entanglement Phase
The detection of entanglement is done via a witness which is constructed by repeated measurements on the spin states of the test masses.
Appendix <ref> shows the derivation of the Positive Partial Transpose witness <cit.>, which was found to be best for the two-qubit QGEM setup <cit.>.
Its expectation value can be estimated as:
(𝒲ρ) ≈γ t - 1/2Φ_eff
at short time-scales, where γ is the decoherence rate, and γ_d the dephasing rate. A negative witness expectation is a measurement of an entangled state.
In order to find the minimal experimental conditions necessary for detecting entanglement, we now consider what phase Δϕ is required.
§.§.§ Shot-noise limited phase
Fluctuations in the initial conditions of the experiment cause a run-to-run difference in the measurement of the entanglement phase.
We give a rough estimate of the minimally detectable entanglement phase given a fixed number of measurements.
Assuming that the variation in the initial conditions follows a Normal distribution, we use the shot-noise limited phase uncertainty:
Δϕ_SN = √(1/N) ,
where N is the number of trials.
We consider the effective entanglement phase to be detectable if it is more than five times (5σ-rule) larger than the phase uncertainty
Φ_eff≥ 5 Δϕ_SN .
We choose an experimentally realistic number of trials of N = 10000 [
Note that this number of measurements is an estimate based on the shot-noise of the actual number of measurements that one needs to witness entanglement, which is witness-dependent.
]. This corresponds to roughly two weeks of measurement if the time span of one trial is one minute. This is realistic for both the Einstein-elevator type of free fall experiments as well as the particle-launching type of experiments.
Therefore, the assumed phase uncertainty is Δϕ_SN = 0.01 rad and the minimally detectable phase difference Φ_det≥ 0.05 rad.
We note that, in principle, free fall experiments can be repeated almost as fast as the fall time allows (a few seconds), often fully automatically, hundreds of thousands of times, which would significantly improve the success chances of the proposed experiments.
For sources of technical noise relevant to anticipated experiments, such as magnetic field gradient fluctuations and initial position fluctuations (see secs. <ref>-<ref>), we assume the different quantities do not correlate and vary with a normal distribution 𝒩(μ, σ) around their nominal value μ with a (conservatively) chosen, experimentally based standard deviation σ. The final effect of these fluctuations on the determined phase difference is reduced by the same factor of 1/√(N).
We are fully aware of the fact that the desired effect of gravity-induced entanglement does not have to be larger or even of the same order as “parasitic” effects as long as those effects can be corrected for by measuring their different dependence on, for instance, the distance between the micro-spheres. Low statistical uncertainty allows to “trade statistics for systematics” and detect even minute effects. Still, we stick to our conservative approach and consider the effect of interest to be differentiable from other effects if its phase exceeds the other effects by the minimal detectable phase Φ_eff in eq. (<ref>).
§.§.§ Decoherence and dephasing
As can be seen from eq. (<ref>), when witnessing the entanglement, the entanglement phase is counter-acted by the decoherence rate.
As was studied intensively in <cit.>, a non-zero decoherence rate also increases the number of measurements required.
Similarly, dephasing effects and environmental noise sources will make the entanglement harder to measure, and increase the necessary number of measurements.
From the scattering with air molecules and blackbody photons we would expect a decoherence rate of at least 0.05 <cit.>.
Therefore γτ∼ 0.025 (since most decoherence happens for large Δ x only the interaction time τ is taken).
In order to get an estimate of the required entanglement phase also including decoherence we require that the witness value with decoherence is still -0.05, as it was without decoherence based on the shot-noise limited phase.
Thus we would require an effective entanglement phase Φ_eff∼ 0.10 rad.
This would correspond to a superposition width of Δ x≈29 for an initial distance d = 41.
Since the shot noise is taken to be 0.01 rad, and we consider imbalances in 4 initial conditions, we allow the fluctuation in the phase due to each imbalance to be at most 12.5%, which we use to determine the necessary precision of our initial conditions.
From eq. (<ref>) and appendix <ref> we see that the dephasing decreases the effective phase and increases the witness expectation value, respectively.
This will also lead to an increase in the number of measurements necessary for detecting entanglement.
The effect of the dephasing is discussed in more detail in sections <ref> and <ref>.
In the remainder of this paper, we consider the test mass to be a diamond NV center with m = 10^-14 and p = 10^-2 e, a plate with W = 1, an experimental time of 1, a superposition width of Δ x≈29 and an initial separation between the particles of 2d + W = 83.
§.§ Fluctuation in the entanglement phase due to an imbalance in d
If the test masses are not placed every time exactly at a distance d from the plate, but within a range d±δ d_1, with δ d_1 a small fluctuation, then the phase will fluctuate as Φ_eff±δΦ_eff run-to-run.
For the starting condition z(0)=d±δ d_1, the phase and its fluctuation are plotted as a function of d_1 in figure <ref> in red.
From the plot, we see that in order to have a fluctuation δΦ_eff of at most 12% of the original phase, for an initial distance of z(0) = 41 ±δ d_1, the uncertainty in d is restricted to δ d_1 < 0.48 μm. [
The point of closest approach is also dependent on d and is found to be 20 μm for z(0) = 41.48 μm, 16 μm for z(0) = 41 μm, and 7 μm for z(0) = 40.52 μm.
]
δΦ_eff/Φ_eff≤ 0.12 ⇒ δ d/d≈ 0.01 , for d=41 .
If d>41 then larger fluctuations are allowed.
For example if d=51, the allowed fluctuation such that δΦ_eff/Φ_eff < 0.12, is δ d_1<1.54.
However, due to the larger distances, the phase is more than two times smaller.
When placing the masses as close as possible, the room for error in initial conditions is smaller because the distances are smaller and the acceleration towards the plate goes with 1/z^4 or 1/z^5, but the accumulated phase is larger exactly because of this reason.
This dependence of the phase on the distance is also the reason for the asymmetry of the red lines in figure <ref> around Φ_eff=0.105.
§.§ Fluctuation in the entanglement phase due to tilted superposition
Similarly, if the creation of the spatial superposition is not perfectly parallel with respect to the plate but slightly tilted [
The superpositions can be tilted in two ways, in a symmetric way that keeps them parallel (Θ_1 = Θ_2), and in an asymmetric way such that they are no longer parallel (Θ_1 = - Θ_2). Both types of tilt were considered here and it was found that the asymmetric tilt provides the upper blue bound line in figure <ref>, while the symmetric tilt provides the lower bound blue line.
The reason for this difference can be found by looking at the effective phase (eq. (<ref>)).
Using a simple geometry argument the asymmetric tilt approximately reduces the second term in the expression for the phase (increasing the effective phase), while the symmetric tilt reduces the first term (reducing the effective phase).
], this results in a different effective entanglement phase run-to-run.
A tilt resulting in the superposition instances being within a range d±δ d_2 away from the plate is considered; the tilt could also be expressed in terms of an angle.
For the starting condition z(0)= 41 ±δ d_2, the phase and its fluctuation are plotted as a function of δ d_2 in figure <ref> in blue.
Again, we consider a maximal phase fluctuation of 12%; the figure shows the needed precision is δ d_2 < 0.46, which slightly tightens the previously required precision of δ d_1< 0.48 [
Since there is a difference in attraction towards the plate between the superposition instances, there will be a small enlargement of Δ x. This is not taken into account here since the result δ d_2 < 0.49 suggests the allowed difference in position is so small that this effect is negligible.
].
The asymmetry of the shaded regions in figure <ref> can be explained as follows.
The shaded regions are asymmetric around the line δ d_i = 0 (Φ_eff=0.105) because a larger d has a smaller initial attraction force towards the plate and has a larger separation at the end of the experimental time, t=2τ_a+τ.
This 1/z^4 or 1/z^5 dependence of the force results in an asymmetry in the plot.
The difference in the variations δ d_1 and δ d_2 can be explained from the expression of the effective phase.
While δ d_1 influences both terms in the effective phase (eq. (<ref>)) approximately in the same way, δ d_2 influences only the second or first term in eq. (<ref>).
Therefore the variation δ d_1 has less influence on the total effective phase (which is the difference between the first and second terms) compared to δ d_2.
There is an additional constraint on δ d_2 due to the dephasing of the superposition which arises due to the two superposition states having different Casimir-and dipole interactions with the plate and therefore picking up a relative phase.
The interaction of a single superposition with the deflected plate can imprint which-path information on the plate and dephase the test masses.
This source of dephasing is not relevant to the other imbalances because the other imbalances preserve the equidistance of the superposition instances to the plate, which only results in a global phase.
If there is some δ d_2>0 there is a non-global phase as well, which we denote as the dephasing phase ϕ_d:
|ψ(τ)⟩ ∝1/√(2)[ e^i ϕ_d|0⟩ + |1⟩] ,
The phase ϕ_d is given by the Casimir-and dipole interaction with the plate, ϕ_d∼ (V_C+V_D)t/ħ (see Eqs. (<ref>),(<ref>)).
The dephasing due to interaction with the point of the plate at the closest approach is: [
The expression of the non-global phase is dependent on which way the superposition is tilted. Here we assume that the superposition is tilted such that the |0⟩ state is closest to the plate. Although the expressions (<ref>), (<ref>) would be different if the state |1⟩ were closest to the plate, the total effect would be the same due to the symmetry of the setup.
].
ϕ_d = (γ_d^C + γ_d^D)t
γ_d^C = 3 c R^3/8π(ε-1/ε+2) [1/z_0^4 - 1/z_1^4]
γ_d^D = p^2/16πε_0ħ[ 1/z_0^3 - 1/z_1^3]
with γ_d^C,D the dephasing rates due to the Casimir (C) and dipole (D) interactions, and z_0 and z_1 the distances to the plate of the |0⟩ and |1⟩ states, respectively. Note that these rates are time-dependent through the time dependence of z.
Similarly to eq. (<ref>) the total dephasing is found numerically and plotted as a function of δ d_2 in figure <ref>.
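The instantaneous rates above translate directly into code (a sketch only; as in the text, the total dephasing phase requires integrating these rates along the trajectory z(t), and the numerical value of p again relies on the unit assumption made earlier for the dipole moment):

```python
import numpy as np

hbar, c, eps0 = 1.055e-34, 3.0e8, 8.854e-12
eps, rho, m = 5.1, 3500.0, 1e-14
R = (3 * m / (4 * np.pi * rho))**(1 / 3)   # sphere radius from the mass
p = 1.6e-23                                # C m (assumed, see the dipole section)

def casimir_dephasing_rate(z0, z1):
    return (3 * c * R**3 / (8 * np.pi) * (eps - 1) / (eps + 2)
            * (1 / z0**4 - 1 / z1**4))

def dipole_dephasing_rate(z0, z1):
    return p**2 / (16 * np.pi * eps0 * hbar) * (1 / z0**3 - 1 / z1**3)
```

For the same pair (z_0, z_1) the dipole rate exceeds the Casimir rate by several orders of magnitude, which is why it sets the tightest constraint on δ d_2.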
In eq. (<ref>) we give the witness in terms of the dephasing rate ϕ_d.
From this expression, we find that for an effective phase of Φ_eff = 0.1 rad and a decoherence rate of γ = 0.05 for a time τ, a dephasing of ϕ_d = 0.2 rad would increase the witness by approximately a factor two.
In <cit.> this order of increase in the witness expectation value led to about an increase in the number of measurements of at least a factor of two.
For a dephasing of ϕ_d = 0.1 rad the witness expectation value stays approximately the same.
From figure <ref> we see that if we require a maximum dephasing of ϕ_d = 0.1, the restriction put on the fluctuation δ d_2 is
δ d_2 < 2.8 from the Casimir interaction and
δ d_2 < 10^-2 from the dipole interaction.
The dominant dipole interaction causes significant dephasing, and if the conducting plate setup is to be realized the dipole dephasing has to be mitigated.
Also, the Casimir interaction with the plate, which cannot be mitigated, puts strict constraints on the precision of the initial conditions, of the order of femtometers.
One can relax these constraints by for example increasing the number of measurements. If we allow ϕ_d = 0.2, which will increase the number of measurements noticeably, we would require δ d_2 < 5.6.
Alternatively, we could also increase the mass of the test masses, since Φ_eff scales with m^2, while ϕ_d scales with m, this would increase the effective entanglement phase more relative to the dephasing.
Increasing the separation to the plate also decreases the dephasing, as shown in figure <ref>; however, this also decreases the effective entangling phase.
§.§ Entanglement phase fluctuation due to Imbalance in ∂ B
Another initial condition that can experience run-to-run fluctuations is the strength of the magnetic field gradient used to create the superposition, ∂ B →∂ B + δ (∂ B). Any foreseeable systematic fluctuation in the magnetic field gradient can influence the phase in each run of the experiment.
For the starting condition ∂ B = 5×10^5 ±δ (∂ B), the phase and its fluctuation are plotted in figure <ref>.
From this plot we can see that fluctuations of approximately δ (∂ B) = 3.43×10^4 are allowed, giving the ratio:
δ (∂ B)/∂ B∼ 0.07 , for ∂ B=5×10^5 .
Similarly to the fluctuation in d, a larger ∂ B allows larger fluctuations.
Figure <ref> is symmetric around the zero-fluctuation-value since there is a linear dependence of Δ x on ∂ B (see Eq. (<ref>)), and for small fluctuations in the superposition size the effective phase Φ_eff can be approximated as:
Δϕ = Gm^2/√(d^2+(Δ x)^2)τ/ħ[ 1 + δ (Δ x)/Δ x] - ϕ .
The dependence on Δ x and thus ∂ B is approximately linear for small variations.
Using Eq. (<ref>), we find the allowed fluctuation in Δ x.
Reading the right vertical axis in figure <ref>, we find that a 12% variation of the phase Φ_eff corresponds to Δ x = 29 ± 2.0.
If the magnetic gradient is caused by the presence of a wire, then its fluctuation is thus caused by a fluctuation in the current of that wire, see <cit.>.
The fluctuation in the superposition size δ (Δ x) is therefore related to the fluctuation in the current I <cit.>
0.07 = δ (Δ x)/Δ x∼δ I/I .
Typically, a current density of J=10^13, which is the ampacity for the copper nanotubes mentioned in footnote <ref>, matches a current of I = 7 <cit.> [
In <cit.> the area of the test material in which the ampacity was measured was 7.2 × 10^-13; this would correspond to a wire with radius 478.
].
Therefore, given the allowed fluctuation in the superposition size in eq. (<ref>), we can estimate that the fluctuation in the current can be at most δ I = 0.48. Furthermore, the fluctuations in the current due to thermal effects in the wire (the Johnson-Nyquist noise) are approximately δ I ∼ 10^-12 at room temperature, see <cit.>. Hence, Johnson-Nyquist noise is well within our estimate of the current fluctuations that satisfy δΦ_eff/Φ_eff≤ 0.12.
It should be noted that the effective phase is dependent on the protocol for creating the superposition.
In this paper, we use a very general protocol Eq. (<ref>).
There are more complicated protocols that can reach the same superposition size using magnetic fields that are experimentally easier to realize, see <cit.>.
§.§ Entanglement phase fluctuation due to imbalance in the dipole moment
Another initial condition that influences the trajectory of the diamonds and therefore the accumulated effective phase is the orientation of the dipole moments.
So far we have taken θ=0 in Eq. (<ref>), which gives the `worst-case scenario' in the sense that the attraction due to the dipole moment is maximal (and therefore the displacement towards the plate is maximal).
Note that the angle θ is with respect to the vector going from the test mass towards the plate and is thus aligned with the -ẑ-direction for the test mass labeled 2 and the +ẑ-direction for the test mass labeled 1 (see Fig. <ref>).
We introduce small fluctuations around θ=0 by imagining that the dipoles of the test masses have the same angle ±δθ.
We assume for simplicity that the fluctuation on both test masses is the same. Fig. <ref> shows the range of fluctuations in the effective phase for δθ∈ [0,π/2].
Since the exerted dipole force goes with cos^2(δθ), the change in phase is the same for ±δθ.
For θ>0 (but smaller than π/2) the dipole force decreases and for the starting position d=41, the final distance to the plate increases, hence the accumulated entanglement phase decreases.
This can also be seen from figure <ref>.
The decrease again is non-linear because the dipole force goes with 1/z^4.
The bound on the dipole moment orientation is given by:
δΦ_eff/Φ_eff≤ 0.12 ⇒ δθ≈ 0.17 π ,
which can be read off from figure <ref> (0.17π≈π/6).
We have not considered fluctuations in the magnitude of the electric moment, p, because we have assumed that the magnitude of the electric moment is due to the internal dipole p=p_i, which is constant if the same test mass is used over repeated runs. But a similar analysis can be performed by varying p.
§ COHERENCE LOSS DUE TO THE CONDUCTING PLATE
In section <ref> the dephasing due to the plate was found.
Due to an imbalance in the initial conditions between the two test masses, there can be a net force acting on the plate.
We find the deflection due to fluctuations δ d_1 and δθ.
Additionally, we estimate dephasing due to thermal fluctuations in the plate.
§.§ Deflection of the conducting plate
A small uncertainty in the initial placement of the test masses relative to the plate, or in their dipole orientation, will lead to a net force acting on the plate that originates from the difference in the distance-dependent Casimir-and dipole interactions.
The net force causes a deflection in the plate, which is clamped at both ends in the
x direction.
We analyze the additional dephasing with respect to sec. <ref> due to the imbalances discussed previously: δ d_1, and δθ.
Note that δ (∂ B) is independent of z(t) and does not cause any plate deflection.
In the linear setup in Fig. <ref> the net force could be considered a point force acting on the middle of the plate due to the alignment of the superposition with respect to the plate <cit.>.
In the parallel setup, the superposition is aligned in a direction parallel to the plate.
However, we also use the point-source approximation because we consider the length of the sides of the plate (of the order L≥1) to exceed the superposition width of the test masses (of the order Δ x∼ 30) such that Δ x/L∼ 0.03.
Compared to the size of the plate we approximate the superposition to be point-like.
Additionally, z(t)>Δ x(t) for any t, so although the deflection of the plate is actually in a superposition, we approximate it by the deflection due to a single point source.
Recall that the test masses and plate are in free fall; furthermore, we assume that the test masses are set up around the mid-point of the plate.
The maximal deflection δ z at a distance a away from the mid-point of the plate, due to the maximal net point force F at the center of the plate is: <cit.>
δ z_max = F_max (L-2a)^2 (L+4a)/(192 E I) = F_max (L-2a)^2 (1+4a/L)/(16 W^3 E) ,
with E the Young's modulus of elasticity of the plate, L the length of the plate, and W the thickness of the plate.
Eq. (<ref>) holds for any conducting plate; we specifically consider a Silicon-Nitride plate that is coated with a very thin layer of gold and has a thickness of W=1 and sides of length L=1.
We assume that due to the gold layer being very thin, we can take the material properties of the plate to be those of Silicon-Nitride.
Silicon-Nitride has a Young's modulus of 270 and a density of 3.1 <cit.>.
The area moment of inertia in the plane of the plate is substituted to get the final result in eq. (<ref>),
I = ∫_-L/2^L/2∫_-W/2^W/2 z^2 dz dx = 1/12 W^3 L .
The force F_max is given at the point of closest approach to the center, at the time τ+2τ_a.
Note that at this time the superpositions have been recombined and the point-force approximation holds true; however, there is no Casimir/dipole dephasing since Δ x = 0.
If there is some maximal imbalance ±δ d_1 in the initial placement of the test masses, then the maximum force at a distance a from the center is:
F_max = F_C(z_min(d+δ d_1)) - F_C(z_min(d-δ d_1))
+ F_D(z_min(d+δ d_1)) - F_D(z_min(d-δ d_1)) ,
with F_C and F_D given in Eqs. (<ref>) and (<ref>), respectively.
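The deflection formula itself is straightforward to evaluate; the sketch below takes the net force F_max as an input (it has to be built from the Casimir and dipole force differences of the equation above, evaluated along the trajectories, which we do not repeat here):

```python
E = 270e9    # Pa, Young's modulus of silicon nitride
W = 1e-6     # m, plate thickness
L = 1e-3     # m, plate side length

def plate_deflection(F_max, a):
    """Deflection at a distance a from the plate mid-point for a net point
    force F_max acting at the centre of the clamped plate."""
    return F_max * (L - 2 * a)**2 * (1 + 4 * a / L) / (16 * W**3 * E)
```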
The blue line in figure <ref> shows the magnitude of the deflection due to a maximal uncertainty δ d_1 = 0.48 in the initial placement of the test masses, as a function of the distance from the mid-point of the plate, a.
The maximal deflection is at the midpoint, δ z_max(a=0) = 0.012 (note that the x-axis is in millimeters while the y-axis is in femtometers, i.e. 10^-15 m).
Similarly, a net force can act on the plate due to a difference between the dipole moments.
The net dipole force at τ+2τ_a due to a maximal difference δθ = 0.17π (meaning θ_1 = 0, θ_2 = 0.17π) causes a deflection of the plate.
The magnitude of this deflection is plotted in figure <ref> in red, as a function of the distance a away from the mid-point of the plate.
The maximal deflection is 0.002.
Note that the imbalance δ d_2 at the time τ+2τ_a does not cause any deflection.
Using eq. (<ref>) with the new distances due to the deflection of the plate, we find the additional dephasing (compared to Fig. 11). For δ d_2 = 2.8 the additional contribution from the plate fluctuation is given by:
Δϕ_d^C ≈ 0.0001 ,
which is negligible compared to the phase found in figure <ref>, even though this is an overestimate obtained by taking the maximal deflection to be constant over time.
§.§ Dephasing due to thermal fluctuations in the plate
Thermoelastic noise, often referred to as just the thermal noise of a membrane, is caused by inevitable local temperature fluctuations around the equilibrium. Temperature variations across the surface of the membrane cause tension and thus vibration <cit.>. According to the fluctuation-dissipation theorem, there is a corresponding damping process, which is referred to as thermoelastic damping <cit.>.
The thermal motion of the membrane will cause the distance between the spheres and the plate, as well as across the spatial superposition state, to vary. The effect on the trajectory can be neglected as the root-mean-squared (RMS) amplitude is many orders of magnitude smaller than the distance between the spheres and the plate. However, since the phase of the spatial superposition state is much more sensitive to potential variations, it might be affected. We assume that the spheres are centered with respect to the membrane. Since the extent of the spatial superposition (Δ x≈ 30) is small compared to the size of the membrane (L=1), the slopes of even modes at the center will vanish when δ d_2=0 [
When δ d_2 = 2.8, as found in section <ref>, the even mode plate deflections (see figure <ref>) will be maximally of the order 300. The additional dephasing found in the same way as in section <ref> is of the order Δϕ_d^C ∼ 2, which effectively (limiting to effectively one oscillation period as will be discussed later on in this section)
has a negligible effect of Δϕ^C ∼ 10^-5.
], and their effect will be of at least second order in Δ x/L≪ 1.
The first two modes are illustrated in figure <ref>.
Furthermore, during one complete oscillation period of the membrane, the two components of the spatial superposition state will experience the same overall phase shift, thus reducing the effective interaction time of this dephasing mechanism by at least a factor of 2π/ω_nmτ (with τ the experimental time, and ω_nm the natural frequency of the plate for the mode (m,n)).
Further, in thermal equilibrium, the modes will be exponentially populated, and thus the effect of higher modes decreases.
We therefore consider only the first odd mode ω_12.
To estimate the root mean squared (RMS) magnitude of the plate deflection, δ z_th, due to thermal noise at the temperature T = 1K, we use the equipartition theorem: we equate the thermal energy to the potential energy of the plate, k_B T = Mω_12^2δ z_th^2, and solve for δ z_th:
δ z_th≤√(k_B T/Mω_nm^2) .
where k_B is the Boltzmann constant, T is the temperature of the membrane, and M=ρ WL^2 is the mass of the membrane. The natural frequencies of a membrane in a vacuum are given by <cit.>:
ω_nm = 1/2√(σ/ρ((n/L)^2+(m/L)^2)) ,
Here σ is the biaxial tensile stress, ρ is the density of the material, L is the side length, and n, m are the mode numbers. In particular, in the stress-governed regime (membrane), the resonance frequency is independent of thickness.
The resonance frequencies of a clamped square plate are given by <cit.>:
ω_nm = K_nm/L^2√(EW^2/12(1-μ^2)ρ) .
Here K_nm is a mode coefficient (K_12=74.296, see <cit.>), E = 270 the Young's modulus <cit.>, ρ=3.1 the density of silicon nitride, W=1 and L=1 the thickness and side length of the membrane, and μ=0.2 the Poisson ratio <cit.>.
Finally, we obtain ω_12=2π× 35.6.
In <cit.>, the frequency of a 1 ×1 silicon nitride membrane from Norcada has been measured to be ω_12=2π× 211613. To be more conservative and allow for flexibility in material choice and sample-to-sample difference of membranes, we stick to the previously used model of a clamped square plate in eq (<ref>),
which gives ω_12=2π× 35.6.
For the RMS distance fluctuation at T=1 eq. (<ref>) gives δ z_th = 298.
We include the geometric reduction of order Δ x /L ≈ 0.03 (Δ x = 30, L = 1).
The effective differential distance to the surface (analogous to δ d_2) is approximately 9.
Also including the fact that the interaction time for the asymmetric part of the interaction is limited to effectively one oscillation period 2π /ω_12 of the membrane, the effect on the random phases is further suppressed by a factor of 3.6× 10^4, corresponding to only an effective δ d_2 of 2.5· 10^-19. Thus the effect can be neglected (see figure <ref>).
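The chain of estimates above can be reproduced with a few lines (a rough sketch using the quoted mode frequency and SI values):

```python
import numpy as np

kB = 1.381e-23                   # J/K
T = 1.0                          # K
rho, W, L = 3100.0, 1e-6, 1e-3   # membrane density (kg/m^3), thickness, side length
M = rho * W * L**2               # membrane mass
omega_12 = 2 * np.pi * 35.6e3    # quoted first odd mode (rad/s)
tau = 1.0                        # s, experimental time

dz_th = np.sqrt(kB * T / (M * omega_12**2))      # ~3e-13 m, i.e. ~300 fm RMS
dz_geo = dz_th * 30e-6 / L                       # geometric factor dx/L ~ 0.03 -> ~9 fm
dz_eff = dz_geo * 2 * np.pi / (omega_12 * tau)   # one-period averaging -> ~2.5e-19 m
print(dz_th, dz_geo, dz_eff)
```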
We note that the silicon nitride membranes must be coated with a conductor, for example, gold, to effectively act as a Casimir screen, but the coating can be made much thinner than the silicon nitride layer, and we assume the mechanical properties to change only slightly.
§ CONCLUSION & DISCUSSION
The experimental protocol considered in this paper is that of two free-falling test masses that are separated by a conducting plate and which are free to move towards the plate due to the Casimir-and dipole interaction. This would mean that we have to prepare at least two traps for the experiments, one for releasing the nano-diamond, and the other for capturing.
The conducting plate suppresses the quantum electromagnetic interaction between the test masses, which relaxes the condition on the minimal distance to some extent.
The minimal distance was first introduced in order to keep the Casimir force subdominant compared to the gravitational force between the test masses.
However, in free fall there is still some (mass-dependent) initial separation required in order for the test masses to not fall into the plate within one second of experimental time.
The minimal distance such that the particle (with a 0.48 fluctuation) does not hit the plate, and the superposition size needed to obtain an entanglement phase of 0.10, are given in table <ref> for different masses.
Compared to the absence of the conducting plate, we are able to bring the test masses closer and as a result, allow smaller values of Δ x.
For a diamond test mass with an embedded NV-center of m=10^-14, instead of a minimal distance between the test masses of ≈ 147 in the absence of the conducting plate, we now initialize the system with a separation of 2d + W = 83; after 1 of experimental time, during which the test masses are attracted towards the conducting plate by the Casimir and dipole forces, the smallest separation of the test masses is approximately 33.
This setup allows for a smaller separation and thus a larger quantum gravitational interaction.
The entanglement phase is enhanced and the experimental parameters can be somewhat relaxed compared to the setup without the plate.
For a mass of 10^-14 a superposition size of 29 is enough for an entanglement phase of 10^-1.
In the absence of the conducting plate the same order can be reached for a superposition size of ∼ 100 (see figure <ref>).
Since the witnessing of entanglement requires repeating the experiment and performing the measurements many times, we also considered several types of imbalances in the initial conditions of the setup.
In appendix <ref> we found that the entanglement can be witnessed if Φ_eff>2γ t.
For a decoherence rate γ<0.05, the results for the maximum deviation in the initial conditions that is allowed in order for the entanglement to be witnessable are summarised in table <ref>.
We have also considered the dephasing due to the net force that the imbalances can exert on the plate and found that it was negligible (section <ref>).
In order for the experimental protocol proposed here to be realized one needs to very precisely know the initial positions.
Furthermore, as a worst-case scenario, we have considered here the dipole moment orientation to be towards the plate ± some fluctuation around this axis.
An opposite dipole moment orientation would result in a repulsive force between the plate and test mass and would thus require an initial positioning of the test mass close to the plate such that the total trajectory is approximately opposite compared to what is illustrated in figure <ref>.
Therefore, the most important requirements in this setup, and in determining the best initial separation, are controlling the dipole moment direction and mitigating the dephasing due to the dipole, which dominates any other phase.
In conclusion, adding the conducting plate in a parallel configuration enhances the entanglement signal, but one needs to control the dipole in order for this setup to yield better results.
In this paper we have estimated the strength of the dipole moments based on previous measurements with silica microspheres. Fabrication and engineering of custom test masses with reduced permanent dipole moments could be a promising approach towards mitigating these background effects.
§ ACKNOWLEDGEMENTS
MS is supported by the Fundamentals of the Universe research program at the University of Groningen.
A.G. is supported in part by NSF grants PHY-2110524 and PHY-2111544, the Heising- Simons Foundation, the John Templeton Foundation, the W. M. Keck Foundation, and ONR Grant N00014-18- 1-2370.
S.B. would like to acknowledge EPSRC grants (EP/N031105/1, EP/S000267/1, and EP/X009467/1) and grant ST/W006227/1.
AM’s research is funded by the Netherlands Organisation for Science and Research (NWO) grant number 680-91-119.
§ WITNESSING ENTANGLEMENT
In order to experimentally detect entanglement, we construct a witness 𝒲.
From the separability condition of non-entangled states, one can construct the Positive Partial Transpose (PPT) witness <cit.>.
This witness was found to be better suited for this type of experiment than, for example, a CHSH-type witness <cit.>.
The PPT witness gives a criterion for separable states, which results in the condition that a state is separable if its partial transpose has no negative eigenvalues (also called the Peres–Horodecki criterion).
Therefore, constructing the PPT witness:
𝒲 = (|λ_-⟩⟨λ_-|)^T_i ,
where |λ_-⟩ is the eigenvector corresponding to the minimal eigenvalue of ρ^T_i (the partial transpose of ρ), provides a way to test if a state is non-separable.
Because all separable states satisfy:
(𝒲ρ)
= (|λ_-⟩⟨λ_-|ρ^T_i) = (⟨λ_-|ρ^T_i|λ_-⟩) = λ_-≥ 0 ,
using the cyclic property and the invariance under partial transpose of the trace.
Therefore:
If (𝒲ρ) < 0 then ρ is non-separable.
This is a necessary and sufficient entanglement criterion for the qubit-qubit system considered here.
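The construction can be illustrated numerically; the following sketch (not part of the experimental protocol, and with an arbitrarily chosen example phase) builds the witness from the partially transposed density matrix of the ideal entangled state and verifies that its expectation value equals the negative eigenvalue λ_-:

```python
import numpy as np

dphi = 0.05   # example entanglement phase (rad), illustrative only
amps = 0.5 * np.array([1, np.exp(1j * dphi), np.exp(1j * dphi), 1])
rho = np.outer(amps, amps.conj())     # |Psi><Psi| on the two-qubit space

def partial_transpose(m4):
    """Partial transpose on the second qubit of a 4x4 matrix."""
    return m4.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

rho_pt = partial_transpose(rho)
evals, evecs = np.linalg.eigh(rho_pt)                 # ascending eigenvalues
lam_min, v = evals[0], evecs[:, 0]
witness = partial_transpose(np.outer(v, v.conj()))    # (|lambda_-><lambda_-|)^{T_2}

# Both numbers coincide and are negative, signalling entanglement.
print(lam_min, np.trace(witness @ rho).real)
```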
In a similar fashion as Ref. <cit.> we present a simplified witness based on the PPT criterion.
The final spin state given in eq. (<ref>) gives the density matrix ρ = |Ψ(t)⟩⟨Ψ(t)|.
The decomposition of this witness in a Pauli basis is: <cit.>
𝒲 = 1/4(1 ⊗ 1 - X ⊗ X - Z ⊗ Y - Y ⊗ Z ) ,
with I, X, Y, Z the identity matrix and the Pauli matrices.
Including the decoherence rate γ in the same way as was done in <cit.> and other works, and including the dephasing rate γ_d as given in section <ref>, the density matrix is given as:
ρ = 1/4[ 1 , e^-iΔϕ+iΔϕ_d - γ t , e^-iΔϕ- γ t , e^iΔϕ_d - 2γ t ;
e^iΔϕ-iΔϕ_d - γ t , 1 , e^-iΔϕ_d - 2 γ t , e^iΔϕ - γ t ;
e^iΔϕ - γ t , e^iΔϕ_d - 2 γ t , 1 , e^iΔϕ+iΔϕ_d - γ t ;
e^-iΔϕ_d - 2 γ t , e^-iΔϕ - γ t , e^-iΔϕ-iΔϕ_d - γ t , 1 ]
Here Δϕ is the entanglement phase given in eq. (<ref>), Δϕ_d is the dephasing described in section <ref> and γ is the decoherence rate.
The witness expectation value (𝒲ρ) can then be simplified: <cit.>
(𝒲ρ)
= (ρ) + 2 (ρ_12) + 2 (ρ_13) - 2 (ρ_14) - 2 (ρ_23) - 2 (ρ_24) - 2 (ρ_34)
= 1 - e^-γ t( sin(Δϕ)[ 1+cos(Δϕ_d) ] + cos(Δϕ_d)e^-γ t)
≈ 1 - e^-γ t[ 2 sin(Δϕ) + e^-γ t] + Δϕ_d^2/2 e^-γ t[ sin(Δϕ) + 1]
≈γ t- Δϕ .
where the first approximation was found by expanding for small Δϕ_d, and the
last line approximation is obtained by expanding around t=0 and keeping first order terms.
The witness expectation value is negative, and therefore the entanglement can be measured if ω > γ, with ω t = Δϕ.
A small dephasing rate is negligible in this description. The condition for entanglement can be rewritten in terms of the effective phase:
Φ_eff > 2 γ t
|
http://arxiv.org/abs/2307.04910v1 | 20230710212826 | Self-Diagnosis and Large Language Models: A New Front for Medical Misinformation | [
"Francois Barnard",
"Marlize Van Sittert",
"Sirisha Rambhatla"
] | cs.CY | [
"cs.CY"
] |
Improving healthcare quality and access remains a critical concern for countries worldwide. Consequently, the rise of large language models (LLMs) has sparked a wealth of discussion around healthcare applications among researchers and consumers alike. While the ability of these models to pass medical exams has been used to argue in favour of their use in medical training and diagnosis, the impact of their inevitable use as a self-diagnostic tool and their role in spreading healthcare misinformation have not been evaluated. In this work, we critically evaluate LLMs' capabilities through the lens of a general user self-diagnosing, as well as the means through which LLMs may aid in the spread of medical misinformation. To accomplish this, we develop a testing methodology which can be used to evaluate responses to open-ended questions mimicking real-world use cases. In doing so, we reveal that a) these models perform worse than previously known, and b) they exhibit peculiar behaviours, including overconfidence when stating incorrect recommendations, which increases the risk of spreading medical misinformation.
§ INTRODUCTION
Large language models (LLMs) have grown in popularity with an ever-expanding list of applications. Due to recent publicity, many have flocked toward ChatGPT as a result of its perceived human-like capabilities and accessibility <cit.>. Unfortunately, a symptom of this exaggeration of ChatGPT's abilities is a misplaced level of user trust <cit.>. As self-diagnosis via web searches is already a common practice <cit.>, applying intelligent systems to this domain is inevitable. Likewise, as we see global health worker shortages <cit.>, government entities and healthcare bodies may be tempted to employ LLMs as healthcare assistants or expertise replacements <cit.>.
Recent methods which evaluate performance on medical exams ask an LLM (in this case ChatGPT) to pick answers from a list, and conclude that ChatGPT passes official or variant forms of the United States Medical Licensing Exam <cit.> with scores near the passing threshold of 60% <cit.>. This creates an inaccurate portrayal of LLMs' capabilities. Further, <cit.> in Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models use ChatGPT's performance on medical licensing exams to argue for its use in medical student training. In this work, we set out to evaluate the performance of ChatGPT in a more realistic scenario and evaluate its robustness. Additionally, we investigate the means through which medical misinformation could propagate when ChatGPT is used for self-diagnosis. We do this by first assessing ChatGPT's responses to open-ended medical questions based on a modified subset of single-answer questions taken from the United States Medical Licensing Examination (USMLE) <cit.>. As a means of evaluating the robustness of ChatGPT, the aggregated data is then used to perform a sensitivity analysis.
Further, to simulate the experience of a general user, we ask two non-expert assessors, who were instructed through detailed guidelines and procedures, to categorize ChatGPT's answers as “Correct”, “Partially Correct”, “Incorrect”, or “Ambiguous”. Through this analysis, we find that a) the performance drops when the options are not provided to the LLM, b) LLMs fail to indicate uncertainty or provide disclaimers when answering medical questions, and c) an LLM lacks the awareness and understanding needed to validate information even when presented with its own responses.
§.§ Generalizable Insights about Machine Learning in the Context of Healthcare
This paper investigates how LLMs, specifically ChatGPT, inject medical misinformation when used for self-diagnosis. The generalizable insights from our work are as follows:
* Our key motivation is that asking LLMs (ChatGPT in this case) to select an option from a list does not reflect its ability to assist in any meaningful real-world task in healthcare. To mimic LLM use in the real world, we consider the case of self-diagnosis by a general user and perform response quality evaluation a) without providing specific options and b) information dropout for sensitivity analysis.
* We ask human assessors to rate response quality on a more granular scale to analyze response quality and explore its impact on contributing to a new form of medical misinformation. Particularly, we observe that LLMs, specifically ChatGPT, neglect any disclaimers or indication of uncertainty within its responses, regardless of correctness.
* Moreover, we ask ChatGPT to evaluate its own answers. It demonstrates a lack of confidence in its responses, meaning that LLMs may also not be good tools for double-checking information.
* We develop a repeatable approach which can be used for testing the capabilities of LLMs for healthcare diagnostic purposes on other datasets. Particularly, it allows for evaluations where coder reliability without consensus is desired.
Our intended audience is everyone. For clinicians, we present a novel approach for non-expert assessment for medical applications of LLMs. For machine learning researchers, we provide evidence for a need for further efforts in explainability and interpretability for LLMs – especially in medical applications – and to develop fact-checkers. For other individuals, we provide insight into the risks and shortcomings of using ChatGPT for self-diagnosis. Overall, our evaluation methodology provides a blueprint to evaluate responses to open-ended questions to ground future work in healthcare and beyond.
Please note, although the information may apply to other LLMs, for the remainder of this work, we will focus on and refer only to the LLM, ChatGPT.
§ RELATED WORK
§.§ ChatGPT
ChatGPT is a natural language processing model distinctive for its narrative response style to user input <cit.>. Millions have interacted with the platform since its public release in November 2022 <cit.>, and there has been a surge of academic studies focused on the various applications of ChatGPT <cit.>. Commonly cited are papers focusing on the accessibility and adaptation of ChatGPT to education and learning enterprises. A subsection of these studies has explored how ChatGPT performs on an array of examinations <cit.>. Papers investigating the utility of ChatGPT have also highlighted its potential as a self-studying tool, leveraging its ability to provide tailored responses and immediate feedback based on student needs <cit.>. Further, ChatGPT has shown promise in its ability to assist in research and academic writing by improving efficiency and compensating for weaknesses in researcher knowledge <cit.>. However, these potential use cases have raised significant ethical concerns regarding plagiarism, bias, transparency, and inaccuracy <cit.>, among others.
§.§ Large Language Models in Healthcare
Healthcare has been a significant area of focus for research into the utility of LLMs, and of the LLMs most often studied, ChatGPT has generally demonstrated superior performance <cit.>. Various applications of ChatGPT in healthcare education have been investigated, including its performance on licensing examinations, enhancement of personalized learning, and assistance with clinical reasoning and complex medical concepts for medical students <cit.>. Other areas of study involving ChatGPT in the healthcare setting have tackled inefficiencies and inaccuracies in clinical workflow, medical research, and diagnosis. Recent papers have cited numerous applications of LLMs in clinical practice through streamlining processes of documentation, triage, and clinical data management <cit.>. The potential for LLM diagnostic assistance has also been investigated, incorporating patient questionnaires and medical imaging <cit.>.
Notably, ChatGPT’s performance when offering personalized medical advice has been evaluated with regard to its accuracy, comprehensiveness, patient readability, and the inclusion of humanistic care <cit.>. It has been suggested that ChatGPT can simplify medical jargon and, thus, improve both the patient-doctor relationship and accessibility to complex research <cit.>. Notably, most studies relating to ChatGPT applications in healthcare are conducted in a “laboratory environment,” not in clinical settings, which may affect their applicability to the real-world context <cit.>. Similar ethical dilemmas to those mentioned in Section 2.1 are relevant to LLMs in the healthcare domain.
§.§ Health Misinformation in the Digital Age
Though healthcare is a hyper-specialized field, individuals have long sought health-related information outside the formal healthcare system <cit.>. A plethora of literature exists concerning the risks of health information-seeking behaviour <cit.>. Nevertheless, the internet has become one of the most popular resources by which individuals attempt to learn about health and investigate personal health conditions <cit.>. Certain sources of online health information, including ChatGPT, may be new, but health misinformation remains a long-standing issue, simply adopting novel forms.
Congruent with the rise of LLMs has appeared what <cit.> term an “AI-driven infodemic,” defined as a public health threat materializing from the application of LLMs for the production of misinformative contents, scientific articles, and fake news. Researchers are not only concerned with the possibility of LLMs quickly writing large amounts of human-like texts with malicious intent but also with the more subtle propagation of information lacking scientific ground of support <cit.>. Such occurrences could produce vast amounts of low-quality scientific information and literature in the health field. Moreover, considering user interaction, <cit.> found that knowledge provided by the user in their prompts affects ChatGPT’s answers by occasionally overturning the knowledge encoded in its model. This phenomenon can occur to the detriment of answer correctness, demonstrating how the contents and phrasing of user input hold weight in the accuracy of ChatGPT’s health-related answers <cit.>.
§ METHODS
In this section, we first describe the considerations and set up for the decided dataset. Then, we describe assessor procedures and guidelines and how the output answers from ChatGPT were processed. Finally, we introduce our prompt generation and testing methodology using ChatGPT. Our approach utilizes non-medical experts for assessment as non-medical experts would better represent the general populous and their interpretation of digital healthcare information. The overall process is segmented into two experiments:
* Baseline ChatGPT Answer Analysis
* ChatGPT Robustness and Ablation Study Analysis
§.§ Dataset Considerations and Setup
To best replicate a realistic scenario, a representative dataset was required. We focus on <cit.> data for our experiments since the USMLE test is used for medical licensing examinations in the United States. The data used in this work was extracted from the work by <cit.>.
The USMLE consists of three “Steps.” USMLE Step 1, Step 2, and Step 3 are typically taken by medical students between their second and third year of study, after their fourth year of study, and after their first year of residency, respectively <cit.>. Each of these three tests consists of multiple-choice, single-answer, no-justification questions. We focused only on USMLE Step 1, as a regular user trying to self-diagnose would not express themselves in the manner presented in USMLE Step 2 and Step 3 and would not be aware of the more complex medical conditions unless they are an expert. Additionally, we only considered questions that could be presented in a textual manner, which excluded questions with accompanying images, plots, or any form of visual media. Therefore, the resulting dataset contained 94 single-answer questions that would be used to generate prompts for ChatGPT. Figure <ref> displays a sample of the 94 questions used for prompt generation.
§.§ Assessor Procedures and Guidelines
To evaluate ChatGPT's responses, we relied on non-medical-expert assessments, as our goal was to simulate, as closely as possible, self-diagnosis by non-expert individuals. Non-experts assessing independently, without discussion or consensus among assessors, more realistically reflect real-world cases and help reveal the potential for misinformation. Although the USMLE Step 1 test contains expert material, non-experts frequently encounter such material when self-diagnosing. Accordingly, non-expert assessment was used for both experiments introduced in Section <ref>. Further details about these experiments are discussed in subsection <ref>.
Each assessor was provided with written procedures to guide how they categorized the data. These guidelines are presented in Appendix <ref>. The assessors were instructed to categorize the answers as “Correct” (C), “Partially Correct” (PC), “Incorrect” (I), or “Ambiguous” (A). Specific instructions and examples were provided for each category to guide the assessor's understanding and methodology. See Appendix <ref> for more details and examples.
Assessors were also instructed to utilize web searches to understand the topic matter to the best of their ability, as individuals self-diagnosing would use similar methods for research. However, unlike the typical individual self-diagnosing, the guidelines limited the assessor's web searches to reputable sources such as post-secondary institution pages, research papers, medical textbooks, and government resources. Finally, assessors were instructed to prioritize their intuition and reasoning over the guidelines when appropriate.
§.§ Prompt Generation
§.§.§ Baseline ChatGPT Answer Analysis
The first experiment of our approach established the foundation for all testing. To assess the ability of ChatGPT, we relied on the <cit.> API. To best simulate user interaction, each question was modified to yield an open-ended question format. This was done by removing the alphabetic answer options, adding the statement "In one sentence, answer the following question:" and replacing the "Which of the following" part of each question with "What." An example of this process is displayed in Figure <ref>, showing how the question in Figure <ref> would have been modified to generate the prompt. Both experiments utilized open-ended questions as their foundation. Finally, after passing the open-ended prompts through ChatGPT, a subset of questions was produced by filtering only questions whose ChatGPT answers were categorized as “Correct” by both assessors.
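To make the transformation concrete, the following minimal Python sketch illustrates the prompt construction described above; the exact string handling used by the authors is an assumption, and the alphabetic answer options are assumed to have been stripped beforehand.

import re

def make_open_ended(question_stem):
    # Sketch of the prompt construction described above (hypothetical helper,
    # not the authors' exact code); the multiple-choice options are removed
    # before this step.
    stem = re.sub(r"Which of the following", "What", question_stem)
    return "In one sentence, answer the following question: " + stem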
§.§.§ ChatGPT Robustness and Ablation Study Analysis
The second experiment was built on the foundation established in the first experiment. This experiment used the subset of aggregated questions categorized as “Correct” by both non-expert assessors. Using this subset of data, we generated new prompts by iteratively removing a sentence from the prompt and passing the modified prompt to ChatGPT. This process is repeated across all of a question's sentences, excluding the final question sentence indicated by the "What...?" structure. Given the example question in Figure <ref>, Figure <ref> shows an example of one iteration of the iterative sentence dropout where the first sentence has been removed.
§.§ Testing Methodology
Figure <ref> summarizes our testing approach and methodology.
As shown in Figure <ref>, an open-ended prompt was generated for the N_q=94 questions. Open-ended prompting was selected as individuals searching for medical information typically do not include a list of potential solutions. We denote this dataset as 𝒟 made up of x_j questions, o_j answer options, and correct answer y_j, such that 𝒟={x_j, o_j, y_j }_j=1^N_q.
Each question prompt, x_j, was passed to ChatGPT via the <cit.> API as a unique session to ensure no continuity between questions. All testing was done between February and March 2023 using the “text-davinci-003” model. We selected the “text-davinci-003” model as it promised the best performance and allowed for parameter adjustments. The temperature for each ChatGPT API call was set to zero to make the answers more focused and deterministic. The ChatGPT model selection and parameters are summarized in Appendix <ref>.
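As an illustration, a minimal sketch of such an API call using the legacy openai Python client that was current at the time of testing is given below; the max_tokens value is an assumption, since the paper does not report it.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask_chatgpt(prompt):
    # One independent completion call per question (no shared conversational state).
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,    # deterministic, focused answers, as described above
        max_tokens=256,   # assumption: not reported in the paper
    )
    return response["choices"][0]["text"].strip()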
After collecting and processing the answer from ChatGPT, a non-expert assessor independently evaluated each ChatGPT answer, ŷ_j. The assessors were instructed to provide a label z ∈𝒵; 𝒵 = {C, PC, I, A} denoting whether the answer, compared to the ground-truth, y_j, was “Correct”, “Partially Correct”, “Incorrect”, or “Ambiguous”, respectively.
The assessment process can be expressed as a function f^k(ŷ_j,y_j)=z_j^k, for the k^th assessor. After the k non-expert assessors finished the categorization process, their categorizations were aggregated. Specifically, we defined a subset, ℬ, consisting of questions, x_j, categorized as “Correct”, z_j^k=C, by all k assessors. The process can be expressed as follows:
ℬ={x_j|z_j^k=C ∀ k ∈{1,2 }}
Completing the “Correct” answer aggregation process concluded the first experiment – Baseline ChatGPT Answer Analysis.
Our second experiment, ChatGPT Robustness and Ablation Study Analysis, was built upon the dataset and assessments completed during the first experiment. We conducted an ablation study over the questions within the subset ℬ via iterative sentence dropout. The ablation study aimed to simulate how a self-diagnosing individual may neglect to include certain information within their health-related searches. As a means of simulation, we performed one iteration per sentence of an open-ended prompt x_j∈ℬ. In the i^th iteration, we removed the i^th sentence from the prompt before running the modified prompt, denoted x_j ∖ i, through ChatGPT. Every sentence was iteratively removed and processed in this way, with the notable exception of any sentence containing the “What...?” sentence structure, which indicates the question sentence that is mandatory for ChatGPT answer generation. We present this process as pseudocode in Algorithm <ref>.
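Since the pseudocode is referenced but not reproduced here, the following Python sketch outlines the iterative sentence dropout; splitting sentences on “. ” is a simplification, and the check used to identify the question sentence is an assumption.

def sentence_dropout_prompts(prompt):
    # Generate the ablated prompts x_j \ i by removing one sentence at a time,
    # never removing the mandatory question sentence ("What ... ?").
    sentences = [s.strip() for s in prompt.split(". ") if s.strip()]
    ablated = []
    for i, s in enumerate(sentences):
        if "What" in s and s.endswith("?"):
            continue  # keep the question sentence
        ablated.append(". ".join(sentences[:i] + sentences[i + 1:]))
    return ablated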
After processing the iterative sentence dropout prompts through ChatGPT, non-expert assessor categorization was required. Identically to the first experiment, non-expert assessors were instructed to categorize ChatGPT's answers for the sentence dropout ŷ_j^'. Therefore the k assessors would process the output results as expressed by the function f^k(ŷ_j^',y_j)=z_j^'k, where z_j^'k is the categorization by assessor k for answer ŷ_j^'.
Similar to the first experiment stage, we completed further answer aggregation for answers categorized as “Correct” by the non-expert assessors. Therefore, we define a new subset, 𝒲, consisting of iterative sentence dropout questions x_j ∖ i, categorized as “Correct”, z_j^'k=C by all k assessors. Therefore, this aggregation process can be expressed as follows:
𝒲={x_j ∖ i|z_j^'k=C ∀ k ∈{1,2 }}
Completion of final question aggregation into subset 𝒲 marks the conclusion of the second experiment.
§ RESULTS
§.§ Experiment 1: ChatGPT Responses on USMLE Step 1 Open-Ended Prompts
Our first experiment explored the 94 open-ended prompts, x_j. Each prompt was individually processed with the ChatGPT API <cit.> on the “text-davinci-003” model. After independent assessor categorization, we obtained the results displayed in Table <ref>.
We chose to have ChatGPT evaluate and categorize its answers as a means of comparison. For self-evaluation purposes, we utilized the “gpt-3.5-turbo-0301” model as this model was the best and “most capable” model at the time of testing <cit.>. Additionally, as we did not require parameter fine-tuning and the “gpt-3.5-turbo-0301” model was trained on more data than the “text-davinci-003” model used for question answering, we wanted to give ChatGPT the best chance for success. Figure <ref> shows the categorization results of the assessors and ChatGPT's self-assessment.
As we are only concerned with answers categorized as “Correct,” we chose to simplify the results shown in Table <ref>. This simplification was accomplished by aggregating answers categorized as “Correct” by both assessors into one category and then summarizing all other combinations of categorization into “Other.” We determined that only nine out of the 94 questions were categorized as “Correct” by both assessors. Table <ref> displays the confusion matrix with these results.
We considered inter-assessor agreement (or intercoder reliability <cit.>) to evaluate our assessor reliability and agreement. For this study, we relied on simple agreement, denoted by ρ.
To determine ρ, we utilize ℬ^', the set of questions on which the assessors agree on a category other than “Correct”, such that:
ℬ^'={x_j| z_j^k = z_j^ℓ, z_j^k≠ C ∀ k,ℓ∈{1,2 } and k ≠ℓ}
Therefore, ρ can be computed as follows:
ρ = 100( (| ℬ| + | ℬ^'|)/| 𝒟|)
= 100( (9 + 69)/94)
= 82.98%
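Equivalently, the simple agreement can be computed directly from the two assessors' labels, as in the following sketch (the label encoding is an assumption).

def simple_agreement(labels_1, labels_2):
    # labels_1, labels_2: dicts mapping question id -> category in {"C", "PC", "I", "A"}.
    # rho is the percentage of questions on which the two assessors agree,
    # i.e. 100 * (|B| + |B'|) / |D| in the notation above.
    agree = sum(1 for q in labels_1 if labels_1[q] == labels_2[q])
    return 100.0 * agree / len(labels_1)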
§.§ Experiment 2: ChatGPT Robustness and Ablation Study Analysis
Our second experiment relied on the results of the first experiment. We performed an ablation study via iterative sentence dropout on the ℬ dataset. After processing the sentence dropout prompts through ChatGPT, non-expert assessors independently categorized the answers producing the results displayed in Table <ref>.
As a means to evaluate the robustness and sensitivity of ChatGPT, we had ChatGPT independently categorize its own answers. As previously discussed in Subsection <ref>, we used the “gpt-3.5-turbo-0301” model for the self-assessment. Figure <ref> displays the results of the categorization for the assessors and ChatGPT.
Similarly to the first experiment, the values presented in Figure <ref> were further simplified by aggregating the results into one of two categories – “Correct” or “Other.” All questions in 𝒲 were grouped into the “Correct” category and the remaining questions were placed in the “Other” group. Table <ref> displays this simplification in a confusion matrix form.
For consistency, we calculated the simple agreement, ρ, for these results. To determine ρ we rely on 𝒲^' where the assessors were in agreement but the answers were not categorized as “Correct” by all:
𝒲^'={x_j ∖ i|z_j^'k = z_j^'ℓ, z_j^'k≠ C ∀ k,ℓ∈{1,2 } and k ≠ℓ}
Therefore, ρ can be computed as follows:
ρ = 100( (| 𝒲| + | 𝒲^'|)/| ℬ|)
= 100( (36 + 14)/56)
= 89.29%
§ DISCUSSION
In this section the primary focus is on the drawbacks and risks involved with using ChatGPT for (self-)diagnosis. Our aim is not to discredit self-diagnosis, as we understand that it is critical to healthcare and society as many individuals lack easily accessible healthcare, seek communities and shared understanding, among numerous other reasons.
§.§ Concerns of Misuse
ChatGPT has received great praise for passing medical examinations <cit.>. From these results, researchers have proposed employing ChatGPT in areas such as medical education and medical report creation. However, ChatGPT's ability to answer exam questions does not necessarily translate to true medical understanding and skill. Our results demonstrate that ChatGPT answers only a small subset (17/94)
of the questions correctly across two independent evaluators (see Figure <ref>).
While <cit.> evaluate performance on open-ended questions, they effectively a) only use a binary categorization for the responses (correct/incorrect) since they censor any responses which are ambiguous to conclude that ChatGPT passes the exam, and b) use a consensus process for evaluators instead of a complete independent assessment. We note that these lead to a far more optimistic evaluation of ChatGPT responses. On the contrary, we do a more critical and fine-grained analysis of ChatGPT responses via a completely independent assessment methodology by human evaluators.
Unlike practicing clinicians, ChatGPT is not tested and vetted on its ability, has not received accredited medical education, licensing or approval to work, nor has it proven the understanding or skillset required to substantiate its claims. We remain unable to verify the validity of the resources ChatGPT uses to generate its answers because the data on which it has been trained is unknown. If clinicians make mistakes, they are liable and could face severe consequences such as being charged with medical malpractice or losing their license. ChatGPT has no such liability; should it make a mistake, the lack of an accountability mechanism leaves little motivation to make the model more fit for use in medical contexts.
§.§ Healthcare Misinformation
Our fine-grained scale, which makes a distinction between "Correct" and "Partially Correct," reveals that the majority of ChatGPT responses are incorrect, closely followed by partially correct responses, as shown in Figure <ref>. In the healthcare context, the model's performance in its current form is extremely concerning but not surprising, since these models do not reason about the information as clinicians do.
Qualitatively, we observe that ChatGPT fails to indicate any uncertainty in its answers, offering either a direct response to the prompt or priming responses with statements of “most likely” or “likely.” ChatGPT's form of answering questions can be explained with reference to two structures: that of the English language and that of ChatGPT's prompt engineering. Firstly, matter-of-fact statements made by ChatGPT can be interpreted as a symptom of how we provide answers in the English language by re-stating the key information presented in the question at the beginning of our answer. ChatGPT's use of this formality often communicates certainty in its answers where the answer is incorrect. Secondly, ChatGPT's prompt engineering involves the probabilistic filling of slots to produce an outcome rather than crafting a response from scratch <cit.>. The use of pre-determined sentence structures also contributes to the projection of certainty. Failure to indicate any doubt is particularly concerning with regard to lay user interaction, where a baseline of medical knowledge and the ability to decipher between true and false information cannot be assumed.
Our results substantiate the concerns raised by <cit.> regarding the subtle forms through which LLMs such as ChatGPT may disseminate health misinformation. Though various studies have suggested that ChatGPT can be used for educational purposes <cit.>, namely through supporting the training of medical students and healthcare workers, they fail to aptly consider the negative effects of ChatGPT's inaccuracies in this use case. Suppose ChatGPT is used to train medical students and healthcare professionals in its current state. In that case, it may prevent these individuals from absorbing accurate information and, if left unchecked, facilitate misinformed conceptual understandings. Evidently, there is a risk that this phenomenon may adversely affect the treatment offered to patients by a healthcare provider who has used ChatGPT to supplement their knowledge.
Numerous works have suggested using AI as a means of combatting misinformation <cit.>. However, as discussed, using LLMs such as ChatGPT introduces a new form of medical misinformation. Even recently, despite efforts to use deep learning models in debunking misinformation about COVID-19, there is a lack of research on how to help the general public detect misinformation and improve their trust at the same time <cit.>. As ChatGPT is a black-box system trained on an unreleased dataset, transparency and explainability are paramount to safe and trustworthy applications in healthcare. Machine learning research focused on incorporating expert reasoning is another alternative to counter the unintended addition of medical misinformation into these types of systems.
§.§ Overt Optimism and Ablation Study Insights
This paper aimed to evaluate the robustness and feasibility of applying LLMs, specifically ChatGPT, to healthcare diagnostic systems. In addition to the human evaluators, we also asked ChatGPT to grade its responses. Adding ChatGPT as an assessor helps us analyze whether it can be used for double-checking facts.
This initial comparison displays a common shortcoming of ChatGPT – its overly optimistic disposition. In the first experiment (Figure <ref>), the human assessors tend to err on the side of caution and categorize most answers as “Incorrect”. ChatGPT takes a very optimistic approach as 57.45% of the answers it evaluated were marked as “Partially Correct”. This overly optimistic tendency can lead to misleading and missed diagnoses (False Negatives), which can have dire consequences.
We see this trend continue in the evaluation of our second experiment results. As shown in Figure <ref>, assessors one and two categorized 64.29% and 75.00% of answers as “Correct”, respectively. On the other hand, the traditionally optimistic ChatGPT categorized only 50.00% of answers as “Correct”. Although we previously established ChatGPT as overly optimistic, in this case, we see that ChatGPT is optimistic when there is higher risk and more cautious in cases where that optimism is more welcome. This is the opposite of what we need from a healthcare application.
As an example, we can consider the answers provided for USMLE question number 108 displayed in Figure <ref>. When evaluating iterative sentence dropout for the question shown in Figure <ref>, some of the responses that both assessors categorized as “Correct” were categorized as “Partially Correct” by ChatGPT. A sample of some of the answer evaluations are summarized in Table <ref>.
From the sample answer categorizations shown in Table <ref>, we can see how ChatGPT lacks human intuition and is cautious when it is unnecessary. In this case, we see that ChatGPT's answer includes information in addition to the correct answer regarding exercise and healthy diet. This case highlights the importance of human decision making and intuition. Even though the assessor guidelines indicate that answers with additional content should be considered as “Partially Correct,” that reasoning should be overruled anytime human intuition and understanding are justified. See Appendix <ref> for more information.
Based on human experience, a family doctor telling a patient to regularly exercise and maintain a healthy diet is nothing out of the ordinary. Regardless of ailment, this sort of advice is commonplace for many patients. Accordingly, both assessors justified their answers along these principles, as this additional information, while unnecessary, does not logically negate the correctness of the answer. Conversely, ChatGPT approaches this answer with caution and categorizes the answer as partially correct in both cases. This discrepancy is just one example of many where ChatGPT is unnecessarily cautious at times and yet overly optimistic when risk is elevated.
As reflected in Table <ref>, the removal of a single sentence in the ablation study impacts the perceived correctness of ChatGPT's answer. Therefore, we can see that even minimal modifications impact ChatGPT's ability to answer medical questions. This is especially concerning as these questions are written to be posed to medical students, who have an assumed knowledge base that would not be found within the general population. It can therefore be assumed that lay users would not phrase their concerns in this manner.
§.§ Limitations
Although the USMLE Step 1 dataset was used to simulate an individual self-diagnosing, this dataset may still be too explicit compared to typical web searches. We attempted to remedy this concern within our ablation study, yet we believe that single-sentence dropout likely still provides more information than what an individual would provide. A valuable future extension of this work would be to gather samples of real user searches relating to self-diagnosis, as this would allow direct application rather than reliance on synthetic data. Next, by design, LLMs tend to do well on tasks that are well represented in the training data; therefore, niche problems may see even lower performance <cit.>. This also raises an equity issue, where ailments or questions related to underrepresented groups may suffer from poor performance. While we do not explicitly study this task, future work can consider variations across medical specializations.
§ CONCLUSIONS
While large language models make headlines for passing medical licensing exams, and are consequently being considered as candidates to train the next generation of healthcare professionals, we find that LLMs' abilities are (understandably) modest at this time. More importantly, misplaced trust in these systems can lead the general public to rely on them for self-diagnosis. Our critical analysis of ChatGPT's performance on a medical licensing examination reveals a more humbling picture, where we find that ChatGPT is optimistic when there is higher risk, while it is more cautious in cases where that optimism is warranted. Unfortunately, this can lead to new forms and modes of healthcare misinformation. This examination therefore opens up challenges for ML researchers to build AI-powered models that can reason and respond responsibly, while providing ample insights for clinicians and the general public about their decision-making.
§ ASSESSOR PROCEDURES AND GUIDELINES
§.§ Overview
This document aims to provide the information needed to help an individual (the assessor) review answers from ChatGPT (an LLM) based on USMLE (United States Medical Licensing Examination) questions. The goal of this task is to, to the best of your ability, categorize ChatGPT’s responses to the open-ended USMLE questions.
§.§ Categorization
For each question, ChatGPT will provide an answer. The assessor must then group the answer into one of four categories. The groups are presented, with their unique character identifier specified in brackets, as follows:
* Correct (C)
* Partially Correct (PC)
* Incorrect (I)
* Ambiguous (A)
§.§ General Guidelines
Oftentimes, non-expert users self-diagnosing will rely on web searches for research. Therefore, the categorization task may require additional research and web searches by the assessor to confidently make a decision. When searching for terminology, make sure to rely only on reputable sources. Examples of such sources include but are not limited to:
* Post-secondary institution pages (with a .edu URL)
* Medical School pages such as Johns Hopkins
* Research papers and medical textbooks
* PubMed
* ScienceDirect
* Etc.
* Government and Health Institutions
* National Institute of Health (and other .gov pages)
* Mayo Clinic
* Etc.
When additional resources are required, the goal of the assessor is not to gain an exhaustive understanding of the topic; rather, an understanding that is “good enough” is sufficient to determine, with confidence, which category is most appropriate. Luckily, in practice, this task is far easier than it may appear, as searches consisting of the relevant terminology tend to surface useful and trustworthy resources. Often it may be useful to search for “<correct answer> vs <ChatGPT answer>”, where <correct answer> is the correct multiple-choice answer and <ChatGPT answer> is the primary term or topic that ChatGPT presents as its answer. Finally, if you absolutely must, Wikipedia may be used as a sanity check, but avoid using Wikipedia as the sole factor in decision-making.
§.§.§ Additional Considerations
When determining the "correctness" of an answer, there are two rules that overrule other factors:
* If categorization is possible or aided by sound intuition, rely on human intuition for decision making.
* Some prompts have "human" factors in play such as questions that ask for "What is the most appropriate response", or similar. In these cases, human intuition and implicit understanding may overrule other guidelines presented in this document.
* If the answer includes an answer that is present among the multiple-choice options, it is automatically “Incorrect.”
* Although the prompt is presented to ChatGPT as an open-ended question, discernment is critical to diagnostics. Therefore, as we are aware of the intended “Correct” answer, the inclusion of other answers in the response will render it “Incorrect”, as they would be considered “Incorrect” if an individual were to write the USMLE test as a stand-alone examination.
§.§ Categories
In the following subsections, we will discuss each category and what level of “correctness” is required for that answer to be categorized as such. For each category, examples of theoretical ChatGPT outputs will be provided with an explanation for why the selected category is correct. Additional information may be included in the tables but will be explained as needed. Each of these examples will be based on the example in Figure <ref>:
§.§.§ Correct (C)
Answers labelled as “Correct” (C) must be obvious and correct within the context of the provided solution. As ChatGPT is prompted to explain, there are instances where ChatGPT may provide an initial answer in the first couple of sentences that is vague. Still, the explanation can aid in creating a distinction between categories. Lastly, for an answer to be labelled as correct, the assessor must rely on intuition and common sense to determine if the answer is presented to an acceptable level of correctness to be categorized as such.
Given the provided pet example shown in Figure <ref>, Table <ref> provides examples of what would be considered “Correct” answers.
Examples of “Correct” answers with their categorization type, example answer, and categorization rationale:
* Type: Direct, correct answer. Example ChatGPT answer: “The child’s pet is most likely a dog. This is because dogs are the most likely pets to be taken for walks and are known for enjoying fetch.” Categorization rationale: In this case, the ChatGPT answer is direct and clear. The first sentence is correct, and the explanation provides reasoning without modifying the answer.
* Type: A correct answer with extensions. Example ChatGPT answer: “The child’s pet will most likely be a dog. This is because dogs are the most likely pets to be taken for walks and are known for enjoying fetch. The child may also enjoy raising cats or birds. While pets provide different challenges, they enjoy interactions with their owners and their own means of physical exertion.” Categorization rationale: Although the theoretical ChatGPT answer provides additional extensions, this does not negate the correctness of the answer. The correct answer is still provided in this case, and the additional information does not nullify that answer. ChatGPT sometimes adds additional details, but these should be considered extensions as they do not invalidate the original answer.
* Type: A correct answer with specific examples. Example ChatGPT answer: “The child’s pet is most likely to be a dog. As indicated by the hair, the dog may be a breed such as a Maltese or Poodle. These dogs have hair instead of fur coats typically associated with lower shedding and hypoallergenic breeds.” Categorization rationale: In this case, the theoretical output correctly identifies the answer while providing additional unrequested information. Therefore, adding examples does not invalidate the correctness of the original statement. Although uncommon, the examples, in this case, do not change the overall statement and answer produced by the ChatGPT output.
* Type: A correct answer with a synonym. Example ChatGPT answer: “The child’s pet is most likely a Canis lupus familiaris. Canis lupus familiaris is a mammal often encountered as a domesticated pet. This is because Canis lupus familiaris is the most likely pet to be taken for walks and is known for enjoying fetch.” Categorization rationale: While not the exact terminology, the theoretical ChatGPT answer utilizes the binomial name of dogs rather than explicitly stating “dog.” ChatGPT may include alternative nomenclature that is still correct, as they mean the same thing as the identified answer.
§.§.§ Partially Correct (PC)
Answers labelled as “Partially Correct ”(PC) are close to being fully correct but fall short of being entirely accepted as correct. This distinction typically occurs in one of three ways:
* Broader scope, general, answer that includes the correct answer.
* A subset of the correct answer.
* The correct answer with additional answers.
To better understand these three cases, we will utilize a general example with answers that could be provided that would make the answer partially correct. Please consider the general example question shown in Figure <ref> as a reference for understanding the rationale for grouping answers as “Partially Correct” shown in Table <ref>.
Examples of “Partially Correct” answers with their categorization type, example answer, and categorization rationale:
* Type: Broader Scope. Example ChatGPT answer: “The child’s pet is a mammal. The pet is indicated to have hair which is common among household pets such as dogs, cats, rodents, and many others.” Categorization rationale: While a dog is a mammal, the example answer is far too broad of a scope to be considered a correct identification. This instance would be partially correct as the provided answer is logically sound and void of mistakes yet is far too broad and general to be accepted as correct.
* Type: A subset of the correct answer. Example ChatGPT answer: “The child’s pet is likely a Maltese or Poodle, as indicated by the presence of “hair” rather than “fur.” Additionally, Maltese and Poodle dog breeds are intelligent breeds known to be great companions and require physical exercise. Taking these dogs for walks and playing fetch are excellent activities for stimulating and exercising these dog breeds.” Categorization rationale: Although Maltese and Poodle are known dog breeds, the specificity would make this answer partially correct. Similar to the broader scope case, this answer is both logically sound and void of errors, but the overly specific answer is considered a subset of the correct answer. In other words, a Maltese or Poodle is a type of dog but does not state “dog” as an answer and would be partially correct. While this example may seem extreme, in the healthcare domain, an overly-specific answer that is a subset of a higher-level diagnosis may also be incorrect or a decision that cannot be made with specialist aid.
* Type: A correct answer with additional answers. Example ChatGPT answer: “The child's pet is likely a dog or a cat. Dogs and cats are known to be great pets for children and adults alike. While uncommon, both dogs and cats can be trained to play games such as fetch and taken on outdoor walks.” Categorization rationale: In this case, although the correct answer, “dog,” is present, an additional (incorrect) answer, “cat,” is also present. While the correct answer is present, the additional answers prevent this answer from being considered fully correct.
§.§.§ Incorrect (I)
The “Incorrect” (I) category is reserved for incorrect answers. Answers that may be incorrect for multiple reasons, but the general understanding should be “if an answer falls into neither the `Partially Correct' nor `Correct' category, the answer is `Incorrect'.” Additionally, as the original question used to generate the open-ended prompt is multiple choice, if the provided answer is another answer (provided in the multiple-choice question) other than the correct answer, it is automatically considered incorrect. As a non-exhaustive list, examples of categorization into the “Incorrect” category are provided in Table <ref>.
Examples of “Incorrect” answers with their categorization type, example answer, and categorization rationale:
* Type: Explicitly Incorrect. Example ChatGPT answer: “The child’s pet is most likely a cat. Cats are known to be great pets and interact well with children. Although uncommon, cat owners sometimes will take their cats on walks and teach their cats how to play fetch.” Categorization rationale: In this case, ChatGPT’s solution is explicitly incorrect. While the rationale is concrete, the most likely and desired answer “dog(s)” is more typically representative of descriptions provided in the question.
* Type: Incorrectness caused by symptom identification. Example ChatGPT answer: “The child is most likely to experience a sense of companionship and an improved sense of responsibility. Owning and taking care of pets, such as dogs or cats, requires a great deal of responsibility and the child can learn responsibility working with a pet. Likewise, working with pets can provide a sense of companionship between the child and their pet.” Categorization rationale: In this case, ChatGPT focuses on a symptom or a case that is a result of the correct answer. While the answer does discuss dogs in the context of pet ownership, the main focus is on aspects concerning companionship and responsibility. While a similar case may be partially correct, this instance would be considered incorrect as the focus is on companionship and responsibility - symptoms that are not solely associated with the root cause. The correct solution is not present or adequately represented. Unlike the partially correct example, this example is incorrect as the solution is also void of any subset of the correct answer.
* Type: Incorrect due to irrelevance. Example ChatGPT answer: “The child is most likely to do athletics due to their enjoyment of taking their pet for walks and playing fetch. As these activities require physical exertion, the child is cultivating an active foundation and would likely enjoy athletic activities.” Categorization rationale: This answer is explicitly incorrect as the theoretical ChatGPT answer misinterpreted the question and provided an irrelevant answer. This answer does not address the proposed question and focuses on an irrelevant topic.
§.§.§ Ambiguous (A)
The “Ambiguous” (A) category is the final category. Any answer that the assessor is absolutely uncertain about should be categorized as “Ambiguous”. The ambiguous category should be reserved and considered as a last-resort option. Finally, as per its namesake, if an answer is truly ambiguous, it should be labelled as such.
§.§ Evaluation Steps
In this section, we will outline what is expected of the assessor for each question. We will use this section to provide general steps and best practices to obtain the highest degree of agreement across assessors.
§.§.§ Setup
When grouping solutions, it is vital that a few guidelines are followed:
* This task is an individual task.
* Please ensure that the marking decisions are made by yourself alone and not influenced by a discussion with other markers.
* Ensure you only view your own spreadsheet sheet while assessing. Refrain from looking at others’ evaluations while completing this activity.
* Do not ask for help from third-party entities or groups (such as clinicians or medical experts) unless instructed.
* It is vital that all the categorization activities are decisions made by a single assessor alone.
* Ensure that you understand the categories and what dictates how an answer should be categorized.
With the guidelines in mind, you will find the spreadsheet software presented as shown in the example in Figure <ref>.
As shown in the example in Figure <ref>, you will need to consider the contents on the screen. For a detailed understanding, a summary of the contents are as follows:
* q_num
* The question number on the USMLE test.
* This value can be ignored by the assessor.
* q_text
* The cells in this column include the original question text used to generate the prompt for ChatGPT. For these tests, multiple-choice, single-answer questions were selected and modified to be presented as open-ended questions.
* Note: Before presenting the prompt to ChatGPT, each question would have had the multiple-choice answers removed and the last sentence modified to present the prompt as an open-ended question.
* correct_answer
* This column displays the correct answer corresponding to the letter options provided in the multiple-choice question.
* In the case of the provided example, the correct answer is “B” or “Example Answer B.”
* usmle_1_a_category
* The cells in this column are where the assessor should indicate their decided category.
* In the case of the example, the assessor, would have determined that the answer output by ChatGPT was incorrect and would indicate as such with “I.”
* usmle_1_a_0
* The cells in this column present the answer provided by ChatGPT.
* The marker will primarily focus on the content in this cell and compare those results to the correct answer to determine the solution category.
§.§.§ Evaluation
After ensuring your setup is complete, you will be ready to evaluate the answers. Please familiarize yourself with the provided example presented in Figure <ref>, as references will be made to the names of columns presented in that section.
For each answer, the following steps are recommended to achieve accurate groupings while avoiding wasted time.
* Read the last sentence of the question (in q_text).
* Although the prompt provided to ChatGPT will be slightly modified, understanding what ChatGPT was asked for will significantly aid in the categorization process.
* Read the first two-to-three sentences of the solution (usmle_1_a_0) and check if the correct answer is directly provided.
* Informally, determine the answer’s category based on the initial observed content.
* Read through the entire solution (usmle_1_a_0), verifying that the primary presented solution is consistent throughout the answer.
* There are instances where ChatGPT may appear to provide the correct answer in the first sentence, but further reading indicates this not to be the case.
* If further information is required, perform a web search on the presented solution and the correct solution.
* Utilizing web information, intuition, and understanding to determine the most appropriate category.
* Repeat from Step 1 for each question.
Additional Notes:
* If you find that you are stuck on a question for any reason, highlight the question cell and continue categorizing other answers. You can always come back later.
* Do your best to follow the logic and reasoning methodology outlined in this paper.
§.§.§ Completion
Upon completion please report back to the primary contact who will be indicated to you through external communication. Depending on the results and inter-assessor variability, further discussions may be requested to maximize consistency across assessors.
Thank you for reading through this document and we appreciate your help in reviewing this data.
§ CHATGPT MODEL SELECTION PROCESS AND MODEL PARAMETERS
In this study we determined that the “text-davinci-003” model would be best suited for our purposes. As of April 2023, the improved GPT-3.5 and GPT-4 models do not provide fine-tuning options and GPT-4 is currently only in a limited beta stage <cit.>. As a means of repeatability, all of our API calls were made between February and March 2023 using the parameters summarized in Table <ref>.
|
http://arxiv.org/abs/2307.04431v1 | 20230710091152 | PSO-Based Optimal Coverage Path Planning for Surface Defect Inspection of 3C Components with a Robotic Line Scanner | [
"Hongpeng Chen",
"Shengzeng Huo",
"Muhammad Muddassir",
"Hoi-Yin Lee",
"Anqing Duan",
"Pai Zheng",
"Hongsheng Pan",
"David Navarro-Alarcon"
] | cs.RO | [
"cs.RO"
] |
PSO-Based Optimal Coverage Path Planning for Surface Defect Inspection of 3C Components with a Robotic Line Scanner
1]Hongpeng [email protected]
1]Shengzeng [email protected]
2]Muhammad [email protected]
1]Hoi-Yin [email protected]
1]Anqing [email protected]
1]Pai [email protected]
3]Hongsheng [email protected]
[1]David [email protected]
*[1]Faculty of Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong
[2]Faculty of Construction and Environment, The Hong Kong Polytechnic University, Kowloon, Hong Kong
[3]Shanghai Microintelligence Technology Co. Ltd, Shanghai, China
The automatic inspection of surface defects is an important task for quality control in the computers, communications, and consumer electronics (3C) industry.
Conventional devices for defect inspection (viz. line-scan sensors) have a limited field of view, thus, a robot-aided defect inspection system needs to scan the object from multiple viewpoints.
Optimally selecting the robot's viewpoints and planning a path is regarded as coverage path planning (CPP), a problem that enables inspecting the object's complete surface while reducing the scanning time and avoiding misdetection of defects.
However, the development of CPP strategies for robotic line scanners has not been sufficiently studied by researchers.
To fill this gap in the literature, in this paper, we present a new approach for robotic line scanners to detect surface defects of 3C free-form objects automatically.
Our proposed solution consists of generating a local path by a new hybrid region segmentation method and an adaptive planning algorithm to ensure the coverage of the complete object surface.
An optimization method for the global path sequence is developed to maximize the scanning efficiency.
To verify our proposed methodology, we conduct detailed simulation-based and experimental studies on various free-form workpieces, and compare its performance with a state-of-the-art solution.
The reported results demonstrate the feasibility and effectiveness of our approach.
§ INTRODUCTION
Defect inspection is essential to quality control, process monitoring, and non-destructive testing (NDT) in the manufacturing industry (Chen et al., chen2022novel; Chen & Yang, chen2020arrival; Luo & He, luo2016cost).
Specifically, manufacturing processes in the 3C industry are highly sophisticated and demand detailed and accurate defect inspection.
Traditional defect inspection approaches typically rely on visual inspection of an intermediate/finished product by a quality control or quality check inspector.
This sole dependence on human workers is a problem for regions and countries with a shortage of manpower (Liu et al., liu2021task; Ming et al., ming2020comprehensive). Furthermore, human-based inspection is inherently subjective, hence, prone to errors.
To address these problems, various researchers have reported automatic surface inspection systems for free-form components (Li et al., li2022five; Yang et al., yang2023template).
Recently, automatic detection systems equipped with an industrial-grade line scanner, a depth camera, and a robotic manipulator have been developed to offer effective and rapid non-contact measurement (Huo et al., huo2021sensor; Liu et al., liu2022coverage).
During the defect inspection task, the robotics inspection system scans the surface of the target workpiece exhaustively from different viewpoints. Planning an inspection path can be considered as the CPP problem (Molina et al., molina2017detection).
Estimating a CPP strategy for automatic inspection consists of three tasks: (1) determining the viewpoints to measure the workpiece’s surfaces, (2) generating a sequence to visit all viewpoints in a time and kinematically optimal way, and (3) planning a feasible path to travel to each viewpoint.
Additional criteria can be defined while planning the coverage path, including full coverage of the target surfaces and the resulting cycle-time for the inspection task (Glorieux et al., glorieux2020coverage).
The existing CPP methods can be divided into two coarse categories: two-dimensional and three-dimensional methods.
Various researchers reported two-dimensional (2D) CPP for mobile robots in floor cleaning, bridge crack monitoring, and weed mowing tasks (Almadhoun et al., almadhoun2016survey; Galceran & Carrreras, galceran2013survey).
Veerajagadheswar et al. (veerajagadheswar2020motion) developed a motion planner for floor cleaning.
Polyomino tiling theory was adapted to define reference coordinates and generate a navigation path to maximize the area coverage; Real-time experiments in different scenarios tested the planner on a Tetris-inspired shape-shifting robot. Hung M. La et al. (la2013mechatronic) proposed an autonomous robotic system for precise and efficient bridge deck inspection and identification, where a boustrophedon decomposition was applied to solve the CPP problem.
Lim et al. (lim2014robotic) developed an automatic detection and mapping system for automatic bridge crack inspection and maintenance; They used an improved genetic algorithm to search for a CPP solution to minimize the number of turns and detection time while achieving an efficient bridge inspection.
Danial Pour Arab et al. (pour2022complete) presented a CPP algorithm providing the optimal movements over an agricultural field; First, tree exploration was applied to find all potential solutions meeting predefined requirements, and then, a similarity comparison was proposed to find the best solution for minimizing overlaps, path length, and overall travel time.
It must be remarked that 2D CPP methods cannot be adopted directly for a three-dimensional (3D) CPP problem, as the level of complexity in 3D space is much higher than in 2D space.
In most 2D applications, a complete map is available to the planner during planning.
Most 3D CPP methods have to plan the paths from partial or occluded 3D maps.
A CPP method for 3D reconstruction based on Building information modeling used a robot arm and a lifting mechanism for wall painting at construction sites (Zhou et al., zhou2022building).
It consists of a two‐stage coverage planning framework, a global planner that can optimally generate the waypoints sequence, and a local planner that can provide the mobile base pose.
The authors reported that this method could ensure coverage of all waypoints and improve painting efficiency.
Hassan and Liu (hassan2019ppcpp) proposed an adaptive path planning approach capable of updating the paths when unexpected changes occur while still attaining the coverage goal.
Zbiss. K et al. (zbiss2022automatic) reported a path-planning method for collaborative robotic car painting.
This proposed algorithm depends on computational geometry and convex optimization, and Morse cellular decomposition and boustrophedon algorithms are applied for path planning to generate a feasible and collision-free trajectory.
A CPP method based on a LiDAR-equipped unmanned aerial vehicle (UAV) was proposed for bridge inspection (Bolourian & Hammad, bolourian2020lidar).
This method combined a genetic algorithm and an A* algorithm to find a barrier-free, shortest path, yielding a near-optimal and feasible inspection path.
Recent studies on 3D CPP for industrial product quality inspection have focused on achieving full surface coverage of the workpiece with minimum inspection time:
Li et al. (li2018path) demonstrated a robust CPP method for aerospace structures based on their geometric features. Path planning relied on the feature graph construction through Voronoi Diagram. Then, a search method is proposed to find this graph to decide the inspection sequence and a convex hull-based approach is applied to avoid collisions.
Glorieux et al. (glorieux2020coverage) presented a targetted waypoint sampling strategy with the shortest inspection time for dimensional quality inspection of sheet metal parts.
Liu et al. (liu2022coverage) developed an enhanced rapidly exploring random tree (RRT*) method and integrated the inspection errors and the optimal number of viewpoints into measurement cost evaluation for higher precision in quality inspection.
Huo et al. (huo2021sensor) applied the nearest neighbor search algorithm to find a near-shortest scanning path aiming at convex free-form specular surface inspection.
Despite numerous recent developments, CPP for free-form surface inspection remains an open research problem.
There are very few CPP solutions for line scanning robotic systems (Kapetanovic et al., kapetanovic2018side).
Compared with area-scan sensors, a line-scanning sensor is more suitable for defect inspection in industrial/manufacturing applications due to higher spatial resolution and lower production costs (Steger & Ulrich, steger2021camera; Wang et al., wang2022new).
Unlike a common area camera or other optical sensors that only work at discrete positions, a line scanner uses only a single scanning beam of light to detect 3D objects when capturing images, and it needs to be moved continuously by a robotic manipulator along the coverage path. These features render many traditional CPP methods ineffective. Therefore, developing a novel CPP method for the automatic line scanning system becomes imperative and advantageous.
This paper aims to overcome the limitations of existing CPP methods for surface defect inspection. We focus on defect detection for free-form surfaces of 3C workpieces based on a robotic line scanning system.
This robotic system utilizes a 6-DOF robot manipulator with a line scanner to finish a full-coverage inspection path and a depth sensor to localize the workpiece.
The proposed CPP method for robotic line scanning inspection consists of two parts: local path definition for accurate defect inspection and global time optimization for the minimum scanning path.
It incorporates the detailed requirements of 3C components surface inspection and the specific characteristics of a robotic line scanning system.
The main contribution of this paper includes:
* A new region segmentation method and an adaptive region-of-interest (ROI) algorithm to define the local scanning paths for free-form surfaces.
* A Particle Swarm Optimization (PSO)-based global inspection path generation method to minimize the inspection time.
* Detailed simulations, experiments, and comparisons to validate the proposed method.
The rest of this article is organized as follows.
Section “sec:ccp_for_inspection" describes the path planning problem for 3C component surface detection.
Section “sec:methods" presented the proposed CPP approach in detail. Section “sec:results" shows the specific simulations, experiments, and comparisons on 3C components to validate the method's feasibility.
Finally, Section “sec:conclusion" concludes this article and discusses the limitations and future direction.
§ COVERAGE PATH PLANNING FOR INSPECTION
The CPP problem can be divided into two subproblems: 1) the local path definition is to generate view regions and partial scanning paths to meet the precise scanning and full coverage for 3C free-form workpieces. 2) global path planning aims to find an optimal or near-optimal sequence of all local paths (Gerbino et al., gerbino2016influence).
The key to the first sub-problem is determining the position and orientation of each pair of viewpoints at both ends of a local path (the path between two consecutive viewpoints). The line-scan camera only captures a single line of pixels at a time, so relative motion between the camera and the object, perpendicular to the line of pixels, is necessary for 2D image acquisition during the defect inspection task (see Fig. <ref>). In this automatic scanning system, the camera is moved by a robotic manipulator along the stationary object, and the depth-of-view (DOV) direction of the camera should be perpendicular to the scanned region to ensure image quality. Therefore, the scanned area needs to be kept as flat as possible even though workpiece models include many different geometric features (see Fig. <ref>). In addition, each local path consists of two viewpoints at its ends, and the camera at the robotic end-effector scans from one viewpoint to the other to inspect the surface defects of the region corresponding to this local path. The position change between these two waypoints is required to follow one regular direction, and their orientations need to remain as unchanged as possible to ensure the quality of the acquired images. Besides, this sub-problem is also affected by critical factors such as field of view (FOV) and DOV (Liu et al., liu2022coverage).
The global path planning problem is concerned with finding the sequence and path connecting the selected viewpoints to minimize the total travel cost. This generated coverage path needs to reach all local paths with the shortest connection path. In other words, the objective is to find the minimum kinematic feasible path for the robot manipulator to target the scanning sensor at each viewpoint precisely through all local paths, without colliding with any obstacles in the workspace.
The proposed method should provide a feasible coverage path that traverses all the local paths with minimum inspection time, efficiently and automatically. Moreover, it needs to consider the diverse measurement directions of the local paths to ensure high detection precision. Generally, many local paths are needed to evaluate the surface quality of 3C components. To obtain precise original defect images, every scanning parameter is significant and should be set according to a new automatic method rather than the workers' experience and opinions.
§ METHODOLOGY
A CPP generation and optimization approach is presented based on the robotic line scanning system (see Fig. <ref>). This includes i) a new hybrid region segmentation method based on the random sample consensus (RANSAC) and K-means clustering methods; ii) an adaptive ROI method to define the local measurement paths; and iii) a PSO-based global optimization approach for minimum inspection time. The optimal path is then implemented for offline programming and surface detection, thereby improving the efficiency of the inspection of 3C components.
To extract the workpiece's geometric features, the 3D model is converted to a point cloud. The sampling procedure is based on selecting a series of points randomly and uniformly from the model to form a point cloud that can be used to segment and process all surfaces of the workpiece. The acquired point cloud O consists of points p_i=[x_i,y_i,z_i], i=1,2,..., m (m is the total number of sampled points of O), which preserves the geometric information of all faces.
§.§ Hybrid region segmentation based on RANSAC and K-means clustering
The image acquisition characteristics of line-scan cameras necessitate the preservation of flat scanning areas to ensure optimal image quality. Therefore, it becomes crucial to employ an effective segmentation method to divide the entire surface into flat regions. In this study, we propose a hybrid region segmentation method specifically designed for the surface features of 3C components. This method leverages the RANSAC method and enhanced K-means clustering to achieve accurate segmentation. The RANSAC method is used to detect a region with planar geometry. It can also remove some points with minimum curvature from the entire point cloud, enhancing the computation speed of the whole procedure (Su et al., su2022building). Furthermore, it can effectively remove outliers, thereby improving the accuracy of the subsequent K-means clustering process.
Here, we first use RANSAC to partition O. It includes two steps: generating a hypothesis from random samples and verifying this hypothesis against the remaining data. Given different geometrical hypothesis models, RANSAC can identify planes, spheres, cylinders, and cones (Xu et al., xu2015investigation). Since flat regions are required for precise line scanning, RANSAC uses the equation of a plane as the feature model in the proposed system. It selects N sample points of O and estimates the plane model parameters from those samples. A point is accepted as an inlier if the distance between the point and the plane is less than a fixed threshold, and the shape that contains the greatest number of inlier points is split off and extracted after multiple iterations. The plane model can be represented as
aX+bY+cZ+d=0
where [a,b,c,d]^T is the plane model parameter, and [X,Y,Z]^T denotes any point in the 3D coordinates.
This method can extract a nearly planar point cloud region C_0 once the best plane model has been identified. RANSAC does not require complex optimization or large memory resources, so C_0 can be obtained rapidly. However, the remaining point cloud O^r of size η^r cannot be segmented clearly by this approach, since O^r consists of bevels, curved surfaces, and other complex geometrical features.
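A minimal sketch of this RANSAC plane-extraction step is given below; the point cloud O is assumed to be an (m, 3) array, and the distance threshold, iteration budget, and synthetic test data are illustrative placeholders rather than the values used in the experiments.

```python
# Minimal RANSAC plane-extraction sketch (assumed inputs: point cloud O as an
# (m, 3) array, an inlier distance threshold, and an iteration budget).
import numpy as np

def ransac_plane(points, dist_threshold=0.5, n_iters=200, rng=None):
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Hypothesis: fit a plane aX + bY + cZ + d = 0 through 3 random points.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ p1
        # Verification: points closer than the threshold are inliers.
        dist = np.abs(points @ normal + d)
        inliers = dist < dist_threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, np.append(normal, d)
    return best_model, best_inliers

# Synthetic example: a noisy plane plus scattered outliers.
rng = np.random.default_rng(0)
plane_pts = np.c_[rng.uniform(-10, 10, (500, 2)), rng.normal(0, 0.1, 500)]
outliers = rng.uniform(-10, 10, (100, 3))
model, inliers = ransac_plane(np.vstack([plane_pts, outliers]))
print("plane [a,b,c,d]:", model, "inliers:", inliers.sum())
```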
Traditional K-means clustering methods regard region segmentation as a cluster analysis problem on surface geometric features. They use only the positions and surface normals of the point cloud for segmentation, which is not appropriate for workpieces with large curvature variations or with many bevels and corners (Li et al., li2018leaf; Liu et al., liu2020method). Therefore, additional factors should be considered to describe the features of the object. An enhanced K-means clustering is proposed in this paper to process O^r. In the standard K-means method, the number of clusters N dramatically affects performance, and many trials are required to find a near-optimal N in classical methods (Juang & Wu, WOS:000290138700014). In the developed method, we use not only the surface normals n_i^r=[n_ix^r,n_iy^r,n_iz^r] of the points in O^r but also the Gaussian curvature K_i^r and mean curvature H_i^r of each point p_i^r in O^r as inputs of the enhanced K-means clustering. A feasible weighting factor ω among n_i^r, K_i^r, and H_i^r is determined through extensive manual experiments. K_i^r is the product of the principal curvatures of p_i^r, so it combines the maximum and minimum curvatures. A positive Gaussian curvature value means the surface is locally either a summit or a valley, while a negative value indicates that the surface locally consists of saddle points. Zero Gaussian curvature indicates the surface is flat in at least one direction, like a plane or a cylinder (Li et al., li2019automated). In mathematics, the mean curvature of a surface describes the curvature of an embedded surface in Euclidean space or another ambient space. The curvature of a point is represented by c_i^r=[K_i^r,H_i^r]. By adding these two parameters, the clustering quality is improved over the standard method, and the geometric feature of a point of O^r is represented as I_i^r = [n_i^r,c_i^r]. Besides, we present a method to automatically adjust N, since N affects the classification result and traditional techniques use a fixed N, whose drawback is poor flexibility. The algorithm relies on a two-looped 1D search, with the inner loop for similarity comparison and the outer loop for iterating over N. The iteration ends when the largest intra-class difference is smaller than a threshold T. The entire procedure of this enhanced K-means method is illustrated in Algorithm <ref>.
For the outer loop, we represent the feature vectors of the N-cluster set as
Q_j=[q_n,q_c]
q_n=[q_1,q_2,q_3]
q_c=[q_4,q_5]
Q_j is a 5-dimensional vector (j=1,2,...,N). All of these vectors can be initialized with random values. Afterward, the procedure enters the inner loop, composed of two steps: 1) similarity comparison and 2) updating. In the first step, cosine similarity is used in this proposed method for assessing the similarity between I_i^r and Q_j; it is a common measure of similarity between two sequences of numbers in data analysis (Kiricsci et al., kiricsci2022new). The similarity α _ij is described in detail as follows:
α _ij = ω _1 cos∠(n_i^r, q_n) + ω _2 cos∠(c_i^r, q_c), where cos∠(a,b) = (a · b) / ( | a | · | b | )
where ω _1 and ω _2 are the weighting factors for α _ij; they are set to 0.6 and 0.4, respectively, based on extensive experiments.
Then, the method finds the cluster C_j with the smallest α _ij and assigns the corresponding p_i^r and I_i^r to it. The next step is to determine whether the classification has met the termination condition. For each cluster C_j, the termination parameter λ _j is calculated from the maximum intra-class difference D_j as:
λ _j = 0 if D_j > T, and λ _j = 1 otherwise;
D_j = max_i α _ij
β _t represents the sum of λ _j over all regions C_j at iteration t. If β _t = N, the current segmentation is satisfactory and the algorithm can stop iterating. Otherwise, the procedure continues. At this stage, the search direction must be considered, since the method includes two loops: the inner one that compares similarity and clusters for a given N, and the outer one that gradually increases the value of N. The change relies on the behaviour of β _t. If the performance deteriorates at iteration step t (i.e. β _t is smaller than β _t-1), the inner loop must stop immediately and a new outer loop starts with N←N+1, because the current N is not ideal. If the performance improves (i.e. β _t is larger than β _t-1), the search within the inner loop continues.
Before switching to the next inner iteration, all feature vectors Q_j=[q_n,q_c] are updated to improve their representation quality:
q_n = ( 1/η _j ∑_i=1^η _j n_ij ) / | 1/η _j ∑_i=1^η _j n_ij |
q_c = ( 1/η _j ∑_i=1^η _j c_ij ) / | 1/η _j ∑_i=1^η _j c_ij |
where n_ij and c_ij are the i-th normal and curvature feature vectors in C_j, and η_j is the size of C_j.
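A sketch of the inner-loop assignment and update steps is given below. Feature construction (normals and curvatures) is assumed to be done beforehand, and points are assigned here to the most similar cluster (largest α), the usual convention for a similarity score; the 0.6/0.4 weights follow the text, while all other values are placeholders.

```python
# Sketch of the inner-loop steps of the enhanced K-means clustering: the
# weighted cosine-similarity assignment and the normalized feature-vector
# update. I is an (m, 5) array of point features [nx, ny, nz, K, H] and
# Q an (N, 5) array of cluster feature vectors.
import numpy as np

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def assign_and_update(I, Q, w1=0.6, w2=0.4):
    labels = np.empty(len(I), dtype=int)
    alpha = np.empty(len(I))
    for i, feat in enumerate(I):
        sims = [w1 * cosine(feat[:3], q[:3]) + w2 * cosine(feat[3:], q[3:])
                for q in Q]
        labels[i] = int(np.argmax(sims))   # assign to the most similar cluster
        alpha[i] = sims[labels[i]]
    # Update each cluster feature as the normalized mean of its members.
    Q_new = Q.copy()
    for j in range(len(Q)):
        members = I[labels == j]
        if len(members):
            qn = members[:, :3].mean(axis=0)
            qc = members[:, 3:].mean(axis=0)
            Q_new[j, :3] = qn / (np.linalg.norm(qn) + 1e-12)
            Q_new[j, 3:] = qc / (np.linalg.norm(qc) + 1e-12)
    return labels, alpha, Q_new
```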
The proposed algorithm only takes the limited features of the region C_j into consideration, which can lead to a high sparsity of the clustered points within the same region. Therefore, Euclidean cluster extraction is implemented as a post-processing step to verify if it is necessary to subdivide the region C_j into two new regions according to the location of the points in it.
§.§ Adaptive ROI Based Path Planning
The local paths are generated by the proposed planning method, which takes the segmented region C_j as input. Because the scanning inspection of the line camera is synchronized with the robot's motion, every viewpoint in these local paths must be produced in a feasible way for accurate detection, and all local paths together are required to cover the whole region C_j of the workpiece. Hence, this part presents an adaptive ROI method for generating local paths, which adapts scan paths and viewpoints to the various shapes of objects.
Since the scanning sensor captures a horizontal line image, the scanning coverage can be thought of as a cuboid when the system moves linearly; it is characterized by the DOV V_D, the FOV V_F, and the moving direction V_L (see Fig. <ref>). Besides, the key of this approach is to determine the position μ =[x,y,z] and pose i=[d⃗,l⃗] of the viewpoints (v^p,v^p*) at both ends of a local path G_t, t=1,2,...,U. The pose i is described by the direction d⃗ of V_D and the direction l⃗ of V_L.
To make the geometric scanning model effective and to maintain the accuracy of the system, our algorithm further segments every C_j into 3 sub-regions W_jf, f=1,2,3. Due to the irregular shape of each C_j, we stipulate that C_j is divided evenly into the 3 sub-regions W_jf along the direction k⃗ of the longest extent of C_j, and the scanning motion is also along k⃗ for every area ( l⃗=k⃗). In addition, we define d⃗ as the reverse direction of the surface normal w⃗_⃗j⃗f⃗ of W_jf ( d⃗=-w⃗_⃗j⃗f⃗).
Thus, the corresponding viewpoint positions μ_1, μ_2 are located at:
μ = τ-w⃗_⃗j⃗f⃗· |V_D|
The center of the sub-region W_jf is denoted c_jf=[c_x,c_y,c_z], and the intersections τ_1,τ_2 of the edge of W_jf with the line through c_jf along k⃗ are taken as the inspection points of the viewpoints v^p,v^p* at both ends of a local path G_t on this sub-region's surface. |V_D| is the magnitude of V_D.
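A sketch of this viewpoint construction for one sub-region W_jf is given below. The sub-region points, their normals, and the stand-off |V_D| are assumed inputs; taking the scan direction k as the principal axis of the sub-region and w as its mean normal is one possible realization of the construction above, and the sign of the offset follows μ = τ - w·|V_D| from the text (it depends on the normal orientation convention).

```python
# Sketch of the adaptive-ROI viewpoint definition for one sub-region W_jf.
import numpy as np

def local_path_viewpoints(points, normals, dov_length):
    """Return the two viewpoint positions (mu1, mu2) and the pose (d, l)."""
    c = points.mean(axis=0)                       # sub-region centre c_jf
    # Principal axis of the sub-region = longest-extent direction k.
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    k = vt[0] / np.linalg.norm(vt[0])
    # Mean surface normal w_jf; the DOV direction d is its reverse.
    w = normals.mean(axis=0)
    w /= np.linalg.norm(w)
    d = -w
    # Inspection points tau_1, tau_2: extremes of the region along k through c.
    proj = (points - c) @ k
    tau1, tau2 = c + proj.min() * k, c + proj.max() * k
    # Viewpoints offset from the surface by |V_D|, following mu = tau - w*|V_D|.
    mu1, mu2 = tau1 - w * dov_length, tau2 - w * dov_length
    return (mu1, mu2), (d, k)
```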
§.§ PSO-based global path optimization
Based on the local path definition in the previous step, we need to find an optimal sequence of all local paths to generate a complete scanning path for the whole free-form workpiece surface. We should consider how to minimize the robot's total motion time under a constant sensor velocity during the inspection task. According to practical requirements, the robotic manipulator should complete the scanning inspection task by passing through all pre-defined viewpoints. This sequence optimization problem can be regarded as a Traveling Salesman Problem (TSP) with the objective of finding the path with the shortest time (Claro et al., claro2023energy). The TSP is a combinatorial optimization problem and is NP-hard. The problem of global path planning can be formulated as
min{∑_t=1^U∑_s=1^U-1 T_t^scanning+T_s^across}
where T_t^scanning is the cost time of passing every local path G_t, T_s^across means the cost time from G_t to G_t+1 and U represents the total number of local paths. The cost time in the context of the robot manipulator's end-effector is determined by the straight-line distance between two viewpoints, considering the constant speed of movement. In contrast to the general TSP, our scenario requires sequential traversal of adjacent viewpoints within the same local path to ensure optimal inspection performance. This constraint is imposed due to the limitations of region segmentation and the necessity for adaptive ROI local path definition. The limitation can be summarized as
T_t^scanning(G_t) ∈ { T(v_t^p→ v_t^p*), T(v_t^p*→ v_t^p) }
T_s^across(G_t,G_t+1) ∈ { T(v_t^p→ v_t+1^p), T(v_t^p→ v_t+1^p*), T(v_t^p*→ v_t+1^p*), T(v_t^p*→ v_t+1^p) }
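A sketch of how these cost terms can be evaluated is given below; with the constant end-effector speed assumed here (the 0.05 m/s line speed used later in the experiments), travel time is proportional to the straight-line distance between viewpoints. The function and variable names are illustrative.

```python
# Sketch of the cost terms in the global-path objective. paths[t] = (p, p*) are
# the two viewpoints of local path G_t; time = distance / constant speed.
import numpy as np

SPEED = 0.05  # m/s, constant end-effector line speed assumed

def t_scanning(path_t):
    p, p_star = path_t
    return np.linalg.norm(np.subtract(p_star, p)) / SPEED  # same in either direction

def t_across(exit_point, entry_point):
    return np.linalg.norm(np.subtract(entry_point, exit_point)) / SPEED

def total_time(paths, order, directions):
    """order: visiting sequence of path indices;
    directions[i] = 0 scans p -> p*, 1 scans p* -> p for the i-th visited path."""
    total, prev_exit = 0.0, None
    for i, t in enumerate(order):
        p, p_star = paths[t]
        entry, exit_ = (p, p_star) if directions[i] == 0 else (p_star, p)
        if prev_exit is not None:
            total += t_across(prev_exit, entry)
        total += t_scanning(paths[t])
        prev_exit = exit_
    return total
```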
Prior studies on this problem include branch and bound, linear programming, and dynamic programming methods (Shang et al., shang2020co; Xu et al., xu2022path). However, with an increasing number of targets, the computation of a feasible path becomes exponentially more difficult, and obtaining the global optimal solution becomes more challenging. Different heuristic algorithms have been developed for the TSP, including Simulated Annealing, Genetic Algorithms, Ant Colony Optimization, the A* algorithm, etc. (Abualigah & Diabat, abualigah2022improved; Ghali et al., ghali2023genetic). In the proposed method, a PSO-based method is used to solve the TSP owing to its flexibility in TSP solving. After selecting the shortest path, the optimal global path sequence is obtained in this step.
In PSO (Karim et al., karim2021hovering), a swarm of particles are used to describe the possible solutions. Every particle ξ is related to two vectors in D-dimension space, i.e.,
the velocity vector V_ξ=[V_ξ^1,V_ξ^2,...,V_ξ^D] and the position vector X_ξ=[X_ξ^1,X_ξ^2,...,X_ξ^D]. Both of them are initialized with random vectors. During the PSO process, the velocity and position of particle ξ on dimension d are updated as (Zhan et al., zhan2009adaptive):
V_ξ^d = ω V_ξ^d + c_1 rand_1^d (pBest_ξ - X_ξ^d) + c_2 rand_2^d (gBest - X_ξ^d)
X_ξ^d = X_ξ^d + V_ξ^d
where ω represents the inertia weight, c_1 and c_2 are the acceleration coefficients, and rand_1^d and rand_2^d are random numbers within [0,1]. pBest_ξ is the position with the best fitness value found by the ξ-th particle, and gBest is the best position found globally.
The main steps of PSO are:
* Initialize all particles, including their velocity and position.
* Establish the fitness function and calculate the fitness value of each particle.
* Update the pBest_ξ and gBest.
* Update the velocity and position of each particle according to (10) and (11).
* Increase the number of iterations, go back to step 2, and repeat until the termination condition is met (a minimal sketch of the update rule is given below).
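The sketch below illustrates the PSO iteration (the update equations above) on a generic continuous fitness function; applying PSO to the TSP additionally requires decoding the continuous positions into a tour (e.g. by ranking), which is omitted here. All parameter values are placeholders.

```python
# Minimal sketch of the PSO iteration on a generic fitness function.
import numpy as np

def pso_minimize(fitness, dim, n_particles=30, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, rng=None):
    rng = np.random.default_rng(rng)
    X = rng.uniform(-1, 1, (n_particles, dim))
    V = rng.uniform(-1, 1, (n_particles, dim))
    pbest, pbest_val = X.copy(), np.array([fitness(x) for x in X])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V
        vals = np.array([fitness(x) for x in X])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example usage on a simple quadratic fitness.
best, best_val = pso_minimize(lambda x: np.sum(x ** 2), dim=5)
print(best_val)
```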
§ CASE STUDY
To illustrate the performance of the proposed method, we provide two case studies for simulation tests (Case 1: a camera lens; Case 2: a computer fan) and two case studies for experimental evaluation (Case 3: a tablet back cover; Case 4: the upper part of a computer mouse) on 3C component surface inspection. A state-of-the-art CPP method is also compared with the developed method in the comparative analysis subsection.
§.§ Case study setup
Fig. <ref> shows the experimental setup for evaluating the proposed methods.
A custom-made end-effector housed the defect inspection system consisting of a line scanning sensor (Hikvision MV-CL041-70GM camera) and a uniform line illumination source (TSD-LSH230200-B from TSD company).
The Intel RealSense L515 LiDAR camera was mounted on the top of the workspace to capture the real-time stream of point clouds.
The pose of the workpiece was estimated using the point clouds from LiDAR.
An analog control box with a high-power strobe ensures an adjustable and stable voltage for the light source.
The system consisted of a UR5 manipulator from Universal Robots to manipulate the end-effector in order to scan the workpiece automatically.
The entire automated line scanning framework is based on ROS on a Linux PC, which can simultaneously monitor the sensors (line scanner, depth sensor) and control the actuator (manipulator).
The line velocity and acceleration of the manipulator's end-effector were empirically set to 0.05 m/s and 0.5m/s^2, respectively.
During trajectory execution, the robot manipulator followed a constant line speed to maintain consistency of image acquisition (the acquisition line rate of the scanner is 3000 line/s).
Table <ref> summarizes the other parameters for the line scanning system used for the experiment.
§.§ Path generation and defect inspection
Fig. <ref> presents four 3C component models.
Each 3D mesh model (or CAD model) was converted into a point cloud to identify the geometrical features through uniform and random sampling (Arias-Castro et al., WOS:000237574800012), as shown in Fig. <ref>.
Some geometrical features, such as surface normals, Gaussian curvature, and mean curvature, are computed by a point cloud processing software named CloudCompare (Tang et al., 10081460).
Then, the point cloud was inputted into the proposed method for estimating the scanning path.
The similarity threshold T should be selected before region segmentation.
If T is large, the segmentation process needs more computation time to cluster the point cloud, which could reduce the overall clustering efficiency.
On the contrary, a smaller value of T groups the different features into the same cluster C_j, which degrades the segmentation accuracy.
Consequently, selecting this component must balance the segmentation accuracy and calculation efficiency.
A value of T = 0.64 was found to be near-optimal by trial and error.
The results from the hybrid segmentation method are shown in Fig. <ref>, where the different colors indicate various segmented regions (or clusters).
Here, the methods used RANSAC to cluster the plane region.
In Case 3 and Case 4, a significant portion of the planar/near-planar region has been grouped in one cluster, as shown in Fig. <ref>(c).
Initial clustering using RANSAC significantly reduces the processing time.
After the hybrid unsupervised region segmentation, the surfaces with similar geometric features were clustered together.
Fig. <ref> shows the four geometrically diverse workpieces, and each is divided into different regions based on the features.
Some segmentation errors remain due to the uncertainty inherent in computing the features, but they do not affect the scanning path generation.
With adaptive ROI-based path planning and PSO-based global path generation, a complete and near-optimal inspection path can be produced, as visualized in Fig. <ref>. The number of viewpoints is 48, 48, 42, and 30 in Cases 1–4, respectively, displayed by the frames. They show the pose of the robot's end-effector during the inspection task. The global path is denoted by a black line, and every segmented region has a corresponding local path. The viewpoints are connected by straight lines in the optimal sequence. The robot's motion should follow this detection path to achieve full object coverage.
We input the inspection paths to the automatic line scanning system to scan the tablet back cover and upper part of computer mouse in order to mimic the real defect inspection, as illustrated in Fig. <ref>.
Fig. <ref> illustrates the surface defects of these two objects.
Since the segmented results have similar geometric features, and the feasible viewpoints can be selected by the ROI-based method based on the parameters of the line-scan camera, surface defects can be acquired clearly, even where the defects are easy to ignore for a human eye, like corners and curved surfaces.
The proposed method can effectively conduct region segmentation, local path planning, and global path optimization, enabling precise surface defect inspection and further process optimization for the 3C industry.
§.§ Comparative analysis and verification
To further validate the proposed CPP method, a cutting-edge line scanning CPP method (Huo et al., huo2021sensor), designed for convex specular surface inspection, is applied as a benchmark approach for comparative analysis. In that method, traditional K-means clustering is used for region segmentation, and the final path is produced through a local optimization method, nearest neighbor search (Aryal et al., arya1998optimal).
There are five comparison criteria: region segmentation time, total number of viewpoints, length of the global inspection path, total inspection time, and surface defect detection rate. Segmentation time was used as a measure of efficiency for region segmentation methods. The inspection path length and total detection time served as indicators of overall path efficiency in CPP methods. The surface defect detection rate provided insights into the actual effectiveness of defect acquisition, reflecting the accuracy of region segmentation and the quality of path planning. Additionally, when defect results or coverage rates were similar, preference was given to the CPP method that generated fewer viewpoints as it was considered a more viable path planning approach (Liu et al., liu2020optimal).
The comparison results are shown in Fig. <ref>. Regarding region segmentation time, the proposed method required less time to finish this procedure. Owing to the use of RANSAC and additional geometric features, the proposed method obtains the sub-regions with planar/near-planar geometry efficiently. As for the viewpoints, our approach produces fewer viewpoints thanks to more accurate region segmentation results and concise ROI generation. Conversely, the convex specular surface inspection method employed a more complex iteration process for viewpoint determination, as it struggled to precisely segment objects with intricate geometries. When comparing inspection path length and time, our method outperformed the benchmark approach. While the benchmark utilized a local optimization solution, namely nearest neighbor search, it fell short in generating a feasible global inspection path for CPP. In contrast, our PSO-based method effectively addressed the TSP with reasonable optimization goals and feasible viewpoints. Although our approach is only slightly better in surface defect detection rate, the presented method can finish the inspection task in less time and with shorter paths. Based on this comprehensive comparison, our proposed CPP method stands as a superior choice over the state-of-the-art line scanning inspection method. Consequently, the proposed method presents a valuable and feasible solution for CPP in surface defect inspection.
§ CONCLUSION
This paper proposes a systematic framework for an inspection CPP method for 3C component surfaces. With this framework, a high-resolution line scanning sensor, mounted on a multi-DOF robotic manipulator, can execute surface scanning and detection precisely and flexibly. The developed methodology includes (1) a new hybrid region segmentation method based on RANSAC and K-means clustering; (2) an adaptive ROI method to define the local measurement paths; and (3) a PSO-based global optimization approach for minimum inspection time. Four case studies verify the effectiveness and efficiency of this method. The results show that it outperforms a state-of-the-art line scanning CPP method in the comparison. Overall, the proposed method achieves precise and efficient surface inspection for 3C free-form components. It can be applied in the 3C industry and extended to inspect other structures such as auto spare parts and industry-standard components.
However, it should be noted that the proposed method may encounter challenges when applied to workpieces with complex structures, making it less suitable for parts with intricate shapes. Future research should focus on optimizing the design of the system end-effector to enhance the flexibility of the inspection framework. Additionally, exploring mathematical methods for optimal path planning and investigating the potential of information theory and deep learning techniques, such as convolutional neural networks, could further improve the effectiveness of the segmentation method.
*Supplementary information
The following video demonstrates the performance of the proposed method with simulations and experiments: https://vimeo.com/842785212 https://vimeo.com/842785212.
*Funding This work was supported by the grant from Shanghai Microintelligence Technology Co. Ltd (No. P21-0078).
§ DECLARATIONS
*Competing interests The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
*Data availability statement
The data underlying this article will be shared on reasonable request to the corresponding author.
*Authors' contribution
Hongpeng Chen: Conceptualization, Methodology, Software, Validation, Writing – original draft. Shengzeng Huo: Software, Validation, Writing – review & editing. Muhammad Muddassir: Conceptualization, Validation, Writing – review & editing. Hoi-Yin Lee: Video Making, Validation, Writing – review & editing. Anqing Duan: Methodology, Data curation, Writing – review & editing. Pai zheng: Supervision, Resources, Conceptualization. Hongsheng Pan: Resources, Funding acquisition, Writing – review & editing. David Navarro-Alarcon: Supervision, Resources, Conceptualization, Methodology, Funding acquisition, Writing – review & editing.
|
http://arxiv.org/abs/2307.04776v1 | 20230709142916 | A new Machine Learning-based method for identification of time-correlated events at tagged photon facilities | [
"V. Sokhoyan",
"E. Mornacchi"
] | physics.data-an | [
"physics.data-an",
"nucl-ex"
] |
§ INTRODUCTION
The accuracy of the measurements performed at electron accelerators with bremsstrahlung-based tagged photons is limited, among other factors, by the presence of the time-uncorrelated background in the tagging system at high rates, complicating unambiguous matching of the initial electron and the bremsstrahlung photon inducing the reaction in the target. The widely used sampling and subtraction of the time-uncorrelated background leads to a reduction in the measurement accuracy, depending on the degree of contamination of the data sample with time-uncorrelated background.
The method presented in this paper allows for the identification of the time-correlated events without subtraction of the random (uncorrelated) background.
To illustrate the application of the method, we have chosen the reaction γ p → p π^0 for a selected photon beam energy range of 240-260 MeV, where no other reactions are expected to contribute to the sample with two photons and one proton candidate in the final state.
In addition, kinematic cuts were applied for reaction identification and background suppression (see section <ref>).
After the initial event selection, the time-correlated (signal) events in the experimental data are expected to have the properties of a simulated γ p → p π^0 reaction, under the assumption that the simulation reasonably describes the data.
Afterwards, the samples obtained from Monte Carlo simulation of the signal and experimental measurement of the time-uncorrelated (random) background are used to create Machine Learning (ML) models, which are applied to distinguish between time-correlated and uncorrelated events in the experimental data.
This paper is organized as follows. Section <ref> introduces the experimental setup and the conventional method for the subtraction of the random background coincidences in the tagging system.
Section <ref> explains the concept of the new method, while the application of the ML models, including the comparison of the new method with the conventional subtraction approach, is discussed in section <ref>.
The outcome of this work is summarized in section <ref>.
§ SELECTION OF TIME-CORRELATED HITS WITH A CONVENTIONAL APPROACH
In this paper, the application of a new method for the identification of time-correlated events with ML-based models is illustrated for experimental data obtained with the Crystal Ball/TAPS setup at the Mainz Microtron (MAMI) <cit.>. Figure <ref> shows a schematic picture of the Crystal Ball/TAPS experiment. The photon beam is produced via bremsstrahlung by the electron beam from MAMI on a thin radiator. The outgoing electrons are bent by a dipole magnet and detected by the tagger spectrometer <cit.>, while the remaining part of the electron beam is directed to the electron beam dump. The energy of the bremsstrahlung photons is determined as the difference between the energy of the electron beam and the energy of the deflected electrons, measured by the tagger. The current setup of the tagger includes 408 channels, divided into 51 modules. Each channel is composed of a plastic scintillator (EJ200) rod, 30 mm long with a 6 × 6 mm base, read out by a 6 × 6 mm SensL-SiPM with a bias voltage of 25 mV. The signal is then guided outside the region with intense radiation by long Ethernet cables and is fed to a Constant Fraction Discriminator (CFD). This configuration ensures a single-counter time resolution of δ t = 0.1 ns. The bremsstrahlung photons can be tagged at 4.3% to 93.0% of the incoming electron beam energy E_e. The energy resolution relative to E_e varies over the energy spectrum, from low to high photon energies, from 0.4% to 0.11%, and the absolute energy resolution varies from 3.47 MeV to 1.03 MeV respectively <cit.>. The resulting photon beam (after collimation) impinges on a 10 cm-long LH_2 target and the particles in the final state are detected by the nearly 4π Crystal Ball/TAPS detector system consisting of the Crystal Ball (CB) <cit.> and TAPS <cit.> calorimeters (and other charged particle detectors), covering ≈ 97% of the solid angle.
Additional information on the apparatus can be found in refs. <cit.>.
Due to the relatively high electron beam current, the number of hits in the tagger associated with the same event in the CB/TAPS setup can reach large numbers (typically up to ≈ 140 electrons in the same trigger window), making it difficult to correlate the event with the correct hit in the tagger.
In a conventional approach, this is achieved by calculating the time difference Δ t between each hit in the tagger and the event in the CB/TAPS systems.
This allows for the selection of the time-coincident events with a subsequent sampling and subtraction of the remaining time-uncorrelated background.
The left panel of figure <ref> shows a sample of the time difference Δ t distribution between the reconstructed π^0 in the calorimeters and each hit in the tagger. The so-called prompt peak around 0 ns corresponding to the coincident hits is well visible. The flat background outside and below the peak instead corresponds to the time-uncorrelated events. The spectrum is generated for the reaction γ p → p π^0 at a photon beam energy of 240 – 260 MeV after the application of the kinematic cuts described in subsection <ref>, used to reduce the initial background contamination of the data.
The prompt time-correlated events are selected in the peak region within an interval Δ t ∈ [-2,2] ns (highlighted in blue in the left panel of figure <ref>). To remove the uncorrelated background in the prompt region, the random contribution is modeled by selecting two time windows, one on the left Δ t_l∈ [-200,-50] ns, and one on the right Δ t_r∈ [50,600] ns of the peak (both highlighted in gray in the left panel of figure <ref>).
The obtained background sample is normalized according to the width of the selected time windows, and then subtracted from the prompt peak sample.
To illustrate the performance of this method, the missing mass distribution was calculated as
M_miss = √((E_γ + m_p - E_γ')^2 - (k⃗ - k⃗')^2 ),
where k = (E_γ, k⃗) and k' = (E_γ', k⃗'⃗) are the incoming and scattered photon four-momenta, respectively, and m_p is the target proton mass at rest.
For a γ p → p π^0 event, M_miss is expected to be in agreement with the proton mass. The obtained distribution is shown in the right panel of figure <ref>, for the prompt and the random sample in blue and gray, respectively, while the distribution obtained after subtracting the uncorrelated events is shown in red. As expected, the shoulder on the left of the total distribution (blue) is well described by the uncorrelated background, resulting in a final distribution well peaked around the proton mass after random background subtraction.
Although this method has been widely used with reliable performance at photon tagging facilities for the past few decades, the need to subtract the random background limits the achievable measurement accuracy, especially for low-energy measurements of processes with small cross sections. In addition, the information about all correlations between variables for a single event is lost in the subtraction process (unless the subtraction is performed in multiple dimensions). Thus, a new method allowing for unambiguous matching of the time-correlated tagger hit with the event in the calorimeter without the need for sampling and subtraction of the time-uncorrelated background is highly desirable, especially now that we are entering the so-called precision era of nuclear and hadron physics. This paper presents a new ML-based multivariate analysis method, applicable for the selection of time-correlated hits without subtraction of the time-uncorrelated background.
§ NEW ANALYSIS CONCEPT
The goal of this work is to distinguish between signal and background events in the region of the prompt peak, shown in the left panel of figure <ref>.
In the newly developed method, ML-based models are created and trained using a simulated data sample with a signature of the reaction of interest in combination with the experimentally measured uncorrelated background[The simulation of the random background is extremely challenging due to multiple non-linear effects in the experiment.]. The prerequisite for using this method is that the sample of the (potentially) time-correlated events shows patterns similar to the simulated sample. This can be verified, e.g., by comparison of the simulated distributions with the distributions obtained from the experimental data after conventional random background subtraction described in section <ref>. At the same time, the patterns in the simulated data should be significantly different from the patterns in the random background sample. The selection of input variables (input features for ML models) meeting these criteria allows us to create ML models, trained to distinguish between signal and background events. The performance of the created ML models is tested at first on a data sample consisting of simulated signal events and experimentally measured random background by comparing the initial and predicted labels for both classes of events. After the initial tests, the created ML model is used for the separation of experimentally measured signal and background events in the region of the prompt peak of the time spectrum (discussed in section <ref> and shown in figure <ref>). Finally, the performance of the ML-based models is compared with the conventional random background subtraction method.
§ APPLICATION OF THE MACHINE LEARNING-BASED METHOD
This section illustrates the application of the developed method for the reaction γ p → p π^0 measured with the Crystal Ball/TAPS setup at MAMI for the incoming photon energy range of 240 – 260 MeV. The simulated Monte Carlo sample, generated with the GEANT4-based <cit.> package A2Geant4 <cit.> used to create ML models, consists of 10^6 γ p → p π^0 events at E_γ = 240 - 260 MeV[At these energies, there is a significant amount of time-uncorrelated background remaining after the application of routinely used kinematic cuts such as the invariant mass cut, the coplanarity cut, and the opening angle cut.]. The second component used to create ML-based models is the experimentally measured random background sample, described in section <ref>.
§.§ Preparation of the input data
To select the γ p → p π^0 reaction, the following cuts were applied both to the simulated signal and the measured random background. At first, the events with two neutral particles (photon candidates) and one charged particle (proton candidate) were retained. The invariant mass of the two neutral particles had to agree with the nominal π^0 mass within 15 MeV. The difference between the azimuthal angles of the charged particle (proton candidate) and the π^0 (reconstructed from the neutral particles) had to fulfill the condition |ϕ_p - ϕ_π0| = 180^∘± 10^∘. The measured polar angle of the proton candidate was matched to the polar angle of the "missing particle" (calculated from the photons in the final and initial states) within 5^∘. Under these conditions, the events with the signature of the reaction γ p → p π^0 can be clearly identified.
After the event selection described above, an ML model was built using five selected input variables (features). Figure <ref> shows the comparison between simulated γ p → p π^0 sample (green) and experimentally measured time-uncorrelated background (gray) at E_γ = 240 - 260 MeV. These signal and background samples were used to build an ML-based model trained to distinguish between these two classes of events. In addition to the missing mass, the input variables (features) are the polar angle of π^0 (θ_π^0), the z-component of the missing momentum (P_z (miss)), the difference between the energy sum measured for the initial and final states (E_i - E_f), and the invariant mass of the pπ^0 pair (M_pπ^0).
§.§ Application of ensemble learning with boosted decision trees using CatBoost
The ML algorithm used in this work relies on ensemble learning with gradient boosting for decision trees, where the errors are reduced at each learning step based on the previous step. Generally, gradient boosting algorithms are well suited for solving classification and regression tasks with tabular data. We used the package CatBoost, which is one of the state-of-the-art gradient boosting algorithms, and utilizes symmetric decision trees <cit.>. The CatBoost-based models were created using the data sample shown in figure <ref> as an input.
To introduce a realistic ratio for the number of correlated and uncorrelated events in the region of the prompt peak, the ratio of simulated events to random background was chosen to match the ratio of random background to signal events in the prompt peak region (determined from the spectrum shown in figure <ref>). The simulated sample and the measured random background were mixed according to this ratio. Then, the data were randomly reshuffled and split in two parts. The first part containing 2/3 of the data was used to train the ML models, while the remaining 1/3 was used to test the model performance. The models showed high performance with the default settings provided by CatBoost, and were additionally improved by tuning the hyperparameters of the model [The following hyperparameters were tuned with the so-called Random Search method: number of iterations, learning rate, bagging temperature, random strength, and L2 regularization (for details on hyperparameters see ref. <cit.>).].
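A minimal sketch of the model construction is given below; the feature matrix X and labels y (1 = simulated signal, 0 = measured random background) are placeholders, the 2/3–1/3 split follows the text, and the hyperparameter values shown are illustrative rather than the tuned values used in the analysis.

```python
# Minimal sketch of the CatBoost-based signal/background classifier.
import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

X = np.random.default_rng(1).normal(size=(3000, 5))   # placeholder features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)          # placeholder labels

# 2/3 of the events for training, 1/3 for testing, as in the text.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)

model = CatBoostClassifier(iterations=500, learning_rate=0.05, depth=6,
                           l2_leaf_reg=3.0, bagging_temperature=1.0,
                           random_strength=1.0, verbose=False)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print("precision:", precision_score(y_test, y_pred),
      "recall:", recall_score(y_test, y_pred))
```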
The created models were at first used to separate the events simulated with GEANT4 from the random background.
Figure <ref> shows the correlations of the polar angle of the π^0 vs. the missing mass (top) and the invariant mass of the proton and pion (bottom) for both the training sample (left panels) and for the evaluation one as predicted by CatBoost-based model (right panels). Part of the background can be linearly separated, however a significant overlap between the two data sets is also present. The shapes of the time-correlated signal (green area in figure <ref>) are generally well-reproduced by the model. The corresponding values for the precision (number of true positives divided by the sum of true positives and false positives) and recall (number of true positives divided by the sum of true positives and false negatives) both for simulated sample and measured random background are summarized in table <ref> for different prompt peak cuts, corresponding to different levels of background contamination. The precision for the signal events varies between 97.5%, for the ratio of signal and background events at ≈ 48% and 95% when the amount of the background increases up to 111.5% (compared to the number of signal events). As described above, the expected ratios for the signal and background samples were determined for the prompt peak region of the time spectrum (see figure <ref>). In this case, the cut of 0 ± 2 ns allowed the inclusion of most of the events in the prompt peak, while the broader cuts were used to test the performance of the models in presence of different amounts of background[For each of the time cuts, a separate model was created, in order to take the ratios of background and signal into account in each case.].
Finally, the trained and evaluated model was used to distinguish between the time-correlated signal and uncorrelated background in the prompt peak (see figure <ref>). The performance of the Catboost-based model is compared with the conventional random background subtraction method in figure <ref> for the five variables used as input features for the model (shown in figure <ref>) and for the total energy of the neutral pion E_π^0, not used as an input for the model. The new and the conventional background handling methods show consistent results for most of the bins. The differences observed in the region corresponding to low missing mass (appearing in the other variables as well), are mainly due to the remaining differences between the simulation (used to build the model) and experimental data and can be improved by further fine-tuning of the simulation (dependent on the goals of the analysis). In the missing mass region above 925 MeV/c^2, the integrals of the distribution obtained with both methods agree on a sub-percent level.
It is important to note that the developed method is not only applicable for the variables used as input features to the model, but can also be used to predict distributions of other variables, which are correlated with the input features used to build the models, as can be seen
for the total energy of the neutral pion E_π^0 in the lower right panel of figure <ref>. Moreover, since the new method does not require subtraction of the random background, the resulting uncertainties are smaller compared to the uncertainties for the conventional method, even though the magnitude of the reduction depends strongly on the amount of background in the corresponding analysis.
In addition, the performance of the developed method and conventional background subtraction were compared at different levels of background contamination.
Figure <ref> shows the comparison between missing mass spectra corresponding to different prompt peak cut widths, resulting in different background contamination of the data. For each of these cases, corresponding to different prompt peak cuts, CatBoost-based models were built separately, taking into account the expected ratio of the correlated signal to uncorrelated background (dependent on the width of the selected time window). Generally, the comparison of the results obtained with the new ML-based approach with the conventional random background subtraction indicates stable model performance at significantly different levels of random background.
The differences at low missing mass values, as mentioned above, can be additionally suppressed (dependent on the goals of the analysis) by further reduction of differences between GEANT simulation and experimental data (used to build the ML-based models). The integrals shown in table <ref> for the new and conventional analysis methods are in agreement within ≈ 1% for the range missing mass range of 925 - 960 MeV/c^2.
§ SUMMARY
A newly developed Machine Learning-based method for the selection of the time-correlated signal at tagged photon facilities is presented. The application of this method allows the precision of experiments to be improved in cases where the conventional sampling and subtraction of the uncorrelated background restricts the accuracy of the measurements. Moreover, the developed method preserves the information about the correlations of the variables for individual events, in contrast to the standard subtraction method. The new method shows stable performance in handling data with different levels of background contamination. One of the future applications of this method will be the analysis of the Compton scattering data taken with hydrogen and light nuclear targets in order to improve the accuracy of the extraction of the scalar and spin polarizabilities of the nucleons.
The data used in this work were taken by the A2 Collaboration with
the Crystal Ball/TAPS setup at MAMI in March 2018.
99
bib:Jankowiak:2006
A. Jankowiak, The Mainz Microtron MAMI —Past and future, Eur. Phys. J. A 28S1, 149-160 (2006).
bib:Kaiser:2008
K. H. Kaiser et al., The 1.5 GeV harmonic double-sided microtron at Mainz University Nucl. Instrum. Meth. A 593, 159-170 (2008).
bib:Mornacchi:2021
E. Mornacchi, Ph.D. Thesis, University of Mainz,
http://doi.org/10.25358/openscience-6051doi:10.25358/openscience-6051 (2021).
bib:McGeorge:2007
J. C. McGeorge et al., Upgrade of the Glasgow photon tagging spectrometer for Mainz MAMI-C, Eur. Phys. J. A 37, 129-137 (2008).
bib:Nefkens:1995
B. M. K. Nefkens, The Crystal Ball Technical Report 1, UCLA (1995).
bib:Gabler:1994
A. Gabler et al., Response of TAPS to monochromatic photons with energies between 45 MeV and 790 MeV, Nucl. Instrum. Meth. A 346, 168–176 (1994).
bib:Novotny:1998
R. Novotny, Performance of the BaF-2 calorimeter TAPS, Nucl. Phys. B Proc. Suppl. 61, 137–142 (1998).
bib:Unverzagt:2008
M. Unverzagt et al., Determination of the Dalitz plot parameter α for the decay η → 3π^0 with the Crystal Ball at MAMI-B, Eur. Phys. J. A 39, 169-177 (2009).
bib:Geant4
S. Agostinelli et al., Geant4 - A Simulation Toolkit, Nucl. Instrum. Meth. A 506, 250-303 (2003).
bib:A2Geant4
A2Geant4 - Official GitHub repository,
https://github.com/A2-Collaboration/A2Geant4github.com/A2-Collaboration/A2Geant4.
bib:catboost
CatBoost - open-source gradient boosting library,
https://catboost.ai/https://catboost.ai/.
bib:catboost:arx
Liudmila Prokhorenkova, Gleb Gusev, Aleksandr Vorobev, Anna Veronika Dorogush, Andrey Gulin,
CatBoost: unbiased boosting with categorical features, arXiv:1706.09516 (2017).
|
http://arxiv.org/abs/2307.04391v1 | 20230710075459 | Vehicle Detection in 6G Systems with OTFS Modulation | [
"Pavel Karpovich",
"Tomasz P. Zielinski"
] | cs.IT | [
"cs.IT",
"eess.SP",
"math.IT",
"H.1.1"
] |
Accepted for Konferencja Radiokomunikacji i Teleinformatyki KRiT-2023, Krakow 2023 (author's version)
VEHICLE DETECTION IN 6G SYSTEMS WITH OTFS MODULATION
Pavel Karpovich ^1,2;
Tomasz P. Zielinski^2;
^1 Institute of Telecommunications AGH, Krakow, [email protected], [email protected]
^2 Nokia Solutions and Networks, Krakow, [email protected]
The recently introduced orthogonal time frequency space modulation (OTFSM) is more robust to large narrow-band Doppler frequency shift than the orthogonal frequency division multiplexing (OFDM), used in the 5G standard. In this paper it is shown how the telecommunication OTFSM-based signal with random padding can be used with success in the 6G standard for detection of high-speed vehicles. Two approaches for detecting targets during the random padded OTFS based transmission are compared in the paper.
5G, 6G, OFDM, OTFSM, radar.
§ INTRODUCTION
In last few years, the scientific community attention has been focused on the discussion of next generation 6G communication. There are a lot of publications about what applications will drive the 6G network and what technologies should be included in the 6G standard to satisfy their requirements <cit.> <cit.>. Among large number of proposals, there are some that are most common, such as a terahertz wave and an integrated sensing and communication (ISAC) <cit.> <cit.>. This paper addresses a problem of adding a radar functionality to the communication systems of the future which will use higher frequency carriers and support high-mobility users.
The usage of the terahertz band is challenging. Even relatively slow objects can generate very high Doppler frequency shifts. The strong Doppler effect limits the usage of the orthogonal frequency division multiplexing (OFDM) waveform, which is at present the de facto standard waveform in telecommunication systems (e.g. DVB-T2, Wi-Fi, LTE, 5G <cit.>). OFDM is based on the assumptions that the linear convolution of the signal and the channel impulse response can be replaced by a circular convolution, and that the channel impulse response is time-invariant or almost time-invariant. This allows very fast and simple channel impulse response estimation. In a strong Doppler environment the assumption of a constant channel impulse response is no longer valid, since every channel coefficient can rotate in the complex plane due to the Doppler effect. Using OFDM in such conditions leads to errors in channel estimation and equalization, and eventually to inter-carrier interference (ICI) and subsequently to errors in bit detection.
Increasing the sub-carrier spacing (SCS) in OFDM helps to deal with strong Doppler frequency shifts. However, this operation also increases the OFDM cyclic prefix overhead and reduces transmission efficiency <cit.>. In order to eliminate the mentioned disadvantage of OFDM, the orthogonal time frequency and space (OTFS) modulation was recently introduced in <cit.>. Due to its unique features it is seriously considered as one of the possible 6G waveforms <cit.>.
In this article simulation results for an ISAC system using the OTFS waveform are shown. We will start with the OTFS waveform description, present the delay-Doppler domain used in OTFS and discuss different pilot configurations exploited in it. Next, we will introduce the ISAC system using the OTFS waveform. Finally, in experimental part, we will show results from simulation of a radar part of the discussed RP-OTFS-based ISAC system.
In work <cit.> results from simulation of the communication part of the RP-OTFS transmission system were presented while this paper addresses simulation of the radar part of the system only. Practical verification of the general RP-OTFS based transmission and sensing concept was already presented in <cit.>.
§ ORTHOGONAL TIME FREQUENCY AND SPACE
The concept of the OTFS is shown in figure <ref> <cit.> <cit.>. In comparison to OFDM, the OTFS is a two-dimensional modulation technique. In case of OTFS the modulation process looks as follows. At the beginning, modulated IQ/QAM symbols are put into the elements of the matrix A in figure <ref>, i.e. onto the grid in the delay-Doppler (DD) domain. Then, the inverse Zak transform (inverse Fourier transform over the Doppler axis) <cit.> is used to transform the data from the DD to the fast time - slow time (TT) domain. Finally, the obtained samples are reshaped from a matrix into a vector. The use of the DD grid for data modulation makes the OTFS waveform attractive for ISAC since it is the “native” domain for radars.
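A sketch of this DD-domain modulation step is given below: QAM symbols are placed on an M × N DD grid, transformed by an inverse FFT along the Doppler axis, and reshaped into a transmit vector; pulse shaping, cyclic prefixes and the pilot zone are omitted, and the grid sizes are illustrative.

```python
# Sketch of the OTFS modulation step: 4-QAM symbols on an M x N delay-Doppler
# grid, inverse Zak transform (IFFT over the Doppler axis), then reshaping
# into a serial transmit stream. Pulse shaping and pilots are omitted.
import numpy as np

M, N = 64, 256                       # delay and Doppler grid sizes
rng = np.random.default_rng(0)
qam = (rng.integers(0, 2, (M, N)) * 2 - 1
       + 1j * (rng.integers(0, 2, (M, N)) * 2 - 1)) / np.sqrt(2)

tt_grid = np.fft.ifft(qam, axis=1)   # inverse Zak: IFFT over the Doppler axis
tx = tt_grid.reshape(-1, order="F")  # column-wise reshape into a serial stream

# Demodulation inverts the steps: reshape, then FFT along the slow-time axis.
dd_hat = np.fft.fft(tx.reshape(M, N, order="F"), axis=1)
print(np.allclose(dd_hat, qam))      # True
```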
§.§ The delay-Doppler grid
Names of the DD grid directions reflect their physical sense. The delay direction (the first D in DD) consists of adjacent samples from the time domain, so the Δ t between samples is small. This direction is suitable for detecting small time changes in the observed signal. For example, in a multi-path propagation environment, the difference between paths is not very large and can be estimated in the delay direction of the DD grid. However, the delay direction is not suitable for observing long-time processes like the Doppler effect, because Doppler frequencies are usually very small and require more time for estimation. In the Doppler direction (the second D of DD) only every Mth sample from the time domain is used, which allows long-time signal changes to be estimated using FFTs of small sizes.
Parameters of the DD grid should be chosen taking into account that the transmitted OTFS-modulated waveform will be used both for digital data transmission and for detection of moving vehicles. As sources of multi-path reflections can be treated as “radar targets”, the OTFS DD grid should fulfill both telecommunication and radar requirements. The DD grid has two parameters: M — the number of samples in the delay (fast time) direction, and N — the number of samples in the Doppler direction. Looking at figure <ref>, we can say that in the Doppler direction the signal is practically decimated by M. Hence, taking into account the Nyquist theorem, the maximum Doppler offset that can be estimated using such a DD grid is f_d max = ± f_s / (2M), where f_s is the sampling rate. Resolution in the Doppler direction depends on N: increasing N while keeping f_s and M constant will increase the FFT length and the Doppler resolution. The resolution in the delay direction depends only on f_s. By choosing f_s, M, N and the carrier frequency f_c one can optimize the OTFS-based radar and digital transmission.
For example, let us choose the DD grid parameters for radar detection of many moving cars (reflections from stationary objects are not of interest here). For a maximum car speed of 60 m/s, carrier frequency f_c = 52.6 GHz (the maximum carrier frequency for 5G FR2), sampling rate f_s = 50 MHz and maximum M = 1190, we can assume that the maximum Doppler frequency shift is equal to 21 kHz. Then, by fixing M = 1024 and changing N, one can obtain different velocity estimation resolutions, ranging from 9 m/s (for N = 8) to 0.1 m/s (for N = 512), where N = 8 and N = 512 are exemplary values.
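The numbers in this example can be reproduced with a few lines, assuming the two-way (monostatic) Doppler relation f_d = 2 v f_c / c:

```python
# Reproducing the worked example above, assuming the two-way Doppler relation
# f_d = 2 * v * f_c / c for a monostatic radar geometry.
c = 3e8               # speed of light, m/s
f_c = 52.6e9          # carrier frequency, Hz
f_s = 50e6            # sampling rate, Hz
v_max = 60.0          # maximum car speed, m/s

f_d_max = 2 * v_max * f_c / c            # ~21 kHz maximum Doppler shift
M_max = f_s / (2 * f_d_max)              # ~1190 delay bins still satisfy Nyquist
print(f"f_d_max = {f_d_max / 1e3:.1f} kHz, M_max = {M_max:.0f}")
```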
§.§ Pilots configurations
As OFDM, the OTFS uses pilots for estimation of a channel impulse response (CIR). Their configurations are different. Here we will discuss two types of pilot placement strategies, shown in figure <ref>: a zero-padded one (ZP-OTFS) and a random-padded one (RP-OTFS). In both configurations the DD matrix A is divided into two parts: the data zone and the pilots zone. Every carrier in the DD grid is assigned to the pilot or data zone only, not to both of them the same time.
§.§.§ ZP-OTFS
In the ZP-OTFS, the pilot has the form of a rectangular zone of the DD matrix A, shown in figure <ref>, which is filled with zeros and has only one non-zero carrier at its center. We will call this non-zero carrier a pilot pulse. In case of the ZP-OTFS, the length of the pilot zone in the delay direction is twice the length of the channel impulse response. In the Doppler direction the pilot zone usually makes use of all cells, as shown in figure <ref>.
Due to the zeros surrounding the pilot pulse, the channel estimation process becomes very simple in the ZP-OTFS. There is also no interference between pilot and data zones as well as no ZP-OTFS symbol interference. In case of ZP-OTFS the channel impulse response is estimated by dividing every cell of the received pilot zone by the known, transmitted pilot pulse (a threshold should be introduced here so that reflection-free samples of the pilot zone are not mistaken for channel taps). The main disadvantage of the ZP-OTFS is low energy efficiency, because the pilot zone is very sparse.
§.§.§ RP-OTFS
The recently introduced RP-OTFS <cit.> <cit.> is designed to correct the deficiencies of the ZP-OTFS. Here the pilot zone is filled with short OFDM symbols, treated as pilots, with random data inside — see figure <ref>. In case of the ZP-OTFS, discussed earlier, the data zone is generated in the DD domain and transformed to the fast-slow time domain by the inverse Zak transform. In turn, in case of the RP-OTFS, the OFDM symbols of the pilots are inserted directly into the fast-slow time grid (without the inverse Zak transformation). The absence of zeros in the pilot zone increases the signal-to-noise ratio (SNR) and makes the RP-OTFS more efficient than the ZP-OTFS in CIR estimation, which is very important both for communication and for radar.
The CIR estimation begins with conventional OFDM channel estimation, with the only difference that we treat the whole OFDM symbol as a pilot. After that, when all momentary CIR estimates have been found using all OFDM symbols (having the transmitted and received pilots, one can easily estimate the CIR taps from them), we transform the matrix of CIR taps to the DD domain by the Zak transform, i.e. by performing FFTs over the rows of the CIR matrix. Note that in the RP-OTFS the Zak transform is performed upon the CIR estimates, not upon the time samples of the OFDM symbols which were used for the CIR calculation.
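A sketch of this pilot-based processing is given below: per-symbol frequency-domain channel estimation from known OFDM pilots, followed by an FFT over the slow-time (symbol) axis of the CIR-tap matrix to obtain the delay-Doppler map. Synchronization and CP removal are assumed to be done already, and the toy target parameters are illustrative.

```python
# Sketch of the pilot-based RP-OTFS radar processing: estimate the CIR from
# each known OFDM pilot symbol, then take an FFT over the slow-time axis of
# the CIR-tap matrix (Zak transform) to resolve the Doppler of each tap.
import numpy as np

def dd_map_from_pilots(rx_symbols, tx_pilots):
    """rx_symbols, tx_pilots: (n_symbols, n_fft) time-domain OFDM pilot symbols."""
    RX = np.fft.fft(rx_symbols, axis=1)
    TX = np.fft.fft(tx_pilots, axis=1)
    H = RX / TX                              # per-symbol frequency response
    cir = np.fft.ifft(H, axis=1)             # per-symbol CIR taps (delay axis)
    return np.fft.fftshift(np.fft.fft(cir, axis=0), axes=0)

# Toy example: one reflector at delay tap 5 with a Doppler phase rotation.
n_sym, n_fft, f_d, t_sym = 256, 64, 500.0, 1e-4
tx = np.exp(2j * np.pi * np.random.default_rng(0).random((n_sym, n_fft)))
rx = np.empty_like(tx)
for s in range(n_sym):
    rx[s] = np.roll(tx[s], 5) * np.exp(2j * np.pi * f_d * s * t_sym)
dd = dd_map_from_pilots(rx, tx)
dop, dly = np.unravel_index(np.abs(dd).argmax(), dd.shape)
print("delay tap:", dly, "Doppler bin:", dop - n_sym // 2)
```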
There are two disadvantages of the RP-OTFS. Firstly, the length of the cyclic prefix (CP) of the OFDM-based pilot should be equal to the OFDM symbol length, i.e. it is long, and the CP overhead reduces the achievable bit-rate. Secondly, we assume that the CIR is quasi time-invariant and, therefore, long OFDM pilots cannot be used for channels with very high Doppler frequencies.
§ INTEGRATED SENSING AND COMMUNICATION (ISAC)
In ISAC <cit.>, <cit.>, the communication processing is usually the same as in a conventional system. In this paper we concentrate on the peculiarities of RP-OTFS radar processing, since the efficiency of the RP-OTFS-based communication sub-system has already been tested <cit.>, <cit.>. Two approaches to target detection are analyzed: correlation-based and pilot-based.
The first, correlation-based method originates from classical radar processing in which a cross-ambiguity function (CAF) is used <cit.>: the transmitted reference signal (known, re-modulated in the receiver or acquired by a dedicated reference antenna) is shifted in time and frequency and correlated with the received surveillance signal. The problem with the correlation-based radar approach is that it is usually hard to find weak reflections, coming from small moving objects, against the background of strong reflections caused by buildings (the radar clutter problem) <cit.>.
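For reference, a direct (unoptimized) Python implementation of the cross-ambiguity function is sketched below; practical systems compute it with FFT-based batching, and the circular shift of the reference is a simplification used only to keep the sketch short.

```python
import numpy as np

def cross_ambiguity(ref, surv, f_s, n_delays, doppler_freqs):
    """ref: re-modulated reference signal; surv: surveillance signal (same length, complex);
    f_s: sampling rate [Hz]; n_delays: number of delay bins; doppler_freqs: Doppler hypotheses [Hz]."""
    n = len(ref)
    t = np.arange(n) / f_s
    caf = np.zeros((n_delays, len(doppler_freqs)), dtype=complex)
    for tau in range(n_delays):                 # delay hypothesis in samples
        ref_delayed = np.roll(ref, tau)         # circular shift: simplification for this sketch
        prod = surv * np.conj(ref_delayed)
        for k, fd in enumerate(doppler_freqs):  # Doppler-shift hypothesis
            caf[tau, k] = np.sum(prod * np.exp(-2j * np.pi * fd * t))
    return np.abs(caf)                          # delay-Doppler (distance-velocity) map
```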
In the second, pilot-based approach to vehicle detection, the transmitted pilots, known in the receiver, are used for CIR estimation <cit.>. In the case of reflections coming from moving vehicles, some CIR taps are complex-valued numbers that oscillate in time with the Doppler frequency shift caused by the movement of the reflecting object. Here, we treat radar targets as sources of multi-path propagation. By CIR analysis we can retrieve information about the signal reflections and about the reflecting objects.
The pilot-based ISAC system requires undistorted CIR estimates for the extraction of Doppler frequency shifts. As mentioned in the introduction, high-Doppler objects cannot be detected by OFDM. This also limits the applicability of pilot-based radars making use of OFDM-based pilots.
§ EXPERIMENTAL PART
In the experimental part we simulated the radar performance of the discussed RP-OTFS-based ISAC system. The parameters of the applied OTFS-based signal were the following: grid size in the delay and Doppler directions 64x256 (MxN), pilot-zone length L = 16 (the meaning of L is explained in fig. <ref>), modulation 4-QAM, carrier frequency 4 GHz and bandwidth 20 MHz. In the simulation we used different target velocities in order to test the system performance under different conditions.
Delay-Doppler (distance-velocity) radar maps for a target moving with a velocity of about 139 m/s (500 km/h), calculated for both tested radar approaches (correlation-based and pilot-based), are shown in figure <ref>. In both methods an integration/observation time of 100 milliseconds was used. The input signal had a signal-to-noise ratio (SNR) of 0 dB. In both cases one can clearly see sharp peaks in the delay-Doppler (distance-velocity) matrix which correspond to the parameters of the moving vehicles. However, in the CAF two additional, lower peaks are visible, generated by the CP of the pilot part of the RP-OTFS waveform. Since in the pilot-based approach the CP is eliminated from the signal processing chain, such peaks are absent from the DD map of this method. For the correlation-based radar the mean level of the background side-lobes surrounding the detection peak is about -30 dB, while for the pilot-based radar it is about -40 dB.
Figures <ref> and <ref> show processing-gain charts for both discussed RP-OTFS-based radars, i.e. the root-mean-square (RMS) value of the method's noise floor (visible in figure <ref>), expressed in decibels, as a function of the SNR of the input signal. The simulated maximum vehicle speed (v_m) was equal to 50 km/h (13.9 m/s) or 500 km/h (139 m/s), and the integration/observation time (T_i) varied from 10 ms to 200 ms. In figure <ref> both tested RP-OTFS-based radars are compared: it is seen that the pilot-based version outperforms the correlation-based one in DD detection peak height, i.e. in noise robustness.
§ DISCUSSION
The main limiting factor of the correlation-based RP-OTFS radar is the high level of CAF side-lobes, resulting in a significantly lower output SNR in comparison to the pilot-based radar. Figures <ref> and <ref> confirm the quantitative conclusions that can be drawn from figure <ref>.
As mentioned before, in the development of the discussed RP-OTFS-based ISAC system we have assumed that the channel impulse response is quasi time-invariant within the pilot zone. In high-mobility Doppler channels this assumption is fulfilled only approximately, which limits the maximum processing gain of the presented pilot-based RP-OTFS radar. The consequences of this drawback grow for higher velocities, as is visible in fig. <ref>. The same effect is also observed when the pilot zone length is increased. Nevertheless, the obtained results confirm that the pilot-based radar outperforms the correlation-based one in terms of noise robustness.
§ CONCLUSION
Two moving-vehicle detection approaches based on the RP-OTFS ISAC system were compared in this paper. The main limiting factor of the correlation-based radar method is the high level of CAF side-lobes, in addition to the two extra CAF peaks caused by the repetition of the pilot samples. Detecting targets with a low radar cross-section against a strong background signal, the so-called clutter (e.g. the direct-path signal), is very challenging here. The presence of many ghost peaks in the delay-Doppler (distance-velocity) map also makes the subsequent processing steps of this method very challenging.
In turn, the pilot-based RP-OTFS radar is characterized by a lower level of side-lobes in the delay-Doppler map and does not exhibit the extra peaks caused by the repeated pilot samples. However, this approach is sensitive to the quality of the channel impulse response estimation. In order to minimize the error of the channel impulse response estimate, and consequently the error of moving-object detection, the pilot zone needs to be kept as short as possible.
6G_harsh
H. Tataria et al., "6G Wireless Systems: Vision, Requirements, Challenges, Insights, Opportunities," Proc. IEEE, vol. 109, no. 7, pp. 1166-1199, July 2021.
6G_vision
W. Saad, M. Bennis and M. Chen, "A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems," IEEE Network, vol. 34, no. 3, pp. 134-142, May/June 2020.
isac1
F. Liu et al., “Integrated Sensing and Communications: Toward Dual-Functional Wireless Networks for 6G and Beyond,” IEEE J. on Selected Areas in Comm., vol. 40, no. 6, pp. 1728-1767, June 2022.
isac2
Z. Wei et al., “Integrated Sensing and Communication Signals Towards 5G-A and 6G: A Survey,” IEEE Internet of Things Journal, early access, 2023.
ofdm_numerology
Josue Flores de Valgas, Jose F. Monserrat, Hüseyin Arslan, "Flexible Numerology in 5G NR: Interference Quantification and Proper Selection Depending on the Scenario", Mobile Information Systems, vol. 2021, Article ID 6651326, 9 pages, 2021.
otfs1
R. Hadani et al., "Orthogonal Time Frequency Space Modulation," 2017 IEEE Wireless Comm. and Networking Conf. (WCNC), San Francisco, CA, USA, 2017, pp. 1-6, 2017.
otfs2
Z. Wei at al., “Orthogonal Time-Frequency Space Modulation: A Promising Next-Generation Waveform,” IEEE Wireless Comm., vol. 28, iss. 4,
pp. 136-144, 2021.
my_rp1
P. Karpovich and T. P. Zielinski, "Random-Padded OTFS Modulation for Joint Communication and Radar/Sensing Systems," 2022 23rd Int. Radar Symp. (IRS), pp. 104-109, Gdansk 2022.
my_rp2
P. Karpovich et al., “Field Tests of a Random-Padded OTFSM Waveform in a Joint Sensing and Communication System,” IEEE ICC Int. Communications Conf., Rome 2023.
zak
H. Bolcskei and F. Hlawatsch, "Discrete Zak transforms, polyphase transforms, and applications," in IEEE Trans. on Signal Processing, vol. 45, no. 4, pp. 851-866, April 1997.
radar
M.A. Richards, “Fundamentals of Radar Signal Processing,” McGraw-Hill Education, 2014.
my_dvbt2
P. Karpovich et al., "Practical Results of Drone Detection by Passive Coherent DVB-T2 Radar," 21st Int. Radar Symp. (IRS), pp. 77-81, Warsaw 2020.
ofdm_base_radar
M. Braun et al., "Parametrization of joint OFDM-based radar and comm. systems for vehicular applications," 2009 IEEE Int. Symp. on Personal, Indoor & Mobile Radio Comm., pp. 3020-3024, Tokyo 2009.
|
http://arxiv.org/abs/2307.06297v1 | 20230712165113 | The dilemma of voluntary quarantine: insights from coupled dynamics of disease spreading and adaptive quarantine | [
"Simiao Shi",
"Xingru Chen"
] | physics.soc-ph | [
"physics.soc-ph",
"q-bio.PE"
] |
The dilemma of voluntary quarantine: insights from coupled dynamics of disease spreading and adaptive quarantine
1st Simiao Shi
School of Science
Beijing University of Posts and Telecommunications
Beijing, 100876, China
Key Laboratory of Mathematics and Information Networks
(Beijing University of Posts and Telecommunications)
Ministry of Education, China
[email protected]
2nd Xingru Chen
School of Science
Beijing University of Posts and Telecommunications
Beijing 100876, China
Key Laboratory of Mathematics and Information Networks
(Beijing University of Posts and Telecommunications)
Ministry of Education, China
[email protected]
August 12, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Compared with mandatory quarantine, voluntary quarantine offers individuals the liberty to decide on whether to isolate themselves or not in case of infection exposure, driven by their own sense of responsibility and concern for public health. To tackle this problem, we perform agent-based simulations to study the coupled dynamics of disease spreading and the adaptive behavior of quarantine. We show that voluntary quarantine can be an effective intervention measure to mitigate the transmission of the disease. Moreover, we discuss the roles played by the level of temptation to refrain from quarantine and the degree of social compassion in reducing the risk of spread and controlling the outbreak. We find that a low level of temptation or a high degree of social compassion not only can lead to the amelioration of the disease impact to some extent but also can result in a complete containment of the epidemic. By shedding light on the intricate interplay between disease spreading and voluntary quarantine, our results provide new insights into the fundamental social dilemma aspect of disease control through nonpharmaceutical interventions including voluntary quarantine.
infectious disease, voluntary quarantine, adaptive learning, social compassion
§ INTRODUCTION
As a critical measure in public health, quarantine plays a significant role in preventing the spread of infectious diseases and mitigating their impact on society <cit.>, <cit.>. In the wake of the pervasive spread of the COVID-19 pandemic, disparate regions across the globe have ushered in a diverse array of quarantine policies <cit.>. These policies encapsulate a multifaceted tapestry of containment strategies, ranging from stringent lockdown measures that entail the seclusion of entire populations within the confines of their homes <cit.>, to targeted quarantines that aim to isolate specific individuals or groups suspected of exposure <cit.>, <cit.>. However, the imposition of these restrictive measures has sometimes fueled public anger, as individuals grapple with the perceived infringement upon their rights and personal freedoms. Moreover, the profound disruption of daily routines, isolation from loved ones, and uncertainty about the future can exacerbate feelings of anxiety, stress, and despair <cit.>. In extreme cases, individuals may experience heightened vulnerability to mental health challenges <cit.>, including an increased risk of suicidal tendencies <cit.>.
Top-down interventions of this kind are imposed by authorities and enforce compliance. In contrast, bottom-up interventions rely on personal choice and responsibility, empowering individuals to take proactive steps to protect themselves and others. While susceptible individuals are prone to quarantine themselves out of fear of getting infected, infected individuals may also choose to sequester themselves to avert the potential transmission of the disease to others. Compared with mandatory quarantine, voluntary quarantine allows for more adaptability to diverse contexts and hence the potential for sustained behavioral changes beyond immediate interventions. Based on this premise, people exhibit a greater inclination to devote their time, energy, and resources toward endeavors that foster physical and mental well-being, disease prevention, and public cooperation. The corresponding human-disease interaction is often studied through the lens of coupled dynamics of epidemic spreading and adaptive decisions <cit.>, <cit.>.
Among all the factors that influence what decisions an individual makes for their health behavior, economic status plays a key role. Moreover, the economic ramifications of quarantine warrant significant attention <cit.>, <cit.>. As most previous studies have looked merely at the economic losses of isolation measures to different groups of people in society <cit.>, the economic effects on specific individuals need to be further addressed to gain a comprehensive understanding of the feedback between infection and voluntary intervention. When individuals, whether healthy or infected, choose to quarantine themselves, they can minimize the risk of contracting or spreading the disease. This proactive decision not only helps to minimize the transmission of the epidemic but also mitigates the psychological burden borne by infected individuals. On the other hand, the choice to quarantine oneself entails a necessary disconnection of economic contact with one’s neighbors, resulting in an inevitable loss during the quarantine period.
During an epidemic, self-interested individuals offered the option of voluntary quarantine are going to carefully weigh the health risks and economic losses and make adaptive choices between quarantine or non-quarantine. The size of the epidemic will determine the infection risks and thus influence the individual quarantine decision. The collective quarantine level, the other round, will depend on the severity of the transmission, better or worse. To shed light on this dilemma, we consider compartmental models of infectious diseases and perform agent-based simulations in populations with changeable spatial structures. By comparing the payoff and cost during a possible quarantine period, an individual can make and adjust their choice of whether to quarantine or not. We also discuss how different levels of temptation and degrees of social compassion can influence the possible dynamics of this human-disease system.
§ MODEL & METHODS
Our study describes the coupled dynamics of disease spreading and voluntary quarantine in a well-mixed population. Each individual is tagged by both their health status and social status. The former is decided by the specific disease (susceptible, infected, recovered, to name a few) and the latter is dependent on the quarantine decision (quarantine or non-quarantine). Based on a given compartmental model, we perform agent-based simulations to investigate the intricate interplay between biological contagion and adaptive learning and further, the roles played by different factors in the trajectory of the outbreak. Our framework can be extended to accommodate populations characterized by real network structures and intervention measures apart from quarantine. Its flexibility facilitates a comprehensive understanding of the social dilemma of voluntary pharmaceutical or non-pharmaceutical interventions, a so-called tragedy of the commons in the context of public health events.
§ MATHEMATICAL MODEL
§.§ epidemiology
To quantify the spread of the infectious disease, we use the susceptible-infected–susceptible (SIS) model <cit.>. The population is assigned to two compartments, susceptible (S) and infected (I), and individuals may progress between them. The SIS model can be mathematically represented by the following set of ordinary differential equations (ODEs):
dS/dt = - β S I + γ I,
dI/dt = β S I - γ I.
Here, the two variables S and I stand for the fractions of susceptible and infected individuals in each compartment. Besides, the two parameters β and γ are the transmission rate and the recovery rate of infected individuals.
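For illustration, the ODE system above can be integrated with a simple forward-Euler scheme; the parameter values in the following sketch are placeholders, not those used in the paper.

```python
import numpy as np

def sis_trajectory(beta, gamma, I0, T, dt=0.01):
    """Forward-Euler integration of dI/dt = beta*S*I - gamma*I with S = 1 - I."""
    steps = int(T / dt)
    I = I0
    traj = np.empty(steps)
    for t in range(steps):
        S = 1.0 - I
        I += dt * (beta * S * I - gamma * I)
        traj[t] = I
    return traj

infected_fraction = sis_trajectory(beta=0.4, gamma=0.1, I0=0.01, T=100)
```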
§.§ Learning dynamics
Under the policy of voluntary quarantine, individuals can make or reassess their quarantine decisions during the epidemic out of self-interest. They are allowed to update their strategies by carefully weighing the potential payoff and cost during a quarantine period. For a focal individual, the payoff is contingent upon social interactions with their neighbors, whereas the cost is related not only to the health status of the neighbors but also to their own. More specifically, if an individual opts not to undergo quarantine, their payoff arises from engaging in economic activities with their neighbors. Meanwhile, carrying out economic activities incurs certain potential costs, including the fear of contracting the disease and of potentially infecting others.
From an analytical perspective, we adopt a benchmark of zero as the aspiration level, whereby a positive net payoff indicates a decision of non-quarantine. Conversely, a negative net payoff suggests a need for quarantine. Let f denote the probability of quarantine. For an individual with perfect rationality, we have
f = {
0, if B - C > 0,
1, if B - C < 0.
Here, B and C are the payoff and cost of the individual at the moment. For an individual with imperfect rationality (a scenario we will dive into later), we use the Fermi function to calculate the probability:
f(K) = 1/(1 + exp[ (B-C)/K ]).
The parameter K is known as the strength of selection, which determines the degree of stochasticity in the evolutionary process. As the value of K approaches zero, the Fermi function approaches either zero or one, indicating that the agent will reach a stable state with one strategy dominating over the other. And (3) will degenerate into (2). On the other hand, as the value of K increases towards infinity, the Fermi function approaches 1/2, indicating that the agent will become more random and their decision between the two strategies is equivalent to flipping a coin.
In general, individuals tend to prefer the option with a higher total payoff and avoid that with a higher total cost in deciding whether to quarantine or not. In other words, they are less likely to quarantine when the total payoff exceeds the total cost, and vice versa. It is worth pointing out we do not introduce social contagion, namely, imitations between individuals in the current work.
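For concreteness, the two decision rules (2) and (3) can be written as a single helper function (a minimal sketch; the aspiration level is fixed at zero as described above).

```python
import numpy as np

def quarantine_probability(B, C, K=None):
    """B: total payoff of not quarantining; C: total cost;
    K: selection strength (None = perfect rationality)."""
    if K is None:                                  # rule (2): deterministic threshold at zero net payoff
        return 1.0 if (B - C) < 0 else 0.0
    return 1.0 / (1.0 + np.exp((B - C) / K))       # rule (3): Fermi function
```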
§ AGENT-BASED SIMULATIONS
We perform agent-based simulations in a population consisting of N individuals. For the time being, we work on a complete graph, which can be further extended to any given network. To initialize the disease spreading, we randomly select a fraction ε of the population to be the patient zeros. In each iteration, an individual is randomly picked: with probability p, their health status will be updated, and with probability 1-p, their social status instead.
To renew the health status, we now consider a discrete SIS model. For a given susceptible individual, the probability of being infected is β̃ = β k_i,nq/k, where k_i,nq is the number of infected neighbors who are not undergoing quarantine and k is the total number of neighbors. We let β, γ ∈ (0,1) in our simulation.
To renew the social status, we need to figure out the expected payoff and cost of a given individual. The total payoff of an individual who is not quarantined is B = b k_nq, where b is the payoff of the interaction with one neighbor during a quarantine period and k_nq is the number of neighbors who are not quarantined. Meanwhile, the total cost of an individual is related to their current health status. If the focal individual is susceptible, the cost will be the risk of being infected. However, if the focal individual is already infected, the cost will be the psychological burden of potentially infecting other susceptible neighbors. Accordingly, the total cost of a susceptible individual who is not quarantined is C = c k_i,nq, where c is the cost of being infected. And the total cost of an infected individual who is not quarantined is C = r c k_s,nq, where k_s,nq is the number of susceptible neighbors who are not undergoing quarantine. In particular, we introduce the parameter r to represent the degree of social compassion. The larger r is, the more an individual prioritizes the welfare of others.
More detailed illustrations are given in Algorithm <ref> and the notations used throughout the paper are listed in Table <ref>. Without loss of generality, we assume that the total time T and the quarantine period D are both discrete time steps and share the same unit, based on the specific disease under consideration. Moreover, for ease of presentation and calculation, we set the cost of being infected c = 1. We conduct the same simulation multiple times (400 repetitions) to reduce errors.
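A self-contained sketch of one simulation run on the complete graph, with perfect rationality, is given below. All parameter values are illustrative, and the assumption that a quarantined susceptible individual cuts their contacts and cannot be exposed is ours (the text only specifies the infection probability through non-quarantined infected neighbors).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(N=1000, T=100_000, beta=0.6, gamma=0.2, b=0.3, c=1.0, r=0.1, p=0.5, eps=0.05):
    infected = rng.random(N) < eps            # health status (True = infected); initial patient zeros
    quarantined = np.zeros(N, dtype=bool)     # social status (True = quarantined)
    k = N - 1                                 # complete graph: all others are neighbours
    for _ in range(T):
        i = rng.integers(N)
        others = np.arange(N) != i
        inf_nq = infected[others] & ~quarantined[others]     # infected, non-quarantined neighbours
        if rng.random() < p:                                 # update health status
            if infected[i]:
                if rng.random() < gamma:
                    infected[i] = False                      # recovery
            elif not quarantined[i] and rng.random() < beta * inf_nq.sum() / k:
                infected[i] = True                           # infection (assumption: quarantine protects)
        else:                                                # update social status
            B = b * np.sum(~quarantined[others])             # payoff from non-quarantined neighbours
            if infected[i]:
                C = r * c * np.sum(~infected[others] & ~quarantined[others])   # compassion cost
            else:
                C = c * inf_nq.sum()                         # infection-risk cost
            quarantined[i] = (B - C) < 0                     # perfect-rationality rule (2)
    return infected.mean(), quarantined.mean()
```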
§ RESULTS
§.§ voluntary quarantine as an effective intervention measure
If the population is allowed to embrace quarantine, the size of the epidemic will be reduced. An example is given in Figure <ref> (a), where the fraction of infected individuals encounters a significant decrease of 36.84% at the stationary state. The substantial decline demonstrates voluntary quarantine as an effective intervention measure in impeding the rapid spread of the epidemic. As shown in figure <ref> (b), the level of quarantine will increase until it reaches an equilibrium, up to 0.823 in this example, similar to a skewed logistic growth <cit.>. The great number of individuals opting for quarantine indicates a profound recognition of prioritizing their well-being over financial gain. Nevertheless, it is inevitable that a few individuals may be more concerned about their personal benefit.
Intriguingly, the strategies of susceptible and infected individuals on whether to quarantine or not can exhibit notable divergence. The ratio of quarantined susceptible individuals S_q to non-quarantined susceptible individuals S_nq surpasses one, whereas the converse holds true for infected individuals I_q and I_nq. That is, S_q/S_nq > 1 and I_nq/I_q > 1. This striking disparity is vividly illustrated in Figure <ref> (c), where the values of S_q/S_nq and I_nq/I_q are around 7.69 and 14.99, respectively.
The motivations behind the choices made by different individuals can be attributed to their health status. In the case of susceptible individuals, safeguarding their own well-being takes precedence, and they are willing to adhere to quarantine measures; their decision is driven by the fear of being infected. Conversely, infected individuals may opt not to undergo quarantine, as they are already infected and perceive minimal risk to themselves; their decision stems from the temptation of participating in economic activities and getting paid. It is worth emphasizing that the degree of social compassion matters in fostering the awareness of quarantine among infected individuals. In Figure <ref>, the corresponding parameter r = 0.1 indicates a lack of concern towards others. We will further discuss the role of r in mitigating the spread of the epidemic shortly.
§.§ level of temptation
By augmenting the economic incentives induced by social activities, the trajectory of the outbreak can be a non-monotonic curve. As the value of the payoff b increases, it may happen that the proportion of infected individuals first experiences a rise, followed by a gradual fall until it stabilizes at equilibrium. Additionally, it can be told from the example in Figure <ref> (a) that the peak of the epidemic coincides with the time point when the growth rate of quarantine reaches its maximal level and about half of the population is under quarantine. We offer a possible explanation for this interesting phenomenon from the perspective of a time lag. At the initial phase of the epidemic, people gradually recognize the significance of quarantine and decide to take action as the number of patients grows. But their collective efforts will not take effect until a decent fraction of individuals are quarantined.
Moreover, if the economic incentives are even more alluring, that is, b is even greater, the proportion of individuals under quarantine will decrease and the fraction of infected individuals will increase at the stationary state. As depicted in Figure <ref> (b), the limit of infected individuals I(∞) monotonically increases with respect to the payoff b whereas that of quarantined individuals q(∞) first increases and then decreases. Given that the earnings from engaging in economic activities can overshadow the concerns related to health risks both for oneself and others, an offer one cannot refuse will lead to a decline in adherence to quarantine measures. Consequently, the epidemic will propagate more rapidly among the population, potentially resulting in a more severe outbreak. The extreme case will resemble the null case in Figure <ref> (a), where the option of voluntary quarantine is unavailable in the first place.
§.§ degree of social compassion
Apart from the level of temptation b, the degree of social compassion r is also a nontrivial factor in the learning dynamics of voluntary quarantine. As the value of r increases, infected individuals develop more empathy for the potential harm they may cause to others. Therefore, an increasing fraction of infected individuals choose to quarantine themselves and prevent the transmission of the disease to their vulnerable peers, no matter if these susceptible individuals are isolated or not. Compared with Figure <ref> (c), we can see that the ratios of quarantined individuals to non-quarantined individuals are reversed now for susceptible and infected groups in Figure <ref> (a). This observation highlights the significance of indirect protection in curbing the spread of the epidemic. If a large proportion of infected individuals decide to undergo quarantine, the chain of transmission can be broken and even susceptible individuals who forgo quarantine can be protected.
It follows that there actually exists a critical phase transition of disease spreading. As r grows and reaches a certain threshold, we will witness a plunge in both the fraction of infected individuals and that of individuals opting for quarantine. More detailed illustrations are given in Figure <ref> (a). Again, the collective efforts of infected individuals choosing to quarantine themselves play a crucial role in mitigating the transmission of the disease. Through their altruistic behavior, infected individuals contribute to the progressive amelioration and possible elimination of the epidemic. As such, a relatively high degree of social compassion (for example, r = 0.2 in our case) shares a similar effect to targeted isolation measures imposed on infected individuals, the former being voluntary while the latter is mandatory.
§ DISCUSSION & CONCLUSION
Our work highlights the crucial role of voluntary quarantine in mitigating the spread of an epidemic and safeguarding public health. The agent-based simulations show that a noticeable pattern can emerge throughout the evolutionary process where self-interested individuals keep adjusting their quarantine strategies in accordance with a potential payoff-cost analysis. If susceptible individuals coexist with infected individuals in the population, the former will show a stronger inclination to undergo quarantine while the latter, conversely, will exhibit a greater tendency to disregard such measures.
We also find that the level of temptation to refrain from quarantine and the degree of social compassion are two key factors in shaping the trajectory of the disease. If the income from economic activities is negligible, individuals will prioritize their well-being and choose self-quarantine in the presence of infected neighbors. The disease will be eradicated, rendering isolation unnecessary in the end. If the income is irresistible, on the other hand, individuals will be desperate to take the risk and interact with one another. The disease will prevail, and the option of quarantine will exist in name only. Furthermore, if the economic benefit is somewhere in between, the size of the epidemic will first increase and then decrease. The corresponding peak will not appear until a significant proportion of the population is under quarantine. That is, there may exist a delay between disease spreading and adaptive learning of health behavior.
As to the degree of social compassion, it is in line with our intuition that susceptible individuals are more motivated to quarantine themselves compared with infected individuals. The story can be reversed for increased social empathy toward vulnerable people in the population given that infected individuals will bear a heavier psychological burden and feel more responsible for the well-being of their peers. To avoid experiencing guilt or anxiety if getting others infected, these patients opting for isolation will actually curb the transmission of the disease. Henceforth, a critical phase transition from disease prevalence to its absence can take place as the extent of social empathy grows. Likewise, the fact that no one is infected will obviate the need for quarantine among the population at last.
To sum up, our study underscores the significant interplay between disease spreading and voluntary quarantine. Based on the current work, a series of intriguing problems awaits exploration. In our agent-based simulations, we have focused on individuals with perfect rationality. A more general assumption of imperfect rationality will enhance our comprehension of the coupled dynamics by introducing randomness to the process of decision-making. Individuals with bounded rationality may exhibit biases or heuristics or hold incomplete information as they weigh the potential payoff and cost, which can lead to fascinating and unexpected outcomes. Apart from the SIS model involving two different compartments in a constant population, we can also consider more complex models with more compartments (for example, the SIR model) or migrations. Additionally, nontrivial spatial structures such as small-world and scale-free networks allow for a more realistic representation of the social system, unveiling the heterogeneity of disease transmission, interaction, and imitation (if possible) between individuals. Our framework, initially developed as a complete graph, can be readily expanded to accommodate any desired network structure. By integrating these aspects into our research, we will provide more insights into the fundamental social dilemma aspect of disease control through nonpharmaceutical interventions including but not limited to voluntary quarantine.
|
http://arxiv.org/abs/2307.04042v1 | 20230708202414 | Sup-Norm Convergence of Deep Neural Network Estimator for Nonparametric Regression by Adversarial Training | [
"Masaaki Imaizumi"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
Sup-Norm Convergence of Deep Neural Network Estimator for Nonparametric Regression by Adversarial Training
Masaaki Imaizumi
===========================================================================================================
We show the sup-norm convergence of deep neural network estimators with a novel adversarial training scheme. For the nonparametric regression problem, it has been shown that an estimator using deep neural networks can achieve better performances in the sense of the L2-norm. In contrast, it is difficult for the neural estimator with least-squares to achieve the sup-norm convergence, due to the deep structure of neural network models. In this study, we develop an adversarial training scheme and investigate the sup-norm convergence of deep neural network estimators. First, we find that ordinary adversarial training makes neural estimators inconsistent. Second, we show that a deep neural network estimator achieves the optimal rate in the sup-norm sense by the proposed adversarial training with correction. We extend our adversarial training to general setups of a loss function and a data-generating function. Our experiments support the theoretical findings.
§ INTRODUCTION
We study the nonparametric regression problem.
Suppose we observe (X_1,Y_1),...,(X_n,Y_n) ∈ [0,1]^d × ℝ with dimension d ∈ ℕ that are independent and identical copies of a [0,1]^d × ℝ-valued random element (X,Y) which follows the following regression model:
Y = f^*(X) + ξ,
where f^*: [0,1]^d → ℝ is an unknown function, ξ is a random noise variable with zero mean and finite variance and is independent of X, and X follows a marginal measure P_X on [0,1]^d.
Our interest is to utilize a deep neural network model and develop an estimator f̂ from the model and the n observations, then study its estimation risk in terms of the sup-norm, referred to as an L^∞-risk:
sup_x ∈ [0,1]^d |f̂(x) - f^*(x)|,
which implies uniform convergence of the estimator.
In this study, we prove that an adversarial training framework can provide an estimator with deep neural networks whose L^∞-risk converges, then derive a convergence rate of the risk and show the minimax optimality of the rate.
§.§ Background and Question
Deep learning is a data-driven statistical method using deep neural network models <cit.>, which have multiple layers.
It has many well-known extensions, such as a deep convolutional network <cit.>, a residual network <cit.>, and an attention mechanism <cit.>.
Owing to the multiple layers and the well-designed training algorithm, deep learning has achieved quite accurate prediction performance in various tasks.
The framework of nonparametric regression has been actively used to analyze deep neural networks, and many roles of deep learning have been revealed.
A deep neural network is a model of functions f: [0,1]^d → ℝ with multiple layers such that
f(x) = g_L ∘ g_L-1∘⋯∘ g_1(x),
where g_1(·),...,g_L(·) are trainable functions by L layers.
Deep learning is a method of fitting the function by deep neural networks to observed data, hence it is obviously regarded as a method for the nonparametric regression problem.
Specifically, in most studies on the nonparametric regression with deep neural networks, the following least-square estimator has been studied:
f̂^LS ∈ argmin_f ∈ ℱ 1/n∑_i=1^n (Y_i - f(X_i))^2,
where ℱ is a set of functions by deep neural networks with the form (<ref>).
Further, performance of the estimator f̂^LS has been studied by its L^2-risk
‖f̂^LS - f^*‖_L^2^2 := 𝔼[ (f̂^LS(X) - f^*(X))^2 ].
Using this framework, seminal works <cit.> show that the multilayer structure of deep neural networks fits an internal structure of the unknown function f^* and that its estimation error achieves a faster convergence.
<cit.> investigate statistical properties of the neural estimators such as asymptotic distribution and robustness.
<cit.> show that the multilayer structure of the neural estimator is effective when the target function f^* has irregular properties such as discontinuity and heterogeneous smoothness.
<cit.> shows an adaptive property of the neural estimators to an intrinsic low-dimensionality of the observations, e.g., data concentrates on a low-dimensional manifold in its domain.
Studying a sup-norm value of the estimation error has been an important interest in nonparametric regression problems.
The sup-norm value, referred to as an L^∞-risk, is a sharper measure of accuracy and sensitivity of estimators than the L^2-risk.
Furthermore, the sup-norm convergence of errors is useful for statistical inference, such as a uniform confidence band, and is effective in the case with covariate shift of the transfer learning <cit.>.
For several conventional (non-deep) nonparametric estimators for f^*, their sup-norm convergence has been actively studied.
Classically, the convergence of kernel methods <cit.> and series methods <cit.> have been investigated.
More recently, the convergence of wavelet methods <cit.>, methods with reproducing kernel Hilbert spaces <cit.>, and Gaussian process methods <cit.> have been clarified.
Roughly speaking, when studying the sup-norm convergence of these non-deep estimators f̂^ND, the following linear-in-basis form plays an effective role:
f̂^ND = ∑_j ∈ J w_j ψ_j(·),
where J is an index set, {w_j}_j ∈ J is a set of weights in trained by the least-square approach, and {ψ_j(·)}_j ∈ J is a family of basis functions (possibly depending on covariates) such as wavelets or kernels.
Since the non-deep estimators have the linear form, it is possible to control the L^∞-risk effectively and show its convergence, except a general result by <cit.>.
Our interest is to evaluate the L^∞-risk of an estimator using deep neural networks (<ref>).
Since the deep neural network model (<ref>) does not have the linear-in-basis form (<ref>) as the non-deep methods, the existing analysis cannot study the L^∞-risk of deep neural networks.
Based on the background, we have the following questions:
Is it possible to obtain an estimator of f^* by deep neural networks whose L^∞-risk converges?
If so, is it possible to show the optimality of a convergence rate of the L^∞-risk?
§.§ Introduction to Adversarial Training
The adversarial training is a training scheme for deep neural networks, which has been developed to deal with an adversarial attack on prediction by neural networks.
An adversarial attack is a methodology to mislead deep neural networks in its predictions, by putting a tiny perturbation into a covariate for a trained deep neural network.
Since functions by trained deep neural networks are unstable, the perturbed samples, called adversarial samples, vary the outputs of deep neural networks drastically.
<cit.> reported that the phenomenon by introducing a case in which a deep neural network misclassified an image of a panda as an image of gibbons by adding very fine noise to the image.
After the finding, many adversarial attack methods have been developed <cit.>, threatening the robustness of neural networks.
A standard approach to adversarial training is to minimize a robustified empirical risk, which is measured by adding perturbations to the observed input variable <cit.>.
Rigorously, an estimator by the adversarial training for regression is defined as the minimizer of the following empirical risk:
min_f ∈ ℱ 1/n∑_i=1^n max_x' : ‖x' - X_i‖_∞ ≤ h (Y_i - f(x'))^2,
with some h > 0.
The outer minimization is solved by the gradient descent method as well as the usual least-square loss, and the inner maximization is solved by a gradient ascent method.
Several efficient algorithms have been proposed to solve this problem effectively <cit.>, such as the fast gradient sign method <cit.>.
The optimization process is summarized in the following:
i. Initialize f ∈ ℱ and repeat the following steps ii and iii:
ii. For each (Y_i,X_i), find x^*_i = argmax_x' ∈ {x: ‖x-X_i‖_∞ ≤ h} (Y_i - f(x'))^2.
iii. Update function f ← f - η∇ ( n^-1∑_i=1^n (Y_i - f(x^*_i))^2),
where η > 0 is a learning rate and ∇ denotes a derivative with respect to neural network parameters of f.
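A hypothetical PyTorch sketch of one iteration of steps i–iii is given below; the inner maximization in step ii is approximated by a single fast-gradient-sign ascent step rather than being solved exactly.

```python
import torch

def adversarial_training_step(model, X, Y, h, eta):
    """model: torch.nn.Module mapping (n, d) inputs to (n, 1) outputs; X: (n, d); Y: (n,)."""
    # step ii: approximate x_i^* by one signed-gradient ascent step inside the l_inf ball of radius h
    X_adv = X.detach().clone().requires_grad_(True)
    inner_loss = ((Y - model(X_adv).squeeze(-1)) ** 2).sum()
    inner_loss.backward()
    X_star = (X + h * X_adv.grad.sign()).clamp(0.0, 1.0).detach()   # stay inside [0,1]^d
    # step iii: one gradient-descent step on the empirical adversarial risk
    outer_loss = ((Y - model(X_star).squeeze(-1)) ** 2).mean()
    model.zero_grad()
    outer_loss.backward()
    with torch.no_grad():
        for w in model.parameters():
            w -= eta * w.grad
    return outer_loss.item()
```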
Note that the efficiency of the algorithm is not a primary interest of this study, hence we focus on the estimation error by the global minimizer of the adversarial risk.
Several works actively pursue a theoretical understanding of adversarial training.
One of the most significant issues is a trade-off between the robustness and accuracy of the adversarial training, which studies the possibility of balancing the predictive performance of deep neural networks with their ability to defend against adversarial samples.
A risk bound and the sample complexity of the adversarial training in general settings is widely examined <cit.>.
The predictive performance of the adversarial training has been also studied, particularly in linear regression models with over-parameterization <cit.>.
§.§ This Study
The purpose of this study is to investigate the sup-norm convergence of an error by deep neural networks using the adversarial training scheme.
For this aim, we develop a novel formulation of adversarial training and study its efficiency.
Specifically, our formulation includes a preprocessing for smoothing the output variable at the first step, then formulates a neural estimator as a minimizer of an empirical adversarial risk associated with the preprocessing.
The preprocessing has a role to reduce a bias on the estimator from the perturbation of the adversarial training scheme.
As a specific form of preprocessing, we can employ several nonparametric estimators including the nearest neighbor method and the kernel method.
As a result, we derive an upper bound on the L^∞-risk of the estimator with deep neural networks using our adversarial training scheme, then reveal some properties of its convergence rate.
Specifically, our contributions are summarized as follows.
(i) We derive a convergence rate of the L^∞-risk of the estimator when the true function f^* belongs to the Hölder space.
The derived rate achieves the minimax optimal rate with an appropriately designed preprocessing.
(ii) We show the inconsistency of the ordinary adversarial training without preprocessing.
This is due to the inability of an output variable in the regression problem to accommodate perturbations of the adversarial training.
(iii) Our approach applies to not only the adversarial training with a squared loss but also a general convex loss.
Specifically, we study an L^∞-risk of the regression problem of general loss, which is useful for handling data that have heavy-tailed noise.
(iv) We additionally study the L^∞-risk when the true function f^* has a heterogeneous smoothness, i.e. it belongs to the Besov space.
Our analysis shows the minimax optimality of the convergence rate of the L^∞-risk in this case.
(v) Our result is applicable to a wide range of architectures of deep neural networks, such as a fully-connected dense layer.
Also, it allows both finite depth networks and finite width networks.
We conduct numerical experiments and confirm that our theoretical results are consistent with the result.
Our results provide new implications for the understanding of adversarial training, which argues the trade-off between robustness and accuracy of prediction by adversarial training.
Along with this line, we show that (i) the ordinary adversarial learning is not consistent in the regression problem in the first place, (ii) the robustness obtained by adversarial learning is described by sup-norm convergence of the estimation error, and (iii) the adversarial training achieve the optimal rate with appropriate preprocessing.
Technical contributions in our proof are summarized as follows.
First, we derive an upper bound of the sup-norm of an estimation error by the adversarial risk up to constants.
This bound uses a volume of a neighborhood set of an input variable, which is utilized to design the adversarial perturbation.
Second, we develop an empirical process technique for the evaluation of preprocessing.
To control the effects of the preprocessing and the adversarial training simultaneously, we involve two levels of evaluation of biases and variances as appropriate.
§.§ Organization
The rest of this paper is organized as follows.
Section <ref> gives a setup for the nonparametric regression problem and the definition of deep neural networks.
Section <ref> gives a general formulation of adversarial training and an overview of analysis on it.
Furthermore, the section shows that naive adversarial training does not give a consistent estimator.
In Section <ref>, as the main result, we derive an upper bound on the sup-norm of the estimation error of the developed estimator.
Section <ref> gives extensions and applications.
Section <ref> gives numerical simulations, and Section <ref> concludes.
§.§ Notation
For n ∈ ℕ, [n] := {1,2,...,n} is the set of natural numbers no more than n.
For a,a' ∈ ℝ, a ∨ a' := max{a,a'} is the maximum.
⌊ a ⌋ denotes the largest integer which is no more than a.
The Euclidean norm of a vector b ∈ ℝ^d is denoted by ‖b‖_2 := √(b^⊤ b).
Let C_w be a positive finite constant depending on a variable w.
𝟙{E} denotes the indicator function. It is 1 if the event E holds and 0 otherwise.
For a matrix A ∈ ℝ^N × N, A_i,j denotes the (i,j)-th element of A for i,j=1,...,N.
For a measurable function f: Ω → ℝ on a set Ω ⊂ ℝ^d, ‖f‖_L^p(μ) := (∫ |f(x)|^p dμ(x) )^1/p denotes the L^p-norm for p ∈ [1,∞) with a measure μ, and ‖f‖_L^∞ := sup_x ∈ Ω |f(x)| denotes the sup-norm.
Also, L^p(Ω) denotes the set of measurable functions such that ‖f‖_L^p(λ) < ∞ with the Lebesgue measure λ.
For x ∈ ℝ^d, δ_x denotes the Dirac measure at x.
For a function f : ℝ^d → ℝ with a multi-variate input (x_1,...,x_d) ∈ ℝ^d and a multi-index a = (a_1,...,a_d) ∈ ℕ^d, ∂^a f(x_1,...,x_d) := ∂_x_1^a_1∂_x_2^a_2⋯∂_x_d^a_d f(x_1,...,x_d) denotes the partial derivative with the multi-index.
For a variable x, C_x denotes some positive finite constant that polynomially depends on x, and it can have different values in different places.
For sequences of reals {a_n}_n ∈ ℕ and {b_n}_n ∈ ℕ, a_n ≍ b_n denotes lim_n →∞ a_n/b_n = c with some c ∈ (0,∞), a_n = O(b_n) denotes |a_n| ≤ M|b_n| and a_n = Ω(b_n) denotes |a_n| ≥ M |b_n| with some M > 0 for all sufficiently large n. a_n = o(b_n) denotes |a_n| ≤ M |b_n| for any M > 0 and for all sufficiently large n.
Õ(·) and Ω̃(·) are the notations O(·) and Ω(·) ignoring multiplied polynomials of log(n), respectively.
For a sequence of random variables {X_n}_n ∈ ℕ, X_n = O_P(a_n) denotes Pr(|X_n/a_n| > M) ≤ ε for any ε > 0 and some M>0 for all sufficiently large n, and X_n = o_P(a_n) denotes lim_n →∞ Pr(|X_n/a_n| > ε) = 0 for any ε > 0.
§ PROBLEM SETTING AND PRELIMINARIES
§.§ Nonparametric Regression and L^∞-Risk
§.§.§ Model and Observations
For the nonparametric regression, suppose that we have n observations (X_1,Y_1),...,(X_n,Y_n) ∈ [0,1]^d × ℝ that are independent and identical copies of a random variable (X,Y) which follows the regression model (<ref>).
Note that the model is characterized by the unknown function f^* and the noise variable ξ.
Let P_X be a marginal measure of X.
§.§.§ Basic Assumption
We introduce a standard assumption on the regression model.
P_X has a density function that is uniformly lower bounded by C_P_X > 0 on [0,1]^d.
Assumption <ref> is important to estimate f^* on the entire domain [0,1]^d.
Both of the assumptions are commonly introduced in the nonparametric regression for neural networks <cit.>.
We suppose that f^* belongs to a function class with the Hölder smoothness with an index β > 0.
To this end, we define a ball of the Hölder space with β > 0 as
ℋ^β([0,1]^d) := { f: [0,1]^d → ℝ |
∑_b ∈ ℕ^d: ‖b‖_1 < ⌊β⌋ ‖∂^b f‖_L^∞ + ∑_b ∈ ℕ^d: ‖b‖_1 = ⌊β⌋ sup_x,x' ∈ [0,1]^d, x ≠ x' |∂^b f(x) - ∂^b f(x')|/‖x - x'‖_∞^{β - ⌊β⌋} ≤ B},
with its radius B ≥ 1.
Intuitively, ℋ^β([0,1]^d) is a set of functions on [0,1]^d that are ⌊β⌋ times partially differentiable and whose derivatives are (β - ⌊β⌋)-Hölder continuous.
There exists β > 0 such that f^* ∈ ℋ^β'([0,1]^d) holds for all β' ∈ (0,β].
To impose differentiability for f^* is the usual setting for nonparametric regression (see <cit.>, for example).
Further, in the statistical studies on deep neural networks, it has also studied the estimation of functions with more complex structures <cit.>.
We will discuss an extension on this assumption in Section <ref>.
§.§.§ Goal: Sup-norm Convergence
Our goal is to estimate the true function f^* in the model (<ref>) and study an estimation error of an estimator in terms of the sup-norm ‖·‖_L^∞.
Rigorously, we will develop an estimator f̂ and study its L^∞-risk defined as follows:
‖f̂ - f^*‖_L^∞ := sup_x ∈ [0,1]^d |f̂(x) - f^*(x)|.
The L^∞-risk is a sharp measure for the robustness of estimators and is applied to statistical inference such as a uniform confidence band.
To understand this point, we discuss its relation to the commonly used L^2-risk measured by the L^2-norm, which is a typical case with the following L^p-norm (p ∈ [1,∞)) with p=2:
‖f̂ - f^*‖_L^p(P_X)^p := 𝔼_X[ |f̂(X) - f^*(X)|^p ].
Since the L^∞-risk bounds the L^p-risk, i.e. ‖f̂ - f^*‖_L^∞ ≥ ‖f̂ - f^*‖_L^p(P_X) holds for every p ≥ 1, the L^∞-risk leads to stronger convergence.
Figure <ref> illustrates the difference between the convergences in the L^2-norm and the sup-norm.
In the related studies with neural networks (e.g. <cit.>), the L^2-risk has been mainly studied, but the L^∞-risk of neural network estimators has not been proved to converge.
§.§ Deep Neural Network Model
We define a deep neural network, which is a model of functions by multiple layers.
Specifically, we consider deep neural networks with fully-connected layers and the rectified linear unit (ReLU) activation function, which is one of the most commonly used activations.
Let L ∈ ℕ be a number of layers, and 𝐖 = (W_1,...,W_{L+1}) ∈ ℕ^{L+1} be a tuple of width parameters, where W_ℓ denotes the width of the ℓ-th layer.
Deep neural networks have a weight matrix A_ℓ ∈ ℝ^{W_{ℓ+1} × W_ℓ} and a weight vector b_ℓ ∈ ℝ^{W_{ℓ+1}} for each ℓ ∈ [L].
For each d ∈ ℕ, we introduce a ReLU activation function σ: ℝ^d → ℝ^d such that σ(z) = ((z_1 ∨ 0), (z_2 ∨ 0),...,(z_d ∨ 0))^⊤ for z = (z_1,...,z_d) ∈ ℝ^d.
For each ℓ ∈ [L-1], we define a map g_ℓ: ℝ^{W_ℓ} → ℝ^{W_{ℓ+1}} by the ℓ-th layer as
g_ℓ(z) = σ(A_ℓ z + b_ℓ), z ∈ ℝ^{W_ℓ}.
For the last L-th layer, we define g_L(z) = A_L z + b_L with z ∈ ℝ^{W_L}.
For L and 𝐖, we define a parameter space Θ_{L,𝐖} := (ℝ^{W_2 × W_1} × ℝ^{W_2}) × (ℝ^{W_3 × W_2} × ℝ^{W_3}) × ⋯ × (ℝ^{W_{L+1} × W_L} × ℝ^{W_{L+1}}) whose elements take the form θ = ((A_1,b_1),(A_2,b_2),...,(A_L,b_L)), then we define a function f_θ: [0,1]^d → ℝ by a deep neural network with d = W_1 and W_{L+1} = 1 as
f_θ(x) = g_L ∘ g_{L-1} ∘⋯∘ g_1(x), x ∈ [0,1]^d.
Intuitively, f_θ(x) is constituted by compositions of the L maps by the multiple layers with the maximum width ‖𝐖‖_∞ = max_ℓ ∈ [L+1] W_ℓ.
There are at most ∑_ℓ=1^L (W_ℓ + 1) W_{ℓ+1} ≤ L (‖𝐖‖_∞ +1)^2 parameters in the deep neural network model.
We introduce a set of functions by deep neural networks with L layers and maximum width W.
With a tuple (L, W) ∈ ℕ^2 and an upper bound B ≥ 1, we define the set of functions by deep neural networks as
ℱ(L,W) := { f_θ | ‖f_θ‖_L^∞ ≤ B , θ ∈ Θ_{L,𝐖}, ‖𝐖‖_∞ ≤ W }.
The condition on the upper bound B can be satisfied by a clipping operation using the ReLU activation function <cit.>.
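For concreteness, a minimal NumPy sketch of a member of ℱ(L,W) — ReLU hidden layers, an affine last layer, and an output clipped to [-B, B] — reads as follows.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def dnn_forward(x, params, B=1.0):
    """params: list of (A_l, b_l) pairs for the L layers; x: input vector of shape (d,)."""
    z = x
    for A, bvec in params[:-1]:
        z = relu(A @ z + bvec)        # hidden layers g_1, ..., g_{L-1}
    A_L, b_L = params[-1]
    out = A_L @ z + b_L               # affine last layer g_L
    return np.clip(out, -B, B)        # enforce the bound ||f_theta||_inf <= B
```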
This definition of deep neural networks includes several variations of neural networks.
If the parameter matrix A_ℓ is not sparse, the defined neural network is a fully-connected neural network.
If the matrix A_ℓ is constrained to be sparse with some structure, it is equivalent to a convolutional neural network <cit.> or a residual network <cit.>.
One advantage of the definition (<ref>) is that it controls the easily manipulated values of width W and depth L of neural networks, that can be easily specified when designing neural network models.
This is in contrast to manipulating the number of nonzero parameters and the maximum parameter value, which are difficult to control in practice (for example, see <cit.>).
§ ADVERSARIAL TRAINING ESTIMATOR FOR REGRESSION
§.§ Ordinary Adversarial Training and its Inconsistency
We introduce a framework of adversarial training.
The adversarial training framework defines its loss using an input point in the neighborhood of a data point that maximizes loss, as reviewed in (<ref>).
Rigorously, with a scale multiplier h ∈ (h̲,1) with h̲ > 0, we consider a neighbourhood of x ∈ [0,1]^d as
Δ_h^p(x) = {x' ∈ [0,1]^d | ‖x - x'‖_p ≤ h} ⊂ [0,1]^d.
Then, we consider the following empirical adversarial risk for a function f: [0,1]^d → ℝ and p ≥ 1:
R_n^o(f) := 1/n∑_i=1^n sup_x' ∈Δ_h^p(X_i) (Y_i - f(x'))^2.
We can define an estimator of f^* by the minimizer of this empirical adversarial risk as
f̌ := argmin_f ∈ ℱ(L,W) R_n^o(f).
The minimax optimization in the problem (<ref>) is solved by various algorithms <cit.>.
§.§.§ Inconsistency of Ordinary Adversarial Training
In this section, we show the inconsistency of f̌ by ordinary adversarial training.
Specifically, we obtain the following result.
Suppose n ≥ 3.
There exists a sub-Gaussian noise ξ_i, f^* ∈ ℋ^1([0,1]^d), P_X, and h ∈ (0,1) such that the estimator f̌ in (<ref>) satisfies the following inequality with an existing constant c^* > 0 with probability at least 0.5:
‖f̌ - f^*‖_L^2(P_X)^2 ≥ c^*.
This result shows that the L^∞-risk of f̌ does not converge to zero with the ordinary adversarial training, regardless of the sample size n and the neural network architecture.
Since the L^∞-risk is bounded below by the L^2-risk, the ordinary adversarial training also yields an inconsistent estimator in the sense of the sup-norm.
This result is not limited to the choice of model used for the estimator, hence it occurs with methods other than neural networks.
Intuitively, ordinary adversarial training produces a bias by the design of perturbations on inputs (see the middle panel of Figure <ref>).
This is because the perturbation makes f̌(X_i) fit to an output with a shift ς = x' - X_i, which creates the inconsistency.
Hence, we need to correct the bias by the ordinary adversarial training in the regression problem.
§.§ Proposed Framework of Adversarial Training
We introduce an empirical risk function for adversarial training based on a quadratic loss.
We develop a random map Ŷ: [0,1]^d → ℝ for surrogate outputs, which is referred to as a preprocessed output.
This notion is a general expression of several methods, and its specific configurations will be given later.
With Ŷ, we define an empirical preprocessed adversarial risk as
R_n(f) := 1/n∑_i=1^nsup_x' ∈Δ_h^p(X_i) (Ŷ(x') - f(x'))^2,
for a function f ∈ L^2([0,1]^d).
This loss function is a generalized version of the ordinary adversarial risk (<ref>) with the preprocessing Ŷ.
Using this notion, we define an estimator as the minimizer of the empirical risk as
f̂ ∈ argmin_f ∈ ℱ(L,W) R_n(f).
This framework intends to perturb an output variable in response to the perturbation on the input X_i.
That is, when the input point X_i is shifted by ς = x' - X_i due to the adversarial training, we also shift the output side by ς.
Hence, the observed outputs may not be able to accommodate the shift.
To address this issue, we prepare the corresponding output using a preprocessing approach, such as the nearest neighbor method.
Figure <ref> illustrates the differences between the least-square estimator f̂^LS, the ordinary adversarial training estimator f̌, and our proposed estimator by the adversarial training with preprocessing f̂.
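The only change relative to ordinary adversarial training is that the perturbed input is compared with the preprocessed output Ŷ(x') instead of the recorded label Y_i. A hypothetical sketch of the empirical preprocessed adversarial risk (to be minimized over the network parameters in an outer loop) is given below; it assumes Ŷ is available as a map that accepts torch tensors, approximates the inner maximization by a single signed-gradient step, and treats Ŷ as locally constant during that ascent when it is not differentiable.

```python
import torch

def preprocessed_adversarial_risk(model, X, Y_hat_fn, h):
    """X: (n, d) covariates in [0,1]^d; Y_hat_fn: preprocessing map x -> Y_hat(x)."""
    # inner maximization over the l_inf ball of radius h, approximated by one signed-gradient step
    X_adv = X.detach().clone().requires_grad_(True)
    with torch.no_grad():
        y_hat = Y_hat_fn(X_adv)                    # Y_hat treated as locally constant in the ascent
    inner = ((y_hat - model(X_adv).squeeze(-1)) ** 2).sum()
    inner.backward()
    X_star = (X + h * X_adv.grad.sign()).clamp(0.0, 1.0).detach()
    # empirical preprocessed adversarial risk R_n(f), evaluated at the perturbed points
    return ((Y_hat_fn(X_star) - model(X_star).squeeze(-1)) ** 2).mean()
```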
§.§.§ Preprocessing Design
We impose the following assumptions on the preprocessing.
[Preprocessing]
Ŷ(x) is continuous and 𝔼[ ‖Ŷ‖_L^∞^2 ] ≤ V^2 with some V > 0.
Also, there exists a non-negative sequence {ζ_n}_n ∈ ℕ with ζ_n → 0 as n →∞ such that the following holds for all n ∈ ℕ:
ζ_n^2 ≥ 𝔼[ ‖Ŷ - f^*‖_L^∞^2 ].
The sequence {ζ_n}_n ∈ ℕ represents a convergence rate of the preprocessing Ŷ to f^*.
Importantly, the data used to construct the preprocessed output Ŷ here may overlap with the data used for the estimator in (<ref>).
There are several examples for preprocessing as follows.
[Nearest neighbour]
First, we consider the k-nearest neighbor method.
For k ∈ ℕ and x ∈ [0,1]^d,
we define a ball B_x(r) := {x' ∈ [0,1]^d | ‖x-x'‖_2 ≤ r} with radius r>0, the k-nearest-neighbour radius r_k(x) := inf{r >0 : |B_x(r) ∩ {X_1,...,X_n}| ≥ k}, and the corresponding set of covariates N_k(x) := B_x(r_k(x)) ∩ {X_1,...,X_n}.
With this notion, we define the k-nearest-neighbor preprocessing as
Ŷ(x) = 1/|N_k(x)| ∑_i=1^n Y_i 𝟙{X_i ∈ N_k(x)}.
In this example, if Assumption <ref> holds with β∈ (0,1], we have ζ_n^2 = O(n^-2β/(2β + d)log n) with k ≍ n^2β/(2β + d) by Theorem 1 in <cit.>.
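A short sketch of this preprocessing (our illustration) is given below; for the rate above, k is taken of order n^{2β/(2β + d)}.

```python
import numpy as np

def knn_preprocess(X_train, Y_train, k):
    """Returns the map x -> Y_hat(x), the average output over the k nearest covariates."""
    def Y_hat(x):
        d2 = np.sum((X_train - x) ** 2, axis=1)     # squared Euclidean distances to all X_i
        idx = np.argpartition(d2, k - 1)[:k]        # indices of the k nearest covariates
        return Y_train[idx].mean()
    return Y_hat
```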
[Posterior mean by Bayesian method]
We consider a mean of a posterior distribution by a prior distribution on functions.
The method considers a B-spline series (see <cit.> for overview and specific constructions).
With some tuple of numbers of basis (J_1,...,J_d) ∈^d and orders (q_1,...,q_d) ∈^d, we consider parameters {θ_j_1,...,j_d}_j_1,...,j_d = 1^J_1,...,J_d and the B-spline series {B_j_k,q_k(x)}_j_k = 1^J_k for k=1,...,d.
Then, the method constructs a prior distribution on a function f with the form
f(x) = ∑_j_1=1^J_1⋯∑_j_d=1^J_dθ_j_1,...,j_d∏_k=1^d B_j_k,q_k(x_k),
by putting a Gaussian prior on the parameters θ_j_1,...,j_d.
If Assumption <ref> holds with β > 0, Theorem 4.4 in <cit.> shows that ζ_n^2 = O(n^-2β/(2β + d)log^2β/(2β + d) n), which is implied by a contraction of the posterior shown by the theorem.
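For d = 1, a sketch of the posterior-mean computation is as follows: with a Gaussian prior θ ∼ N(0, τ² I) on the coefficients and Gaussian noise, the posterior mean is a ridge-type quantity. The knot choice, the prior and noise variances, and the use of SciPy's BSpline are assumptions made for illustration; the tensor-product construction for d > 1 is analogous.

import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, knots, degree):
    # Design matrix of the B-spline basis of the given degree evaluated at the points x.
    n_basis = len(knots) - degree - 1
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        coef = np.zeros(n_basis)
        coef[j] = 1.0
        B[:, j] = BSpline(knots, coef, degree)(x)
    return B

def posterior_mean(X, Y, knots, degree, sigma2=0.01, tau2=1.0):
    # Posterior mean of f under theta ~ N(0, tau^2 I) and Gaussian noise with variance sigma^2.
    B = bspline_design(np.asarray(X).ravel(), knots, degree)
    A = B.T @ B + (sigma2 / tau2) * np.eye(B.shape[1])
    theta = np.linalg.solve(A, B.T @ np.asarray(Y))
    return lambda x: bspline_design(np.atleast_1d(x), knots, degree) @ theta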
We can pick other methods for preprocessing.
The required property is that an error in estimating a smooth function converges in the sup-norm sense.
§ MAIN RESULT: L^∞-RISK ANALYSIS
We present our main results on the consistency of the estimator and a non-asymptotic upper bound on the estimation error with its convergence rate in n.
We further discuss the minimax optimality of the obtained convergence rate.
To achieve optimality, we need to discuss the design of the preprocessing Ŷ and the architecture of deep neural networks.
§.§ Consistency
We present an upper bound of an expectation of the L^∞-risk of the estimator.
The first result is consistency in the sense of the L^∞-risk.
In an asymptotic analysis with n →∞, a product of the depth and width of deep neural networks should also increase in n.
Consider the regression model (<ref>) and the adversarial estimator f̂ in (<ref>) with the function class by deep neural networks with a tuple (L,W).
Suppose <ref>, and <ref> hold and f^* is continuous.
Then, there exists a tuple (L,W) with LW = o(n) such that it holds that
[f̂ - f^*_L^∞^2 ] → 0,
as n →∞.
The results show that under divergent widths and depths and appropriate preprocessing, we obtain consistency in the sense of sup-norm.
Note that f^* needs only be continuous, and conditions on derivatives are not necessary.
Also, it provides the following important implications: (i) we can control the L^∞-risk even though the deep neural network model does not have the linear-in-feature structure, and (ii) the preprocessing solves the problem of inconsistency in adversarial training presented in Section <ref>.
Its proof is based on the procedure in Section <ref>.
We note the importance of sup-norm convergence in the context of estimation.
In the theory of approximation, the sup-norm convergence by neural networks has been an important topic, that is, inf_f∈(L,W)f - f^*_L^∞→ 0 as L →∞ or W →∞, and numerous studies have studied the problem, e.g. <cit.>.
Conversely, in the nonparametric regression problem, the sup-norm convergence has been difficult due to noise in observations.
Theorem <ref> shows that the adversarial training with preprocessing enables convergence in the sup-norm.
§.§ Non-Asymptotic Bound and Convergence Rate
As a more rigorous error evaluation, we derive a non-asymptotic upper bound for the L^∞-risk of the estimator with the adversarial training.
This result is also useful in studying convergence rates of the risk and discussing its optimality.
Consider the regression model (<ref>) and the adversarial estimator f̂ in (<ref>) with the function class (L,W) by deep neural networks.
Suppose Assumption <ref>, <ref>, and <ref> hold for some β > 0.
Then we have
[f̂ - f^*_L^∞^2 ] ≤ C_P_X,p,d,B,d,β h^-d( (WL)^2 log(WL) log n/n + (WL)^-4β/d + h^-dζ_n^2 ),
for every n ≥n̅ with some n̅∈ℕ.
This result gives some implications: (i) we develop an upper bound on the L^∞-risk of the estimator, and
(ii) the bound is proportional to h^-d, which appears when evaluating the L^∞-risk using the adversarial loss.
Note that we can select h as strictly positive and thus it does not affect an order of the bound in n.
More precisely, this upper bound consists of the three terms.
The first term O((WL)^2 log (WL) /n) is the complexity error, the second term O((WL)^-4s/d) is the approximation error by the deep neural network, and the third term O(ζ_n^2) is the error by the preprocessing.
The complexity and approximation errors also appear in several risk bounds on an L^2-risk of deep neural network (e.g., Theorem 4.3 in <cit.>).
In contrast, the preprocessing error term is a new term needed to derive an upper bound on the L^∞-risk.
We derive the convergence rate of the L^∞-risk with respect to n.
Specifically, we select the width and depth of deep neural networks in order to balance the trade-off in the error terms presented in Theorem <ref>.
Consider the setting in Theorem <ref>.
Further, suppose that ζ_n^2 = O(n^-2β/(2β + d)log^β^* n) for some β^* > 0.
We set L and W as LW ≍ n^2β/(2β + d).
Then, we obtain the following as n →∞:
[f̂ - f^*_L^∞^2 ] = O( n^-2β / (2β + d)log^2 ∨β^* n ).
The rate obtained in Corollary <ref> is identical to the minimax optimal rate of risk measured in the sup-norm in the problem of estimating a function from ^β([0,1]^d) <cit.>.
Specifically, the derived rate corresponds to the following lower bound:
inf_f̅_nsup_f^* ∈^β([0,1]^d)[f̅_n - f^*_L^∞^2 ] = Ω̃( n^-2β / (2β + d)), (n →∞),
where f̅_n is taken from all estimators depending on the n observations.
Since the derived rate is the same as the lower bound, we show that the adversarial training estimator achieves the minimax optimal rate.
§.§ Proof Overview
We give an overview of proof of the main theorem.
As preparation, we introduce several notations related to adversarial training.
With h, an order p and a base measure P, we define an adversarial (pseudo-)norm of f: [0,1]^d → and its empirical analogue
f_P,Δ^2 := _X ∼ P[ max_x' ∈Δ_h^p(X) |f(x')|^2 ], f_n,Δ^2 := n^-1∑_i=1^n max_x' ∈Δ_h^p(X_i) |f(x')|^2.
These norms correspond to the adversarial risks with a squared loss for the regression problem (<cit.>).
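For small d and p = ∞, the empirical adversarial norm can be approximated by a grid search over each hypercube Δ_h^∞(X_i); the grid resolution below is an arbitrary choice, and f is assumed to be vectorised over rows of its input.

import numpy as np

def empirical_adversarial_norm_sq(f, X, h, n_grid=11):
    # Approximates ||f||_{n,Delta}^2 for p = infinity by maximising |f|^2 over a grid in each cube.
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    offsets = np.linspace(-h, h, n_grid)
    grid = np.stack(np.meshgrid(*([offsets] * d), indexing="ij"), axis=-1).reshape(-1, d)
    values = []
    for x in X:
        pts = np.clip(x + grid, 0.0, 1.0)        # intersect the cube with [0,1]^d
        values.append(np.max(f(pts) ** 2))       # inner maximum for this sample
    return float(np.mean(values))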
We also define an approximation error of deep neural networks in (L,W) as
Φ_L,W := inf_f ∈(L,W)f - f^*_L^∞.
This term represents an expressive power of neural networks in (L,W), which decreases as L or W increase (see <cit.> for an example).
We further use a uniform covering number of (L,W).
Let Q_n be an empirical measure with n samples.
Given δ∈ (0,1],
we define a δ-covering set of (L,W) as {f_1,...,f_N}⊂(L,W) and the uniform covering number from the empirical process theory (e.g., <cit.>):
N_L,W(δ) := sup_Q_n N(δ, (L,W), ·_L^2(Q_n)),
where the supremum is taken over all possible empirical measures Q_n.
This notion is useful to evaluate the complexity of the set of deep neural networks, because it gives an upper bound without boundedness or sparsity of parameters of neural networks (See Lemma <ref>, for example).
Our proof consists of three main elements: (i) the derivation of an upper bound of the adversarial norm of the estimation error, (ii) to develop an upper bound of the L^∞ norm of the estimation error by the adversarial norm, and (iii) a comb of the above results using the localization technique.
Each of these is described below.
In the first step, we derive an upper bound for the adversarial norm of the estimation error.
Rigorously, Lemma <ref> will state the following upper bound
[f̂ - f^*_P_X, Δ^2 ] ≤ C {[f̂ - f^*_n,Δ^2] + B^2 (log N_L,W(δ) +1)/n + δ B + δ^2 },
for any δ∈ (0,1) with some universal constant C> 0.
Furthermore, Proposition <ref> will bound the empirical adversarial norm [f̂ - f^*_n,Δ^2] as
[f̂ - f^*_n, Δ^2 ] ≤ C {([f̂ - f^*_L^∞^2 ]^1/2 +δ) ( log N_L,W(δ)/n + ζ_n )^1/2 + (Φ_L,W + ζ_n )^2 }.
We achieve these bounds by extending the empirical process technique by <cit.> to the adversarial norm.
There are several points for noting: (i) the term Φ_L,W represents a bias, and the term O(log N_L,W(δ) / n) represents a variance of the estimator, that are similar to the least square estimator, (ii) the variance term is described by the uniform covering number, which is useful to study neural networks whose parameters are unbounded and non-sparse, and (iii) there is a term ζ_n which represents the error by the preprocessing, unlike the case of the least square estimator.
In the second step, we construct an upper bound for the sup-norm using the adversarial norm.
That is, we develop the following statement:
Consider the estimator as (<ref>) and the adversarial norm as (<ref>).
Suppose P_X satisfies Assumption <ref>.
Then, we have
f̂ - f^*_P_X, Δ^2≥ C_P_X,p,d h^d f̂ - f^*_L^∞^2 .
Intuitively, we utilize the similarity between the adversarial norm and the sup-norm to achieve the result.
That is, the maximization over Δ_h^p in the adversarial norm has a similar property to the sup-norm.
Using this property, we give an upper bound on the sup-norm while taking into account the volume of the hypercube.
We will give a generalized version of this result as Lemma <ref> in the supplementary material.
In the last step, we combine these results and derive the main statement of Theorem <ref>.
Here we apply the peeling argument to obtain convergence rates. Note that a simple combination of the above results would lose optimality.
To obtain the minimax optimal rate, we evaluate the approximation error and the uniform covering number based on the localization techniques.
§ APPLICATIONS
§.§ Extension to General Loss Function
§.§.§ Motivation and Setting
We can extend our adversarial training results to the case of non-squared loss functions.
Specifically, we can handle loss functions such as absolute value loss, quantile loss, and Huber loss, which are used in the presence of heavy-tailed noise.
This setting with deep neural networks is studied in <cit.>.
We introduce a generic loss function, which satisfies the following assumption:
A loss function ℓ:×→ is symmetric and ℓ(x,y) is Lipschitz-continuous in each x and y with its Lipschitz constant C_ℓ > 0.
Further, ℓ (y,x)=0 holds if and only if y=x, and there exists a constant c_ℓ > 0 and q ≥ 1 such that
ℓ(y,x) ≥ c_ℓ |y-x|^q, ∀ x,y ∈.
A class of loss function satisfying Assumption <ref> includes several representative loss functions, e.g., an absolute loss ℓ(y,x) = |y-x|, a quantile loss ℓ(y,x) = ({y ≥ x}τ + {y ≤ x}(τ - 1)) (y-x) for τ∈ (0,1), and the Cauchy loss ℓ(y,x) = log (1 + κ^2 (y-x)^2) for κ > 0.
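The three example losses can be written out directly; the parameter values below are placeholders. For the quantile loss, the lower bound in Assumption <ref> holds with q = 1 and c_ℓ = min(τ, 1-τ).

import numpy as np

def absolute_loss(y, x):
    return np.abs(y - x)

def quantile_loss(y, x, tau=0.5):
    # pinball loss: tau*(y - x) when y >= x and (tau - 1)*(y - x) otherwise
    return np.where(y >= x, tau, tau - 1.0) * (y - x)

def cauchy_loss(y, x, kappa=1.0):
    return np.log1p(kappa ** 2 * (y - x) ** 2)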
We introduce an empirical risk function for adversarial training based on ℓ.
Using the neighbourhood set Δ_h^p(x) and the preprocessing Ŷ, we define an empirical risk function as
R̃_n(f) := 1/n∑_i=1^n sup_x' ∈Δ_h^p(X_i)ℓ(Ŷ(x'), f(x')).
This loss function is a generalized version of the ordinary loss for the adversarial training (<ref>).
Using this notion, we define its minimizer as
f̃∈ arg min_f ∈(L,W)R̃_n(f).
§.§.§ Error Analysis
We study an L^∞-risk of this estimator by deriving a non-asymptotic upper bound.
The proof differs from that of Theorem <ref>, requiring a more general treatment of loss combined with adversarial training.
Consider the regression model (<ref>) and the adversarial estimator f̃ in (<ref>) with the function class by deep neural networks with a tuple (L,W) and h ∈ (0,1).
Suppose Assumption <ref> and <ref> for β > 0, Assumption <ref> holds with ζ_n^2 = O(n^-2β/(2β + d)log^β^* n) for some β^* > 0 and Ŷ is independent of {(X_i,Y_i)_i=1^n},
and Assumption <ref> holds with q ∈ [1,∞).
Then, we have the following as n →∞:
[f̃ - f^*_L^∞^2] = O(h^-2d/q n^-β/(q(β + d))log^ (2/q) ∨β^* n ).
This result shows that the L^∞-risk is bounded with the setup with general loss functions.
The convergence rate of Proposition <ref> of the L^∞-risk corresponds to a convergence rate of excess risks derived by Theorem 4.2 in <cit.> under general losses.
The key to this result is the bound V on [Ŷ_L^∞^2] given in Assumption <ref>.
The independence of the preprocessing Ŷ is imposed because of a technical reason, however, it is easy to satisfy it.
For example, we can randomly split the observed data into two and then conduct the preprocessing using one of the two.
The technical derivation is similar to that of Theorem <ref>.
First, we define an expected value of adversarial risk with the general loss and the preprocessing: for f ∈(L,W), we define
R(f) := _X [ sup_x' ∈Δ_h^p(X)ℓ(f(x'),Ŷ(x')) ].
Then, we derive an upper bound for an excess value of the risk R̃ (f̃) - R̃(f^*) in Proposition <ref>.
Next, we bound the L^∞-risk by properties of the expected adversarial risk as
f̃ - f^*_L^∞^q = O ( h^-d( R̃(f̃) - R̃(f^*) + Ŷ - f^*_L^∞)).
in Lemma <ref>.
This result is an extension of the bound for the L^∞-risk by the L^2-risk as shown in Lemma <ref>.
Combining the results, we obtain the result of Proposition <ref>.
§.§ Adaptation to Heterogeneous Smoothness with Besov Space
§.§.§ Motivation and Setting
In this section, we show that our proposed method can be adapted to estimate functions with heterogeneous smoothness, that is, we study the case that the true function f^* is an element of the Besov space (see <cit.> for an introduction).
The Besov space has an interesting property that linear estimators, a certain type of non-deep estimators, cannot estimate its elements with the optimal convergence rate.
First, we give the definition of the Besov space following <cit.>.
Note that there are several equivalent definitions for Besov spaces, and the following is based on the notion of difference of functions.
Consider parameters p,q ∈ (0,∞] and β > 0.
For r ∈ℕ, h ∈ℝ^d, and f:[0,1]^d →ℝ, we define an r-th difference of f at x ∈ [0,1]^d as
Δ_h^r[f](x) = {x + rh ∈ [0,1]^d}∑_j=0^r \binom{r}{j} (-1)^r-j f(x + jh).
We also define the r-th modulus of smoothness of f with u > 0 as
ω_r,p(f,u) = sup_h_2 ≤ uΔ_h^r[f]_L^p(λ).
Recall that ·_L^p(λ) denotes the L^p-norm with the Lebesgue measure λ.
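Numerically, the r-th difference, and hence a crude estimate of the modulus of smoothness, can be computed directly from the definition; the Monte Carlo estimate of the L^p-norm and the small number of random directions below are illustrative shortcuts, not an exact evaluation of the supremum.

import numpy as np
from math import comb

def rth_difference(f, x, step, r):
    # Delta_step^r[f](x); returns 0 when x + r*step leaves [0,1]^d, as in the definition above.
    x = np.asarray(x, dtype=float)
    step = np.asarray(step, dtype=float)
    if np.any(x + r * step < 0.0) or np.any(x + r * step > 1.0):
        return 0.0
    return sum(comb(r, j) * (-1.0) ** (r - j) * f(x + j * step) for j in range(r + 1))

def modulus_of_smoothness(f, u, r, p, d, n_mc=2000, seed=0):
    # Crude estimate of omega_{r,p}(f,u): maximum over a few random step vectors with norm <= u
    # of the Monte Carlo L^p-norm of the r-th difference.
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(20):
        direction = rng.normal(size=d)
        step = u * rng.uniform() * direction / np.linalg.norm(direction)
        xs = rng.uniform(size=(n_mc, d))
        vals = np.abs([rth_difference(f, x, step, r) for x in xs])
        best = max(best, float(np.mean(vals ** p) ** (1.0 / p)))
    return best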
Using these notions, we define a ball in the Besov space as follows.
For r ∈ℕ with r > β, we define a semi-norm of f: [0,1]^d →ℝ as
f__p,q^β :=
∫_0^∞ ((u^-βω_r,p(f,u))^q u^-1 du )^1/q q < ∞
sup_u > 0 u^-βω_r,p(f,u) q = ∞.
Then, we define a ball of the Besov space with its radius B ≥ 1 as
_p,q^β := { f: [0,1]^d →|f_L^p(λ) + f__p,q^β≤ B }.
The Besov space can represent functions with discontinuity and heterogeneous smoothness, which means that the degree of smoothness of functions varies depending on x.
These properties follow the fact that _1,1^1 coincides with the space of bounded total variation <cit.>.
An important property of heterogeneous smoothness is that deep estimators, such as deep neural networks, tend to have an advantage in estimating such functions.
Specifically, a linear estimator, which is one certain family of non-deep estimators <cit.>, becomes sub-optimal when estimating elements of the Besov space.
The linear estimator has a form f̂^lin(·) = ∑_i=1^n Ψ(·;X_1,...,X_n)Y_i with an arbitrary measurable map Ψ, and includes major estimators such as the kernel ridge estimator.
Then, Theorem 1 in <cit.> implies the following minimax lower bound with d=1 case:
min_f̂^linmax_f^* ∈_p,q^β[ f̂^lin - f^*_L^2(λ)^2 ] ≥ C n^-2 β' / (2β' + d ),
with some C > 0 and β' = β + 1/2 - 1/p.
For p < 2 case, the linear estimator is sub-optimal, hence the rate is slower than the minimax optimal rate Õ(n^-2 β / (2β + d )).
Several studies <cit.> show similar statements.
Therefore, it is important to estimate functions in the Besov space with deep neural networks, since it overcomes the limitations of linear estimators.
§.§.§ Error Analysis
We give a convergence rate of the adversarial estimator with deep neural networks and the preprocessing in (<ref>).
Note that we consider the adversarial risk (<ref>) based on the squared loss function.
We first give the following assumption.
There exists β > 0 such that f^* ∈_p,q^β' holds for every β' ∈ (0,β].
To estimate functions in the Besov space, we have to restrict a set of neural network functions.
Let (L,W,S,B) be a set of neural network functions (<ref>) such that there are S ∈ non-zero parameters and each value is included in [-B̅, B̅] with B≥ 1, then consider the empirical preprocessed adversarial risk (<ref>) on (L,W,S,B) as
f̂∈ arg min_f ∈(L,W,S,B) R_n(f).
Then, we give the convergence rate of the estimator, which corresponds to the minimax optimal rate Õ(n^-2 β / (2β + d )) <cit.>.
Note that this rate is valid regardless of the values of p and q.
Fix p,q ∈ (0,∞].
Consider the regression model (<ref>) and the adversarial estimator f̂ in (<ref>) with the function class (L,W,S,B) by deep neural networks.
Suppose that Assumption <ref>, and <ref> hold with β > d/p.
Further, suppose that ζ_n^2 = O(n^-2β/(2β + d)log^β^* n) for some β^* > 0.
We set L and W as L ≥ C_d,p,β,Blog n, S ≍ W ≍ n^d/(2β + d), and B = O(n^a) with some a > 0.
Then, we obtain the following as n →∞:
[f̂ - f^*_L^∞^2 ] = O( n^-2β / (2β + d)log^3 ∨β^* n ).
The result shows that our estimator with deep neural networks inherits the advantages of both deep and non-deep estimators.
Rigorously, first, it achieves the minimax optimal rate up to log factors.
This optimality is not achieved by the linear estimator and is one of the advantages of using deep neural networks.
Next, the errors are convergent in the sup-norm sense.
This is not shown by deep neural network estimators using the least squares, and is achieved by adversarial training with preprocessing.
Note that the requirement on the preprocessing is satisfied by, for example, the wavelet estimator with β^* = 2β / (2β + d) <cit.>.
The proof of this proposition is a slight modification of the proof of Proposition <ref> in Appendix.
The main update is an analysis of the approximation error by deep neural networks to a function in the Besov space.
Here, we apply the seminal result by <cit.> on the approximation error in the sup-norm.
§ SIMULATIONS
In this section, we conduct simulation experiments to justify the theoretical results.
Specifically, we generate data from a function and then numerically compute the L^∞-risk of the proposed estimator and other standard methods.
We generate n samples from the regression model (<ref>) with the sample size n ∈{400,800,1200,1600} and the noise variance σ^2 ∈{0.0001,0.01,1.0}.
We consider the following three cases as values of f^* on [0,1]^d.
In Case 1, we set d=1 and f^*(x) = 0.3 sin(4 π x) - x + 0.5.
In Case 2, we set d=2 and f^*(x_1,x_2) = sin(4 π x_1) + cos(2 π x_2).
In Case 3, we set d=7 and f^*(x_1,x_2,...,x_7) = 2/x_1 + 0.01 + 3 log (x_2^7 x_3 + 0.1) x_4 + 0.1 x_5^4 x_6^2 x_7.
For estimation, we use a three-layer fully-connected neural network with the ReLU activation function.
The width of each layer is 40.
For training, we use three methods: (i) adversarial training without preprocessing, (ii) adversarial training with preprocessing (our proposal), and (iii) ordinary least squares.
In the adversarial training case (i) and (ii), the value of h is set to 2^-3.
For the adversarial training, we employ the projected descent algorithm <cit.>.
For the preprocessing, we employ the k-nearest neighbor with setting k=3.
To measure the L^∞-risk, we generate 10,000 uniform random variables on the support [0,1]^d and use their maximum to approximate the risk.
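The corresponding evaluation routine is short; the vectorised evaluation of the two functions over rows of the input is an assumption about their implementation.

import numpy as np

def approx_linf_risk(f_hat, f_star, d, n_points=10_000, seed=0):
    # Approximates ||f_hat - f_star||_{L^infty}^2 by the maximum over uniform points on [0,1]^d.
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n_points, d))
    return float(np.max((f_hat(X) - f_star(X)) ** 2))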
Figure <ref> shows the measured L^∞-risk against the sample size n.
We have mainly three findings:
(i) In approximately all cases, our proposed estimator from adversarial training with preprocessing monotonically reduces the L^∞-risk in n.
(ii) The adversarial estimators without preprocessing may or may not be as good as those with preprocessing.
This implies that the magnitude of the bias from adversarial training depends on the shape of the true function f^*.
(iii) The L^∞-risk of the least square estimator generally decreases at a slower rate or does not decrease in all cases.
This supports the possibility that training a deep neural network with least-squares may have difficulty in reducing the L^∞-risk.
§ CONCLUSION AND DISCUSSION
We consider the nonparametric function estimator by deep neural networks that converge in the sense of the sup-norm, i.e., L^∞-norm.
Since deep neural networks do not have a tractable structure such as a linear sum of basis functions as the conventional non-deep estimators, they are not guaranteed to converge in the sup-norm sense.
In this study, we tackle this problem by considering the estimator based on adversarial training.
For the bias due to the adversarial training, we solve this problem by introducing the preprocessing for the data.
As a result, our proposed corrected adversarial converges to the smooth true function with the minimax optimal rate in the sup-norm sense.
Our approach is also valid for functions with general loss and functions with heterogeneous smoothness.
The experiments support our theoretical results.
Future research directions include sup-norm convergence for estimating non-smooth functions.
Although we expect that there are significant obstacles to the sup-norm convergence of estimators for the non-smooth functions, it is interesting to argue how far we can relax the conditions to estimate such functions.
Another direction is the application of uniform confidence bands for functions.
Our sup-norm convergence is useful to study the uncertainty of neural network estimators and constructing uniform confidence bands.
These directions may be a step toward statistical inference with deep neural networks.
§ PROOF FOR MAIN RESULT IN SECTION <REF>
§.§ Overview
We first develop a general theorem with arbitrary preprocessing, then apply the result and prove the results in Section <ref>.
For a preprocessed output Ŷ, we define its residual as
Ξ(x) := Ŷ(x) - f^*(x), x ∈ [0,1]^d.
This notion expresses an error in estimating the true function f^* by the preprocessing Ŷ.
Consider the regression model (<ref>) and the corrected adversarial estimator f̂ as (<ref>) with the function class (L,W) by deep neural networks.
Suppose that Assumption <ref> and <ref> hold.
Then, we obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B h^-d( W^2 L^2 log(WL) log n/n + Φ_L,W^2+ [Ξ_L^∞] Φ_L,W + h^-d[Ξ_L^∞^2 ] ).
We apply Lemma <ref> to bound the sup-norm as
f̂ - f^*_L^∞^2 ≤ 2(C_P_X,p,d h^d)^-1f̂ - f^*_P_X, Δ^2
Note that any f ∈(L,W) is continuous, since it has a form of deep neural network with the ReLU activation with continuity.
We then take an expectation of the bounds and apply Lemma <ref> and Proposition <ref> to obtain
[f̂ - f^*_P_X, Δ^2 ]
≤ 4 [f̂ - f^*_n,Δ^2] + 800 B^2 log N_L,W(δ) + 4118B^2/n + 32 δ B + 8 δ^2
≤( 16[f̂ - f^*_L^∞^2 ]^1/2 + 40 δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2
+ 800 B^2 log N_L,W(δ) + 4118B^2/n + 32 δ B + 8 δ^2 + 4Φ_L,W^2+ 8 [Ξ_L^∞] Φ_L,W + 2 [Ξ_L^∞^2 ],
for δ∈ (0,1].
Note that both f ∈(L,W) and f^* are bounded, the expectations are guaranteed to exist.
We combine this fact with the above inequality to (<ref>), then obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d h^-d( [f̂ - f^*_L^∞^2 ]^1/2 + δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2
+ C_P_X,p,dh^-d( B^2 log N_L,W(δ) + B^2/n + δ B + Φ_L,W^2+ [Ξ_L^∞] Φ_L,W + [Ξ_L^∞^2 ] ),
by setting δ≤ B ∨Φ_L,W, which will be verified later.
We arrange the terms in the above inequality.
For a,b ≥ 0 and z ∈ℝ, z^2 ≤ az + b implies z^2 ≤ 3a^2 + 2b.
We apply this fact with z = [f̂ - f^*_L^∞^2 ]^1/2 and obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B h^-d{log N_L,W(δ)/n + δ + Φ_L,W^2+ [Ξ_L^∞] Φ_L,W + h^-d[Ξ_L^∞^2 ]
+ ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2δ}.
Further, we set δ = 1/n then Lemma <ref> shows
log N_L,W(1/n) = logsup_Q_n N(1/n, (L,W), ·_L^2(Q_n)) ≤ C W^2 L^2 log(WL) log (B n^2).
We substitute these results and obtain the statement.
Suppose P_X satisfies Assumption <ref> and f^* is continuous.
For any bounded and continuous f:[0,1]^d →, we have
f - f^*_P_X,Δ^2 ≥ C_P_X,p,d h^d f - f^*_L^∞^2 .
We apply Lemma <ref> to achieve the statement.
To apply the lemma, we verify that the map x' ↦ (f(x') - f^*(x'))^2 is bounded and continuous by the compactness of the domain [0,1]^d and the assumptions.
Then, we have
f - f^*_P_X,Δ^2 ≥ C_P_X,p,d h^d sup_x' ∈ [0,1]^d (f(x') - f^*(x'))^2 = C_P_X,p,d h^d f - f^*_L^∞^2 .
The inequality follows Lemma <ref> by setting g(·) = (f(·) - f^*(·))^2.
Every f ∈(L,W) is continuous.
Suppose that f^* is continuous and f^*_L^∞≤ B holds.
Then, for any δ > 0, we have
[f̂ - f^*_P_X,Δ^2]
≤ 4 [f̂ - f^*_n,Δ^2] + 800 B^2 log N_L,W(δ) + 4118B^2/n + 32 δ B + 8 δ^2.
Without loss of generality, we assume that N_L,W(δ) ≥ 3 and log N_L,W(δ) ≤ n.
Also, we define the nearest element of the covering set to f̂, that is, we define ĵ := arg min_j' = 1,...,Nsup_Q_nf_j' - f̂_L^2(Q_n).
Let X_i' be an i.i.d. samples from P_X for i = 1,...,n.
Note that Ŷ depends on X_1,...,X_n.
We give a bound on the following difference as
|[f̂ - f^*_P_X,Δ^2] - [f̂ - f^*_n,Δ^2] |
= | [ 1/n∑_i=1^n sup_x' ∈Δ_h^p (X_i') (f̂(x') - f^*(x'))^2 - sup_x' ∈Δ_h^p (X_i) (f̂(x') - f^*(x'))^2 ] |
≤| [ 1/n∑_i=1^n sup_x' ∈Δ_h^p (X_i') (f_ĵ(x') - f^*(x'))^2 - sup_x' ∈Δ_h^p (X_i) (f_ĵ(x') - f^*(x'))^2_=: g_ĵ(X_i,X_i')] |
+ 2 | [ 1/n∑_i=1^n sup_x' ∈Δ_h^p (X_i) (f̂(x') - f_ĵ(x') + f_ĵ(x') - f^*(x'))^2 - sup_x' ∈Δ_h^p (X_i) (f_ĵ(x') - f^*(x'))^2 ] |
≤| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] | + 4 [sup_Q_nf̂ - f_ĵ_L^2(Q_n)^2 ]^1/2[ sup_Q_nf_ĵ - f^*_L^2(Q_n)^2 ]^1/2
+ 2 [ sup_Q_nf̂ - f_ĵ_L^2(Q_n)^2]
≤| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] | + 4 δ[ f_ĵ - f^*_L^∞^2 ]^1/2+ 2 δ^2
≤| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] | + 8 δ B + 2δ^2.
Here, the second last inequality follows Lemma <ref>, using the continuity of f^* and of f ∈(L,W).
The last inequality follows the definition of ĵ and the boundedness of f ∈ and f^* by B.
We further study the first term of the bound (<ref>).
As preparation, we define
r_j = Bmax{[f_j - f^*_P_X,Δ^2 ]^1/2 , (n^-1log N_L,W(δ))^1/2},
for j=1,...,N, and it yields
r_ĵ ≤ B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f_ĵ(x') - f^*(x'))^2 ]^1/2 + B (n^-1log N_L,W(δ))^1/2
≤ B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f̂(x') - f^*(x'))^2]^1/2 +B (n^-1log N_L,W(δ))^1/2 + Bδ.
Here, _X| X_1:n, Y_1:n[ · ] denotes a conditional expectation with given X_1,...,X_n and Y_1,...,Y_n.
By the law of iterated expectation, the first term of the bound is decomposed as
| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] |
= 1/n| [ ∑_i=1^n g_ĵ(X_i,X_i') /r_ĵ_=: g̃_ĵ(X_i,X_i')r_ĵ] |
≤1/n| [ ∑_i=1^n g̃_ĵ(X_i,X_i')( B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f̂(x') - f^*(x'))^2]^1/2 +B (n^-1log N_L,W(δ))^1/2 + Bδ)] |
≤1/n| [ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ( B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f̂(x') - f^*(x'))^2]^1/2)] |
+ B/n| [ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ( (n^-1log N_L,W(δ))^1/2 + δ)]^1/2|
≤B/n| [ ( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') )^2 ]^1/2[f̂ - f^*_P_X,Δ^2 ]^1/2|
+ B/n[ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i')]((n^-1log N_L,W(δ))^1/2 + δ)
≤B/n(36 n log N_L,W(δ) + 256 n)^1/2[ f̂ - f^*_P_X,Δ^2]^1/2+ B/n (6 log N_L,W(δ) + 11).
The first inequality follows (<ref>) and the second last inequality follows the Cauchy-Schwartz inequality.
We also apply Lemma <ref> and 1 ≤log N_L,W(δ) ≤ n to achieve the last inequality.
We substitute the result (<ref>) into the bound (<ref>), then obtain the inequality:
|[f̂ - f^*_P_X,Δ^2] - [f̂ - f^*_n,Δ^2] |
≤B/n(36 n log N_L,W(δ) + 256 n)^1/2[ f̂ - f^*_P_X,Δ^2]^1/2 + B/n (6 log N_L,W(δ) + 11) + 8 δ B + 2δ^2.
We rearrange the term and obtain that
[f̂ - f^*_P_X,Δ^2]
≤ 2 ([f̂ - f^*_n,Δ^2] + B/n (6 log N_L,W(δ) + 11) + 8 δ B + 2δ^2 ) + 8B^2(36 n log N_L,W(δ) + 256 n)/n^2.
Then, we obtain the statement.
Suppose that N_L,W(δ) ≥ 3.
For the function g̃_j(X_i,X_i') defined in the proof of Lemma <ref>, we have
[ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i')] ≤ 6 (n log N_L,W(δ))^1/2 + 32 n^1/2/ 3(log N_L,W(δ))^1/2,
and
[ ( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') )^2] ≤ 36 n log N_L,W(δ) + 256 n.
We first note that for any j = 1,...,N_L,W(δ), we have [g̃_j(X_i,X_i')] = 0, |g̃_j(X_i,X_i')| ≤ 4B^2 /r_j ≤ 4 n^1/2/ (log N_L,W(δ))^1/2 =: M, and
(g̃_j(X_i,X_i')) = 2 r_j^-2( sup_x' ∈Δ_h^p(X_1) (f_j(x') - f^*(x'))^2 )
≤ 2 r_j^-2[ ( sup_x' ∈Δ_h^p(X_1) (f_j(x') - f^*(x'))^2 )^2]
≤ 8 r_j^-2[f_j - f^*_P_X,Δ^2] B^2
≤ 8.
The second inequality follows Hölder's inequality.
Using the bounds above, we apply the Bernstein inequality as
( ∑_i=1^n g̃_j(X_i,X_i') ≥ t) ≤exp( - t^2/2t M/3 + 2n (g̃_j(X_1,X_1')))
≤exp( - t^2/8t n^1/2(log N_L,W(δ))^-1/2 /3 + 16n)
≤exp( - t^2/16t n^1/2(log N_L,W(δ))^-1/2 /3)
= exp( - 3t (log N_L,W(δ))^1/2/16 n^1/2),
for t ≥ 6 (n log N_L,W(δ))^1/2.
The last inequality follows 8t n^1/2(log N_L,W(δ))^-1/2 /3 ≥ 16n for t larger than the threshold 6 (n log N)^1/2.
Using the result (<ref>) associated with t ≥ 6 (n log N_L,W(δ))^1/2, we bound the following expectation:
[ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i')]
= ∫_0^∞( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ≥ t)dt
≤ 6 (n log N_L,W(δ))^1/2 + 2N_L,W(δ) ∫_6 (n log N_L,W(δ))^1/2^∞max_j =1,...,N_L,W(δ)( ∑_i=1^n g̃_j(X_i,X_i') ≥ t)dt
≤ 6 (n log N_L,W(δ))^1/2 + 2N_L,W(δ) ∫_6 (n log N_L,W(δ))^1/2^∞exp( - 3t (log N_L,W(δ))^1/2/16 n^1/2)dt
≤ 6 (n log N_L,W(δ))^1/2 + 32 n^1/2/ 3(log N_L,W(δ))^1/2.
Then, the first statement is proved.
For the second statement, we similarly apply (<ref>) with t ≥ 6 (n log N_L,W(δ))^1/2 and bound the following expectation:
[ ( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') )^2]
= ∫_0^∞( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ≥ t^1/2)dt
≤ 36 n log N_L,W(δ) + 2N_L,W(δ) ∫_6 n log N_L,W(δ)^∞max_j =1,...,N_L,W(δ)( ∑_i=1^n g̃_j(X_i,X_i') ≥ t^1/2)dt
≤ 36 n log N_L,W(δ) + 2N_L,W(δ) ∫_6 n log N_L,W(δ)^∞exp( - 3t^1/2 (log N_L,W(δ))^1/2/16 n^1/2)dt
≤ 36 n log N_L,W(δ) + 256 n.
Then, the second statement is also proved.
Consider the setting in Theorem <ref>.
Then, for any δ∈ (0,1], we have
[f̂ - f^*_n,Δ^2] ≤( 4[f̂ - f^*_L^∞^2 ]^1/2 + 10δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2
+ Φ_L,W^2+ 2 [Ξ_L^∞] Φ_L,W + 2 [Ξ_L^∞^2].
By the definition of the minimization problem, R_n(f̂) ≤R_n(f) holds for any f ∈(L,W); hence we have the following basic inequality:
1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (Ŷ(x') - f̂(x'))^2 ≤1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (Ŷ(x') - f(x'))^2,
which can be rewritten as
1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (f^*(x') + Ξ(x') - f̂(x'))^2 ≤1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (f^*(x') + Ξ(x') - f(x'))^2.
We bound the both-hand side of (<ref>).
The left-hand side (LHS) of (<ref>) is lower bounded as
= 1/n∑_i=1^n max_x' ∈Δ_h^p(X_i){ (f^*(x') - f̂(x'))^2 + Ξ(x')^2 + 2 Ξ(x') (f^*(x') - f̂(x'))}
≥f^* - f̂_n,Δ^2 - Ξ_n,Δ^2 - 2/n∑_i=1^nmax_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f̂(x'))|,
by applying Lemma <ref>.
Similarly, we bound the right-hand side of (<ref>) as
= 1/n∑_i=1^n max_x' ∈Δ_h^p(X_i){ (f^*(x') - f(x'))^2 + Ξ(x')^2 + 2 Ξ(x') (f^*(x') - f(x'))}
≤f^* - f_n,Δ^2 + Ξ_n,Δ^2 +2/n∑_i=1^n max_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f(x'))|.
Combining (<ref>) and (<ref>) with (<ref>), we obtain
f^* - f̂_n,Δ^2 ≤f^* - f_n,Δ^2 + 2 Ξ_n,Δ^2 + 2/n∑_i=1^nmax_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f̂(x'))| _=: T_1
+ 2/n∑_i=1^n max_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f(x'))|
≤Φ_L,W^2 + 2 Ξ_L^∞^2 + T_1 + 2 Ξ_L^∞Φ_L,W,
by the definition of Φ_L,W in (<ref>).
We will bound an expectation the terms.
Note that the expectations of the terms are guaranteed to exist, by the boundedness of f^* and f̂,f ∈(L,W), and Ŷ.
We bound [T_1].
We define the nearest element of the covering set to f̂, that is, we define ĵ := arg min_j' = 1,...,Nsup_Q_nf_j' - f̂_L^2(Q_n).
Then, we bound [T_1] as
[T_1] = [ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x') + f_ĵ(x') - f̂(x'))| ]
≤[ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x'))| ] + [ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') ( f_ĵ(x') - f̂(x'))| ]
≤[ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x'))| f̂ - f^*_L^∞ + δ/f_ĵ - f^*_L^∞]
+ 2 [ sup_Q_nΞ_L^2(Q_n)^2 ]^1/2[ sup_Q_nf_ĵ - f̂_L^2(Q_n)^2]^1/2
≤[ (f̂ - f^*_L^∞ + δ) 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x'))|/f_ĵ - f^*_L^∞_=: Z_ĵ] + 2 [Ξ_L^∞^2 ]^1/2δ.
Since we have
|Z_j| ≤2/n∑_i=1^n | max_x' ∈Δ_h(X_i){| Ξ(x') | | (f^*(x') - f_j(x'))| }/f_j - f^*_L^∞| ≤ 2Ξ_L^∞,
for any j = 1,...,N,
the Cauchy-Schwartz inequality yields
[ (f̂ - f^*_L^∞ + δ) Z_ĵ] ≤[ (f̂ - f^*_L^∞ + δ)^2 ]^1/2[ Z_ĵ^2 ]^1/2
≤ 2( [f̂ - f^*_L^∞^2 ]^1/2 + δ)[ max_j=1,...,N_L,W(δ) Z_j^2 ]^1/2
≤ 4( [f̂ - f^*_L^∞^2 ]^1/2 + δ) ( log N_L,W(δ) + [ Ξ_L^∞^2 ]/n)^1/2.
The last inequality follows the maximal inequality (Theorem 3.1.10 in <cit.>) for the bounded random process.
Using this result, we obtain
[T_1] ≤ 4 ( [f̂ - f^*_L^∞^2 ]^1/2 + δ) ( log N_L,W(δ) + [ Ξ_L^∞^2 ]/n)^1/2 + 2 [Ξ_L^∞^2 ]^1/2δ
≤( 4[f̂ - f^*_L^∞^2 ]^1/2 + 10δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2.
We substitute the bound (<ref>) into the expectation of (<ref>), then obtain the statement.
Fix ε > 0 arbitrary.
Also, we fix C_* = C_P_X,p,d,B as used in the statement of Proposition <ref>.
By the universal approximation theorem (e.g. Theorem 1 in <cit.>) associated with the continuity of f^*, there exists a tuple (L',W') such that
Φ_L',W'≤√(ε h^d/( 4C_*)).
Further, by Assumption <ref>, there exists n̅∈ such that
[Ξ_L^∞^2] ≤√(ε h^2d/(4 C_*)).
Then, for all n ≥n̅, Proposition <ref> yields that
[f̂ - f^*_L^∞^2 ] ≤ C_* h^-d(W'L')^2 log(W'L') log n/n + 3 ε/4.
Then, for any n ≥n̅∨ (4 C_* (W'L')^2 log(W'L') h^-dε^-1), we have [f̂ - f^*_L^∞^2 ] ≤ε/4 + 3ε/4 = ε, which shows the statement.
As preparation, Lemma <ref> gives the following bound
Φ_L,W≤ C_d,β (LW)^-2β/d.
With this bound on Φ_L,W, we apply Proposition <ref> and obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B,d,β h^-d( (WL)^2 log(WL) log n/n + (LW)^-4β/d+ [Ξ_L^∞] (LW)^-2s/d + h^-d[Ξ_L^∞^2] ).
Further, we have
(LW)^-4β/d+ [Ξ_L^∞] (LW)^-2s/d + h^-d[Ξ_L^∞^2] ≤{(LW)^-2β/d + h^-d/2[Ξ_L^∞^2]^1/2}^2,
by applying Jensen's inequality.
Arranging the terms, we obtain the statement.
We start with the inequality (<ref>) in the proof of Theorem <ref> and obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B,d,β h^-d( n^-2β/(2β+d) (log^2 n + 1) + [Ξ_L^∞] n^-β/(2β+d) + h^-d[Ξ_L^∞^2] )
by the setting WL ≍ n^d/(4s + 2d).
§ PROOF FOR APPLICATIONS
§.§ Proof for General Loss Setting
We give proofs of the result in Section <ref>.
Consider the setting in Proposition <ref>.
Then, we have for n such that log N(1/n) ≥ 1:
[R̃ (f̃) - R̃(f^*)] ≤C_ℓ, B ( log N_L,W(1/n) + V^2 )/n^1/2 + C_ℓ (Φ_L,W + [Ξ_n_L^∞]).
This proof is similar to Lemma 3.1 in <cit.>.
A difference between <cit.> and our result is that a property of the loss depends on f in our setting, so we have to modify it.
Hence, we write down the proof.
We develop the proof in the following four steps: (i) a basic decomposition, (ii) bounding a variance, (iii) bounding a bias, and (iv) combining every bound.
Step 1: Basic decomposition.
We define i.i.d. copies of the observations D := {(X_i,Y_i)_i=1^n} as D' := {(X_i',Y_i')_i=1^n}, and also define an excess loss as
g(x,Ŷ,f) = sup_x' ∈Δ_h^p(x)ℓ(f(x'), Ŷ(x')) - sup_x' ∈Δ_h^p(x)ℓ(f^*(x'), Ŷ(x'))
We further define empirical means of the excess loss as G_n(f) := n^-1∑_i=1^n g(X_i,Ŷ,f) with the observations D, and G_n'(f) := n^-1∑_i=1^n g(X_i',Ŷ,f) with the copies D'.
Since f̂ is independent to D', we can rewrite the expected risk as
[R̃(f̂) - R̃(f^*)] = [ _D'[G_n'(f̂) ]].
Since f̂ is the minimizer of the empirical risk and the loss is bounded, we obtain the following inequality of expectations:
[G_n(f̂)] ≤[G_n(f) ],
for any f∈(L,W).
We set f̅ such that f̅ - f^* _L^∞ = inf_f ∈(L,W)f - f^*_L^∞.
Using this fact, we decompose the excess risk as
[R̃(f̂) - R̃(f^*) ] = [ _D'[ G_n'(f̂)]] ≤[ - 2G_n(f̂) + _D'[ G_n'(f̂)]_=:] + 2[ G_n(f̅)_=: ].
The inequality follows (<ref>).
Step 2: Bound the variance [].
We bound an expectation of the term .
By the boundedness of both Ŷ and f̃ by Assumption <ref> and (<ref>), the expectation [] exists.
We prepare additional notations.
Fix δ∈ (0,1].
We consider a covering set {f_j}_j=1^N_L,W(δ)⊂, then we pick f_j from the set such that sup_Q_nf_j - f̃_L^2(Q_n)≤δ.
We define a term g̃(X_i,Ŷ,f̃) by the following reform of as
= 1/n∑_i=1^n {_D'[ G_n'(f̃)] - 2 g(X_i,Ŷ,f̃) } =: 1/n∑_i=1^ng̃(X_i,Ŷ,f̃),
which yields the following form
[] = [1/n∑_i=1^ng̃(X_i,Ŷ,f̃)]
= [1/n∑_i=1^ng̃(X_i,Ŷ,f_j)_:= _1] + [1/n∑_i=1^ng̃(X_i,Ŷ,f̃)- 1/n∑_i=1^ng̃(X_i,Ŷ,f_j)_=: _2] .
We will bound both [_1] and [_2], separately.
We bound the term [_2].
Since g in (<ref>) is Lipschitz continuous in f with its Lipschitz constant C_ℓ by Lemma <ref>, we easily see that g̃ is Lipschitz continuous in f with its Lipschitz constant 6C_ℓ.
Thus, we obtain that
[_2] ≤| [1/n∑_i=1^ng̃ (X_i,Ŷ,f̃)] - [1/n∑_i=1^ng̃ (X_i, Ŷ,f_j)] | ≤ 6 C_ℓδ.
Next, we bound the term [_1].
Here, we need to consider a uniformly bounded function y:[0,1]^d → [-B,B].
For each f_j in the covering set, t > 0, and the bounded function y, we use the Bernstein inequality to derive its stochastic upper bound.
As preparation, we consider a threshold B_n ≥ 1 depending on n and define a clipped preprocessing Ŷ_B_n(·) := max{min{Ŷ(·), B_n}, -B_n}.
We firstly approximate [_1] by the Lipschitz continuity of ℓ as
[_1] ≤[1/n∑_i=1^ng̃(X_i,Ŷ_B_n,f_j)] + 6 C_ℓ[Ŷ - Ŷ_B_n_L^∞].
Since |Ŷ(x) - Ŷ_B_n(x)| = |Ŷ(x)| {|Ŷ(x)| ≥ B_n} holds, we can bound the expectation in the second term of the right-hand side as
[Ŷ - Ŷ_B_n_L^∞] = [ sup_x ∈ [0,1]^d |Ŷ(x)| {|Ŷ(x)| ≥ B_n}|]
≤[ sup_x ∈ [0,1]^d |Ŷ(x)| sup_x ∈ [0,1]^d{|Ŷ(x)| ≥ B_n}|]
≤[Ŷ_L^∞{Ŷ_L^∞≥ B_n}]
≤[Ŷ_L^∞^2 / B_n].
The last inequality follows {x ≥ 1}≤ x for any x ≥ 0.
The existence of the second moment is guaranteed by Assumption <ref>.
We put this result to (<ref>) and obtain
[_1] ≤[1/n∑_i=1^ng̃(X_i,Ŷ_B_n,f_j)] + 6 C_ℓ[Ŷ_L^∞^2 / B_n].
Then, we bound the first term [n^-1∑_i=1^ng̃(X_i,Ŷ_B_n,f_j)].
Since we have |g(x,Ŷ_B_n,f)| ≤ C_ℓ ( B_n ∨ B) for any x ∈ [0,1]^d and f: f_L^∞≤ B, we obtain the following inequality with fixed Ŷ_B_n:
( 1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j) > t)
=(_D'[ g(X_i',Ŷ_B_n,f_j)] - 2/n∑_ i=1^n g(X_i,Ŷ_B_n,f_j) > t )
=(_D'[ g(X_i',Ŷ_B_n,f_j)] - 1/n∑_ i=1^n g(X_i,Ŷ_B_n,f_j) > t/2 + 1/2_D'[ g(X_i',Ŷ_B_n,f_j)] )
≤(_D'[ g(X_i',Ŷ_B_n,f_j)] - 1/n∑_ i=1^n g(X_i,Ŷ_B_n,f_j) > t/2 + 1/2_D'(g(X_i, Ŷ_B_n, f_j))/4 C_ℓ B_n)
≤exp( - n(t')^2/2 _D'(g(X_i, Ŷ_B_n, f_j)) + 16 C_ℓ ( B_n ∨ B) t'/3 )
≤exp( - n(t')^2/2 t' C_ℓ ( B_n ∨ B) + C_ℓ ( B_n ∨ B) t'/3 )
≤exp( - n(t')^2/16 t' C_ℓ ( B_n ∨ B) + 16 C_ℓ ( B_n ∨ B) t'/3 )
≤exp( - 3 n t'/64 C_ℓ ( B_n ∨ B))
≤exp( - 3 n t/128 C_ℓ ( B_n ∨ B)).
The first and third inequalities follow from _D'(g(X_i, Ŷ_B_n, f_j)) ≤ 4 C_ℓ B_n _D'[g(X_i, Ŷ_B_n, f_j)], and the second and last inequalities follow from the setting t' = t/2 + _D'(g(X_i, Ŷ_B_n, f_j))/(8 C_ℓ (B ∨ B_n)).
Using this inequality for a uniform bound in terms of the covering set {f_j}_j=1^N_L,W(δ) and the independent random functions Ŷ and Ŷ_B_n, we obtain
( max_j = 1,...,N_L,W(δ)1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j) > t ) ≤ N_L,W(δ) exp( - 3nt/128 C_ℓ ( B_n ∨ B) t ).
Then, by the maximal inequality (Corollary 2.2.8 in <cit.>), for any η > 0, we have
[max_j=1,...,N_L,W(δ)[1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j)]]
≤η + ∫_η^∞( max_j = 1,...,N_L,W(δ)1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j) > t ) dt
≤η + ∫_η^∞ N_L,W(δ) exp( - 3nt/128 C_ℓ ( B_n ∨ B) t ) dt
≤η + N_L,W(δ) (128 C_ℓ ( B_n ∨ B))/3nexp( - 3 n η/ 128 C_ℓ ( B_n ∨ B) ) .
We set B_n = n^1/2, hence we have (B ∨ B_n) ≤ C_B n^1/2.
Also, we set η = (128 C_B,ℓ n^1/2) log N_L,W(δ) / (3n) and put this result into (<ref>), we obtain
[_1] ≤[max_j=1,...,N[1/n∑_i=1^ng̃ (X_i,Ŷ,f_j)]] ≤C_ℓ,B (log N_L,W(δ) + [Ŷ_L^∞^2 ])/n^1/2.
Combining the inequalities (<ref>) and (<ref>) into (<ref>) and set δ = 1/n, we obtain
[] ≤(2 C_ℓ^2 B_2 + C_ℓ B/3) (log N_L,W(1/n) + [Ŷ_L^∞^2 ])/n^1/2.
Step 3: Bound the bias [].
By the Lipschitz continuity of the loss ℓ by Assumption <ref>, we have
[] = [ 1/n∑_i=1^n sup_x' ∈Δ_h^p(X_i)ℓ( f̅(x'), Ŷ(x')) ]
≤[ sup_x ∈[0,1]^dℓ( f̅(x), Ŷ(x)) ]
≤[sup_x' ∈[0,1]^d C_ℓ |f̅(x) - Ŷ(x)| + ℓ(Ŷ(x), Ŷ(x)) ]
≤ C_ℓ[f̅ - Ŷ_L^∞]
≤ C_ℓ (f̅ -f^*_L^∞ + [f^*- Ŷ_L^∞ ])
≤ C_ℓ (Φ_L,W + [Ξ_n_L^∞]).
The last inequality holds by setting f such that f - f^* _L^∞ = inf_f ∈(L,W)f - f^*_L^∞.
Step 4: Combining the bounds.
We combine the result in Step 3 and Step 4 into the decomposition (<ref>), then obtain the statement.
Consider the expected adversarial risk R̃(·) with general losses as (<ref>).
Then, for the estimator f̃ as (<ref>) and q ∈ [1,∞), we have
f^* - f̃_L^∞^q ≤ C_P_X,p,d,ℓ,q h^-d( R̃(f̃) - R̃(f^*) + Ξ_L^∞^q ∨Ξ_L^∞).
We develop a lower bound of R̃(f̃) - R̃(f^*) as
R̃(f̃) - R̃(f^*) = _X[sup_x' ∈Δ_h^p(X)ℓ(Ŷ(x'), f̃(x')) - sup_x' ∈Δ_h^p(X)ℓ(Ŷ(x'), f^*(x')) ]
≥ C_P_X,p,d h^d sup_x ∈ [0,1]^d |ℓ(Ŷ(x'), f̃(x'))| - C_ℓŶ - f^*_L^∞
≥ C_P_X,p,d,ℓ h^d Ŷ - f̃_L^∞^q - C_ℓΞ_L^∞
≥ C_P_X,p,d,ℓ,q h^d ( f^* - f̃_L^∞^q - Ξ_L^∞^q ) - C_ℓΞ_L^∞ .
Here, the first inequality follows Lemma <ref> and the Lipschitz continuity of ℓ by Assumption <ref>, and the last inequality follows (a+b)^q ≤ C_q (a^q + b^q) for q ∈ [1,∞) and a,b ≥ 0.
By Proposition <ref> and Lemma <ref>, we have
[f^* - f̃^2_L^∞] ≤ C_P_X, p,d,ℓ,q h^-2d/q( [(R̃(f̃) - R(f^*))^2/q] + [ Ξ_L^∞^2] )
≤ C_P_X,B, p,d,ℓ,q, h^-2d/q{(log N_L,W(1/n) /n^1/2)^2/q + Φ_L,W^2/q + ζ_n^2 }
≤ C_P_X,B, p,d,ℓ,q,V h^-2d/q{( W^2L^2 log(WL) log n /n^1/2)^2/q + Φ_L,W^2/q + ζ_n^2 }.
The last inequality follows Lemma <ref>.
We set WL ≍ n^d/(4β + 4d) and obtain the statement.
§.§ Proof of Adaptation to Besov Space
We give proof of the result in Section <ref>.
To show the statement, we slightly modify the proof of Proposition <ref>.
We start from the inequality (<ref>) with setting δ = 1/n.
Since we use (L,W,S,B) as a set of candidate functions instead of (L,W), we obtain the following updated inequality of (<ref>) as
[f̂ - f^*_L^∞^2 ] ≤ C_P_X,p,d,B h^-d{logÑ_L,W,S,B(1/n)/n + Φ̃_L,W,S,B^2 + ζ_n^2 },
which replaces N_L,W(1/n) by Ñ_L,W,S,B(1/n) := sup_Q_n N(1/n, (L,W,S,B), ·_L^2(Q_n)) and Φ_L,W by Φ̃_L,W,S,B := inf_f ∈(L,W,S,B)f - f^*_L^∞.
We study the terms Ñ_L,W,S,B(1/n) and Φ̃_L,W,S,B.
For the approximation error term Φ̃_L,W,S,B, we apply Lemma <ref> by setting r = ∞ and obtain
Φ̃_L,W,S,B≤ C_d,β N^-β/d,
for sufficiently large N such that L ≥ C_d,p,β,Blog (N), W = C_d,βN, S=(L-1)C_d,βN + N.
About the entropy term Ñ_L,W,S,B(1/n), we apply Lemma <ref> and obtain
logÑ_L,W,S,B(1/n) ≤log N(1/n, (L,W,S,B), ·_L^∞)
≤ LS log(n LB(1+S))
≤ C_d,β L^2 N log (n L^2 B N)
≤ C_d,p,β,B N log^2(N) log (nN log^2(N)),
by substituting the setup of L,S,W and B.
We substitute (<ref>) and (<ref>) into (<ref>) and obtain
[f̂ - f^*_L^∞^2 ] ≤ C_P_X,p,d,B,β h^-d{ N log^2(N) log (nN log^2(N))/n + N^-2β/d + ζ_n^2 }.
We set N ≍ n^d/(2β + d) and obtain the statement.
§ SUPPORTIVE RESULT
Consider a non-negative bounded continuous function g:[0,1]^d →_+.
Then, we have
_X[sup_x' ∈Δ_h^p(X) g(x') ] ≥g_L^∞ P_X(Δ_h^p(x^*)),
with x^* ∈ arg max_x ∈ [0,1]^d g(x).
Further, if Assumption <ref> holds, then we have
_X[sup_x' ∈Δ_h^p(X) g(x') ] ≥g_L^∞ h^d C_P_X,p,d.
Let A := {x ∈ [0,1]^d | g(x) = max_x' ∈ [0,1]^d g(x')} be a set of argmax of g(x), which is non-empty because of the compactness of [0,1]^d and boundedness/continuity of g.
Also, we define a union Δ_A := ∪_x ∈ AΔ_h^p({x}).
By the non-negativity of g, we obtain
_X[sup_x' ∈Δ_h^p(X) g(x') ] ≥_X[sup_x' ∈Δ_h^p(X) g(x') {X ∈Δ_A }]
= _X[sup_x ∈ [0,1]^d g(x) {X ∈Δ_A }]
= g_L^∞ P_X(Δ_A).
Hence, we obtain the first statement.
We consider that Assumption <ref> holds.
We develop a lower bound of P_X(Δ_A) as
P_X(Δ_A) ≥inf_x ∈ A P_X( Δ_h^p({x})) ≥ C_P_Xinf_x ∈ Aλ( Δ_h^p({x})) ≥ C_P_Xinf_x ∈ [0,1]^dλ( Δ_h^p({x})),
where C_P_X is a lower bound of a density function of P_X defined in Assumption <ref>, and λ(·) is the Lebesgue measure.
Since the Lebesgue of the L^p-ball is known, we obtain that
inf_x ∈ [0,1]^dλ( Δ_h^p({x})) = Γ(1/p + 1)^d/Γ(d/p + 1)h^d,
where Γ (·) is the Gamma function.
Then, we obtain the second statement.
We develop the following covering number bound.
The following lemma immediately holds by <cit.> and <cit.>.
Consider the set of deep neural networks as (<ref>) with the depth L, the width W, and the upper bound B.
For any δ > 0 and sufficiently large n, we have
log N(δ, (L,W), ·_L^2(P_n)) ≤ C W^2 L^2 log(WL) log (B n /δ).
Let D be the VC-dimension of , and S(≤ W^2 L) be a number of parameters in .
By Theorem 3 in <cit.>, we bound the VC-dimension as D = O(S L log(S)) ≤ O(W^2 L^2 log (WL)).
Using this inequality and Theorem 12.2 in <cit.>, we have
log N(δ, (L,W), ·_L^2(P_n)) ≤ D log( en B/δ D) ≤ C W^2 L^2 log(WL) log (B n /δ),
for n = Ω(W^2 L^2 log (WL)).
Consider a non-empty compact set T ⊂^d with some d and continuous bounded functions f,f':T →.
Then, we have
|sup_t ∈ T(f(t) + f'(t))^2 - sup_t ∈ Tf(t)^2 | ≤ 2 f_L^∞f'_L^∞ + f'_L^∞^2.
We define the optimal values t^* ∈ T and t^†∈ T such that sup_t ∈ T(f(t) + f'(t))^2 = (f(t^*) + f'(t^*))^2 and sup_t ∈ Tf(t) ^2 = f(t^†)^2.
Note that such t^* ∈ T and t^†∈ T exist by the compactness of T and the continuity of f and f'.
We first derive the following inequality
sup_t ∈ T(f(t) + f'(t))^2 - sup_t ∈ Tf(t) ^2 ≤ f(t^*)^2 + 2 f(t^*)f'(t^*) + f'(t^*)^2 - f(t^*)^2
≤ 2 f_L^∞f'_L^∞ + f'_L^∞^2.
Second, we develop a bound for the opposite side as
sup_t ∈ Tf(t)^2 - sup_t ∈ T(f(t) + f'(t))^2 ≤ f(t^†)^2 - (f(t^†) + f'(t^†))^2
≤ 2f(t^†) f'(t^†) - f'(t^†)^2
≤ 2 f_L^∞f'_L^∞ + f'_L^∞^2.
Then, we obtain the statement.
For any continuous and bounded functions f,g on a compact set I, we have
max_t ∈ I (f(t) + g(t)) ≥max_t ∈ I f(t) - max_t ∈ I |g(t)|.
Let t' ∈ I be a point such that max_t ∈ I (f(t) + g(t)) = f(t') + g(t'), which is guaranteed to exist by the compactness of I and the boundedness/continuity of f,g.
The statement simply follows
max_t (f(t) + g(t)) = f(t') + g(t') ≥ f(t') - |g(t')| ≥max_t(f(t)) - max_t |g(t')|.
Consider functions f,f', y: [0,1]^d → [-B,B], and a loss function ℓ satisfying Assumption <ref>.
Also, consider the function g as in (<ref>).
For any x ∈ [0,1]^d, we have
g(x,y,f) - g(x,y,f') ≤ C_ℓ |f(x̅) - f'(x̅)|,
for some x̅∈ [0,1]^d.
We define x^* ∈Δ_h^p(x) such that ℓ(y(x^*), f(x^*)) = max_x' ∈Δ_h^p(x)ℓ(y(x'), f(x')).
Its existence follows the continuity of f, f',y, and ℓ.
For f,f' ∈ L^2([0,1]^d), we have
g(x,y,f) - g(x,y,f') = max_x' ∈Δ_h^p(x)ℓ(y(x'),f(x')) -max_x' ∈Δ_h^p(x)ℓ(y(x'),f'(x'))
≤ℓ(y(x^*),f(x^*)) - ℓ(y(x^*),f'(x^*))
≤ C_ℓ |f(x^*) - f'(x^*)|.
The first inequality follows max_x' ∈Δ_h^p(x)ℓ(y(x'), f(x')) = ℓ(y(x^*), f(x^*)), and the second inequality follows the Lipschitz continuity of ℓ in the second argument from Assumption <ref>.
Thus, we obtain the statement.
Fix N,M ∈ arbitrarily.
If (L,W) is a set of functions with W= C_d (N+2) log_2 (8N) and L= C_s (M+2) log_2 (4M) + 2d, we have
inf_f ∈(L,W)sup_f^* ∈ C^s_1([0,1]^d)f - f^*_L^∞≤ C_d,s N^-2s/d M^-2s/d.
Fix p,q,r∈ (0, ∞] and β∈ (0,∞).
Suppose that β > d max{1/p-1/r, 0} holds.
Let (L,W,S,B) be a set of neural network functions (<ref>) such that there are S ∈ non-zero parameters and each value is included in [-B̅, B̅] with B≥ 1.
Let N be a sufficiently large number and set L ≥ C_d,p,β,Blog (N), W = C_d,βN, S=(L-1)C_d,βN + N, and B̅ polynomially increasing in N.
Then, we have
sup_f^0 ∈_p,q^βinf_f ∈(L,W,S,B)f^0 - f_L^r(λ)≤ C N^-β/d,
with some constant C > 0 independent of N.
For ε∈ (0,1], we obtain
log N(ε, F(L,W,S,B)) ≤ LS log(ε^-1 LB(1+S)).
§ PROOF OF INCONSISTENCY
We first specify the coordinates of the setting.
We consider two points x = (0.3, 0.5, 0.5, ...,0.5), x' = (0.7,0.5, 0.5, ...,0.5)∈ [0,1]^d, and a marginal measure as a mixture of Dirac measures on the points; P_X = 0.5 δ_{x} + 0.5 δ_{x'}.
We also specify the true function with an input x = (x_1,...,x_d) ∈ [0,1]^d as f^*(x) = - {x_1 < 0.4} + 10 (x_1 - 0.5){0.4 ≤ x_1 ≤ 0.6} + {x_1 > 0.6}, and the noise variable ξ_i as a uniform random variable on [-0.1,0.1].
For the adversarial training, we set p=∞ and h = 0.5.
We study an empirical risk minimizer in this setting.
Since the inputs X_1,...,X_n are either of x or x', we set n_1 := |{i: X_i = x}| and n_2 := |{i: X_i = x'}| such that n = n_1 + n_2.
With the specified coordinates above, we rewrite an empirical risk of f:[0,1]^d → with the adversarial training as
1/n∑_i=1^n max_x ∈Δ_h^p(X_i) (Y_i - f(x))^2
=1/n∑_i: X_i = xmax_x ∈Δ_h^p(X_i) (f^*(X_i) + ξ_i - f(x))^2 + 1/n∑_i: X_i = x'max_x ∈Δ_h^p(X_i) (f^*(X_i) + ξ_i - f(x))^2
=1/n∑_i: X_i = xmax_x ∈ [0,1]^d: x_1 ∈ [0,0.8] (-1 + ξ_i - f(x))^2 + 1/n∑_i: X_i = x'max_x ∈ [0,1]^d: x_1 ∈ [0.2,1] (1 + ξ_i - f(x))^2,
which follows from f^*(x) = -1 and f^*(x') = 1.
To minimize this empirical risk in terms of f, we restrict a class of f.
Specifically, we set f with an input x = (x_1,...,x_d) as having a form f(x) = c_1 {x_1 ≤ 0.2} + c_2 {0.2 < x_1 < 0.8} + c_3 {0.8 ≤ x_1} with some constants c_1,c_2,c_3 ∈.
This form of f can minimize the risk, since the risk depends only on the value of f in each of these regions.
Then, we rewrite the risk as
(<ref>) =1/n∑_i: X_i = xmax{ (-1 + ξ_i - c_1)^2 , (-1 + ξ_i - c_2)^2} + 1/n∑_i: X_i = x'max{ (1 + ξ_i - c_2)^2 , (1 + ξ_i - c_3)^2 }.
Here, we consider an event |n_1/2 - n/2| ≤ 1, which appears with probability 1-2 exp(-2/n) ≥ 0.5 with n ≥ 3, by Hoeffding's inequality.
In this case, a simple calculation yields that c_2 ∈ [-0.2, 0.2] minimizes (<ref>), since it prevents quadratic growth of the risk in terms of c_2; choosing c_1 close to -1 and c_3 close to 1 then gives (-1 + ξ_i - c_1)^2 < (-1 + ξ_i - c_2)^2 and (1 + ξ_i - c_2)^2 > (1 + ξ_i - c_3)^2.
Then, we rewrite the risk (<ref>) as
(<ref>) = 1/n∑_i: X_i = x (-1 + ξ_i - c_2)^2 + 1/n∑_i: X_i = x'(1 + ξ_i - c_2)^2,
and the minimization on it by c_2 yields the following optimal choise
c_2^* = n_2 - n_1/n + 1/n∑_i=1^n ξ_i.
Then, we have that the original risk (<ref>) is minimized by the following function
f̌(x) := c_1^* {x_1 ≤ 0.2} + c_2^* {0.2 < x_1 < 0.8} + c_3^* {0.8 ≤ x_1},
where c_1^* = -1 + n_1^-1∑_i: X_i = xξ_i and c_3^* = 1 + n_2^-1∑_i: X_i = x'ξ_i.
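The construction can also be checked numerically; the following simulation (purely illustrative and not part of the proof) draws the two-point design, computes c_2^* as above, and reports the resulting lower bound 1 + (c_2^*)^2 on the squared sup-norm risk.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
labels = rng.integers(0, 2, size=n)            # 0 -> input x, 1 -> input x'
xi = rng.uniform(-0.1, 0.1, size=n)            # noise on [-0.1, 0.1]
n1, n2 = int(np.sum(labels == 0)), int(np.sum(labels == 1))
c2_star = (n2 - n1) / n + xi.mean()            # optimal middle level of the adversarially trained fit
print("squared sup-norm risk is at least", 1.0 + c2_star ** 2)   # always >= 1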
Finally, we evaluate the L^∞-risk of f̌.
Simply, we have
f̌ - f^*_L^∞^2 ≥f̌ - f^*_L^2(P_X)^2
= _X ∼ P_X[ (f̌(X) - f^*(X) )^2 ]
= 1/2{ (f̌(x) - f^*(x) )^2 + (f̌(x') - f^*(x') )^2}
= 1/2{ (c_2^* +1 )^2 + (c_2^* - 1)^2}
= 1 + (c_2^*)^2
≥ 1.
Hence, we show the statement of Proposition <ref>.
entry_id: http://arxiv.org/abs/2307.06031v1
published: 20230712092423
title: On the Design of Nonlinear MPC and LPVMPC for Obstacle Avoidance in Autonomous Driving
authors: Maryam Nezami, Dimitrios S. Karachalios, Georg Schildbach, Hossam S. Abbas
primary_category: eess.SY
categories: eess.SY, cs.SY
In this study, we are concerned with autonomous driving missions when a static obstacle blocks a given reference trajectory. To provide a realistic control design, we employ model predictive control (MPC) utilizing nonlinear state-space dynamic models of a car with linear tire forces, allowing for optimal path planning and tracking to overtake the obstacle.
We provide solutions with two different methodologies. Firstly, we solve a nonlinear MPC (NMPC) problem with a nonlinear optimization framework, capable of considering the nonlinear constraints. Secondly, by introducing scheduling signals, we embed the nonlinear dynamics in a linear parameter varying (LPV) representation with adaptive linear constraints for realizing the nonlinear constraints associated with the obstacle. Consequently, an LPVMPC optimization problem can be solved efficiently as a quadratic programming (QP) that constitutes the main novelty of this work. We test the two methods for a challenging obstacle avoidance task and provide qualitative comparisons. The LPVMPC shows a significant reduction in terms of the computational burden at the expense of a slight loss of performance.
§ INTRODUCTION
In recent years, there has been a growing interest in developing autonomous driving vehicles. One of the key challenges in autonomous driving is navigating through complex environments and avoiding collisions with obstacles safely. Model predictive control (MPC) is a powerful control algorithm that has been widely used in the area of autonomous driving. MPC is particularly effective in controlling a vehicle because it can incorporate prior knowledge of the system dynamics, environmental information, as well as state and input constraints when computing a control input. Considering these factors, MPC can generate optimized control inputs that satisfy the constraints, resulting in high system performance and safety.
MPC has been widely applied for obstacle avoidance in autonomous vehicles, see, e.g., <cit.>. It can be utilized to generate optimal trajectories that steer the vehicle away from obstacles in its path while respecting safety constraints.
Given that vehicles are safety-critical systems, the use of nonlinear MPC (NMPC) is gaining popularity due to its ability to utilize high-fidelity nonlinear models of vehicle dynamics, thereby enabling more accurate and precise control actions.
The work presented in <cit.> has objectives similar to the current study, but it assumed constant longitudinal speed to solve the NMPC problem with sequential quadratic programs (SQP).
In <cit.>, an NMPC algorithm for path tracking has been proposed. This approach incorporates braking control before steering at high speeds. Investigating the effect of obstacle constraints on the algorithm's performance is interesting. This paper aims to keep the vehicle's operation stable while hard constraints are enforced in the optimization problem.
In <cit.>, a method for generating safe and efficient driving trajectories for autonomous vehicles using NMPC has been introduced. The numerical solution of the NMPC was obtained using a genetic algorithm strategy, which does not offer a guarantee of convergence.
The computational burden is a significant challenge for applying NMPC, particularly for systems with many states and constraints. Given the computational challenges, there has been increasing attention to linear parameter varying (LPV) modeling methods to embed nonlinear dynamics in a linear setting <cit.>. Although the application of LPVMPC in autonomous driving has not yet received much attention in the literature, there are promising results reported in recent studies. In <cit.>, a control architecture for lane-keeping has been suggested where a tube-based LPVMPC showed robust performance in lane-keeping. In <cit.>, an online planning solution based on LPVMPC for autonomous racing has been proposed to improve the computational time while preserving the system's performance.
Contributions:
This paper proposes a novel LPV embedding for the nonlinear vehicle dynamics as a first step toward LPVMPC implementation. Such an LPV embedding could be of interest for convergence and feasibility analysis based on convex optimization tools. As a second step, the computation of the kinematic trajectories from a fixed map is presented first. Then, we propose a linear formulation of the nonlinear constraints associated with the obstacle, which allows a tractable LPVMPC optimization problem using quadratic programming (QP). The proposed LPVMPC scheme integrates path planning and control into one optimization problem, deciding when to initiate the overtaking maneuver while ensuring that the vehicle stays within the road boundaries. Finally, to verify the effectiveness of the proposed methods, simulation results are compared to the full nonlinear implementation, and further discussions are given.
Contents: <ref> presents the nonlinear vehicle model and the nonlinear obstacle avoidance constraint, followed by the introduction of the linear representation of the obstacle constraint and the linear parameter varying modeling. In <ref>, the models and constraints from <ref> are used to set up the NMPC and the LPVMPC for obstacle avoidance. <ref> shows the implementation of the methods and compares their performance in the proposed obstacle avoidance scenarios. Finally, a few concluding remarks are presented in <ref>.
Notations and definitions: The notation Q ≻ 0 represents the positive definiteness of a matrix Q. The weighted norm x_Q is defined as x_Q^2 = x^⊤ Q x. The function diag(𝐱) constructs a diagonal matrix from a vector 𝐱. A halfspace is defined as { x ∈ℝ^n | a^⊤ x ≤ b}. The set of non-negative integers is denoted by ℤ_+∪{ 0}.
§ VEHICLE MODEL AND CONSTRAINTS
Consider the following discrete-time nonlinear system
z_k+1 = f(z_k,u_k), ∀ k ∈ℤ_+∪{ 0},
where z_k ∈ℝ^n and u_k ∈ℝ^m are the state and input vectors, respectively, at the instant k. The initial condition is z_0. The system is subject to the following state and input constraints:
z_k ∈𝒵_k and u_k ∈𝒰 = { u_k ∈ℝ^m | G^u u_k ≤ h^u }.
Here, 𝒵_k represents a time-varying set, and its formulation will be discussed in the subsequent section. Within the input constraint, we have G^u ∈ℝ^q_u × m and h^u ∈ℝ^q_u. The number of rows, q_u, depends on the number of inputs that have upper bounds, lower bounds, both upper and lower bounds, or no bounds at all.
In this section, the representation of the vehicle dynamics in the form (<ref>) using a dynamic bicycle model <cit.> is given, as well as the constraints formulation to handle the obstacle avoidance.
§.§ Dynamic Bicycle Model
Based on <cit.>, the differential equations describing the motion at time t≥ 0 of a vehicle are presented as follows
Ẋ(t) = υ(t)cosψ(t)-ν(t)sinψ(t),
Ẏ(t) = υ(t)sinψ(t)+ν(t)cosψ(t),
υ̇(t) = ω(t) ν(t) + a(t),
ν̇(t) = -ω(t)υ(t) + 2/m( F_ yf(t) cosδ(t) + F_ yr(t)),
ψ̇(t) = ω,
ω̇(t) = 2/I_ z ( l_ f F_ yf(t) - l_ r F_ yr(t)),
where X, Y, υ, ν, ψ and ω denote the global X axis coordinate of the center of gravity (GoG), the global Y axis coordinate of the CoG, the longitudinal speed in body frame, the lateral speed in body frame, the vehicle yaw angle and the yaw angle rate, respectively. The control inputs of the system are the longitudinal acceleration a and the steering angle δ. The vehicle moment of inertia and mass are denoted by I_ z and m, respectively. The lateral forces acting on the front and rear tires are denoted as F_ yf and F_ yr, respectively, and calculated as F_yf = C_α fα_ f, F_yr = C_α rα_ r.
The parameters C_α f and C_α r represent the cornering stiffness of the front and rear tire, respectively. The slip angle of the front tire is α_ f and is calculated as α_ f = δ - (ν + l_ fω)/υ. The rear tire slip angle is α_ r and is calculated as α_ r = (l_ rω -ν )/υ. The parameters and variables are illustrated in <ref> and in <ref>.
To utilize the model in <ref> in an MPC framework, it is necessary to discretize the model. One of the commonly used methods for obtaining the corresponding discrete-time system is the forward Euler method[Forward Euler: ż(t_k)≈z(t_k+t_s)-z(t_k)/t_s, for t_k=t_sk, k=0,1,…]. Therefore,
the vehicle dynamics in <ref> can be written as in <ref>,
where z_k = [ X_k Y_k υ_k ν_k ψ_k ω_k ]^⊤, u_k = [ δ_k a_k ]^⊤ with the sampling time given in Table <ref>.
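To make the discretization concrete, the following is a minimal Python sketch of one forward-Euler step of the dynamic bicycle model above, i.e., z_{k+1} = z_k + t_s f(z_k, u_k). The function name and the params dictionary are illustrative placeholders; the vehicle constants are assumed to come from Table <ref>.

import numpy as np

def bicycle_step(z, u, params, ts):
    # z = [X, Y, v_lon, v_lat, psi, omega], u = [delta, a]; params holds the
    # vehicle constants m, Iz, lf, lr, Caf, Car (placeholder names).
    X, Y, vx, vy, psi, omega = z
    delta, a = u
    m, Iz, lf, lr, Caf, Car = (params[k] for k in ("m", "Iz", "lf", "lr", "Caf", "Car"))

    # Linear tire forces from the front and rear slip angles
    alpha_f = delta - (vy + lf * omega) / vx
    alpha_r = (lr * omega - vy) / vx
    Fyf, Fyr = Caf * alpha_f, Car * alpha_r

    dz = np.array([
        vx * np.cos(psi) - vy * np.sin(psi),                   # X_dot
        vx * np.sin(psi) + vy * np.cos(psi),                   # Y_dot
        omega * vy + a,                                        # v_lon_dot
        -omega * vx + 2.0 / m * (Fyf * np.cos(delta) + Fyr),   # v_lat_dot
        omega,                                                 # psi_dot
        2.0 / Iz * (lf * Fyf - lr * Fyr),                      # omega_dot
    ])
    return np.asarray(z, dtype=float) + ts * dz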
§.§ Constraints
To ensure the vehicle stays within the boundaries of the road, constraints are enforced on the (X_k,Y_k) coordinates of the vehicle. One approach <cit.>, involves computing the lateral error of the vehicle's center of gravity, e_k^ lat as follows
e_k^ lat = - sin(ψ_k^ ref ) ( X_k - X_k^ ref) + cos(ψ_k^ ref) (Y_k - Y_k^ ref ),
where X_k^ ref, Y_k^ ref and ψ_k^ ref are the longitudinal position, the lateral position, and the orientation, respectively, on a point on a given reference trajectory at step k. Then, the following constraint ensures that the vehicle's CoG always stays within the boundaries of the road
- R_1,k≤ e_k^ lat≤ R_2,k,
where R_1,k and R_2,k are the road widths on the right and left sides of the reference trajectory at step k.
The constraints associated with the road boundaries in <ref> can be computationally expensive due to their nonlinear nature. To address this challenge, an alternative linear constraint is proposed as follows
[ a_1,k b_1,k; -a_2,k -b_2,k ][ X_k; Y_k ]≤[ c_1,k; -c_2,k ],
where -a_1,k X_k - b_1,k Y_k ≥ -c_1,k and a_2,k X_k + b_2,k Y_k ≥ c_2,k are half-spaces defined by the tangent to the road boundary at step k, which ensure the (X_k,Y_k) coordinates at step k to remain within the two half-spaces. Imposing such linear constraints allows more efficient computations in the MPC optimization problem.
An obstacle can be efficiently represented as an ellipse. For simplicity, we consider circular obstacles here.
To impose the obstacle constraints in the NMPC, one possible approach is to calculate the Euclidean distance between the (X_k,Y_k) coordinates and the center of the obstacle and to ensure that the vehicle's (X_k,Y_k) coordinates always remain outside the obstacle, as described below
( X_obs - X_k)^2 + (Y_obs - Y_k)^2 ≥ r^2,
where (X_obs, Y_obs) indicates the center of the obstacle and r represents its radius.
However, it is usually desired to formulate linear constraints for the obstacle in order to reduce the computational complexity.
For this purpose, we propose to replace the nonlinear constraint in <ref> with a linear inequality constraint, see <ref> below,
which varies over the MPC prediction horizon according to the tangent to the circular obstacle boundary. Every linear inequality constraint represents a half-space, which defines a safe region to avoid collision with the obstacle.
An illustration
is depicted in Fig. <ref>. If a reference point falls inside the obstacle within the MPC horizon, a tangent defining a linear inequality constraint, such as h_1 or h_2, is calculated at the intersection point (the red dots in Fig. <ref>). The corresponding half-space includes the safe region to avoid the obstacle.
The (X_k,Y_k) coordinate of the vehicle should then be on that side, which can be defined by the linear inequality
h_1 : a_3,k X_k + b_3,k Y_k ≥ c_3,k,
where a_3,k, b_3,k and c_3,k are the parameters of the tangent half-space to the obstacle.
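The following sketch illustrates one possible way to construct such a tangent half-space in Python. As an assumption of this sketch, the tangent point is obtained by projecting the infeasible reference point radially onto the circle boundary; in the scheme described above the tangent is taken at the intersection of the reference trajectory with the obstacle boundary, and the choice of tangent point also determines on which side the obstacle is passed.

import numpy as np

def tangent_halfspace(ref_pt, obs_center, r):
    # Returns (a, b, c) such that a*X + b*Y >= c defines the safe half-space.
    d = np.asarray(ref_pt, dtype=float) - np.asarray(obs_center, dtype=float)
    n = d / np.linalg.norm(d)                        # outward normal at the tangent point
    p = np.asarray(obs_center, dtype=float) + r * n  # tangent point on the circle boundary
    a, b = n
    c = a * p[0] + b * p[1]
    return a, b, c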
§ CONTROLLER DESIGN
§.§ The Kinematic Trajectories From a Fixed Map
To consider a realistic setup for the problem, we assume that only the (X_i^ ref, Y_i^ ref) values of the reference trajectory are available. However, we should compute the corresponding reference values for the remaining four states, υ_k^ ref, ν_k^ ref, ψ_k^ ref and ω_k^ ref, to track the reference trajectory effectively.
For the computation of ψ_k^ ref, the global (X_i^ ref, Y_i^ ref) can be directly used as follows
ψ_k^ ref = arctan(Y_k-1^ ref - Y_k^ ref/X_k-1^ ref - X_k^ ref).
Next, ω_k^ ref can be calculated as ω_k^ ref = (ψ_k^ ref - ψ_k-1^ ref)/t_s, where ψ_k-1^ ref is the reference yaw angle which was computed in the previous step by using <ref>. To calculate υ_k^ ref and ν_k^ ref, which represent the longitudinal and lateral speeds in the body frame, the reference points in the body frame are
determined as follows
[ x_k^ ref; y_k^ ref ]=[ cos(ψ_k^ ref ) sin(ψ_k^ ref); - sin(ψ_k^ ref ) cos(ψ_k^ ref ) ][ X_k^ ref - X_k-1^ ref; Y_k^ ref - Y_k-1^ ref ].
Then,
the reference speeds
can be readily computed as υ_k^ ref = (x_k^ ref - x_k-1^ ref)/t_s and ν_k^ ref = (y_k^ ref-y_k-1^ ref)/t_s, where x_k^ ref and y_k^ ref are computed in <ref>.
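A compact Python rendering of this reference-generation step is sketched below. It assumes uniformly sampled waypoints and, as a simplifying interpretation, takes the body-frame displacement over one sample divided by t_s as the speed reference; the function name and the handling of the first sample are illustrative choices.

import numpy as np

def reference_kinematics(X_ref, Y_ref, ts):
    X_ref = np.asarray(X_ref, dtype=float)
    Y_ref = np.asarray(Y_ref, dtype=float)
    dX, dY = np.diff(X_ref), np.diff(Y_ref)

    psi = np.arctan2(dY, dX)                      # reference yaw angle
    omega = np.diff(psi, prepend=psi[0]) / ts     # reference yaw rate

    # Rotate the global displacements into the body frame
    x_b = np.cos(psi) * dX + np.sin(psi) * dY
    y_b = -np.sin(psi) * dX + np.cos(psi) * dY

    v_lon = x_b / ts                              # longitudinal speed reference
    v_lat = y_b / ts                              # lateral speed reference
    return psi, omega, v_lon, v_lat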
§.§ Nonlinear MPC
The constrained nonlinear optimal control for reference tracking w.r.t the decision variable
U = {
u_0|k, u_1|k, … , u_N-1|k}
is formulated as follows.
[Nonlinear optimization problem]
Umin z_N|k - z^ref_N|k^2_P + ∑_i=0^N-1 z_i|k - z^ref_i|k^2_Q + u_i|k^2_R
s.t. z_i+1|k= z_i|k+ t_s f(z_i|k,u_i|k), ∀ i = 0,…,N-1,
z_0|k = z_k,
z_i|k∈𝒵_i|k, ∀ i = 0,1,…,N,
u_i|k∈𝒰, ∀ i = 0,1,…,N-1,
where z^ref_i|k = [ X_i|k^ ref Y_i|k^ ref υ_i|k^ ref ν_i|k^ ref ψ_i|k^ ref ω_i|k^ ref ]^⊤ is the reference value for the states at each step, which is computed by <ref>.
The tuning matrices are Q ≽ 0 ∈ℝ^6 × 6, R ≻ 0 ∈ℝ^2 × 2 and P ≽ 0 ∈ℝ^6 × 6. The MPC prediction horizon is denoted with N.
In the above optimization problem, f is the nonlinear dynamic bicycle model in <ref>, and z_0|k is the system's initial condition. Here state constraint, 𝒵_i|k, includes the bounds on each state, the road boundary constraint (<ref>), and the obstacle avoidance constraint (<ref>). The input constraint 𝒰 is defined in <ref>.
§.§ Linear Parameter Varying MPC
By introducing the scheduling signals υ(t), ν(t), δ(t), and ψ(t), that form the scheduling variable vector p(t)=(υ(t), ν(t), δ(t), ψ(t)); the continuous-time nonlinear dynamics in <ref> can be written equivalently
in the LPV representation as
{ż(t) =A_c(p(t))z(t)+B_c(p(t))u(t),
p(t) =(υ(t),ν(t),δ(t),ψ(t)), t≥ 0.
.
The state vector z(t) of dimension 6 can be defined as z(t):=[[ X(t) Y(t) υ(t) ν(t) ψ(t) ω(t) ]]^⊤ with initial conditions z_0 and the continuous-time system matrices A_c∈ℝ^6× 6, B_c∈ℝ^6× 2 as
A_c(p(t)) :=[[ 0 0 cos(ψ(t)) -sin(ψ(t)) 0 0; 0 0 sin(ψ(t)) +cos(ψ(t)) 0 0; 0 0 0 0 0 ν(t); 0 0 0 a_44(t) 0 a_46(t); 0 0 0 0 0 1; 0 0 0 a_64(t) 0 a_66(t) ]],
β_f:= 2C_α f/m, β_r:=2C_α r/m, γ_f:=2ℓ_f C_α f/I_z, γ_r:=2ℓ_r C_α r/I_z,
a_44(t) :=-β_fcos(δ(t))1/υ(t)-β_r1/υ(t),
a_46(t) :=-υ(t)-β_fcos(δ(t))1/υ(t)ℓ_f+β_r1/υ(t)ℓ_r,
a_64(t) :=1/υ(t)(γ_r-γ_f), a_66(t):=-1/υ(t)(γ_fℓ_f+γ_rℓ_r),
and
B_c(p(t)):= [ 0 0 0 β_fcos(δ(t)) 0 γ_f; 0 0 1 0 0 0 ] ^⊤.
The discretization with the Euler method and a sampling time t_s results in the discrete-time LPV representation of <ref> as
{ z_k+1 =A(p_k)z_k+B(p_k)u_k,
p_k =(υ_k,ν_k,δ_k,ψ_k), k∈ℤ_+∪{ 0}.
where A(p_k)=I+t_s A_c(p_k), B(p_k)=t_s B_c(p_k) are the discrete-time LPV system matrices, and I∈^6× 6 is the identity matrix.
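As an illustration, a minimal Python construction of A(p_k) and B(p_k) from the continuous-time LPV matrices above is sketched below; the function name and the params dictionary are placeholders, and the numerical values of m, I_z, ℓ_f, ℓ_r, C_αf, C_αr are assumed to come from Table <ref>.

import numpy as np

def lpv_matrices(p, params, ts):
    # p = (v_lon, v_lat, delta, psi); returns the discrete-time pair A(p), B(p)
    v, nu, delta, psi = p
    m, Iz, lf, lr, Caf, Car = (params[k] for k in ("m", "Iz", "lf", "lr", "Caf", "Car"))
    bf, br = 2 * Caf / m, 2 * Car / m
    gf, gr = 2 * lf * Caf / Iz, 2 * lr * Car / Iz

    Ac = np.zeros((6, 6))
    Ac[0, 2], Ac[0, 3] = np.cos(psi), -np.sin(psi)
    Ac[1, 2], Ac[1, 3] = np.sin(psi), np.cos(psi)
    Ac[2, 5] = nu
    Ac[3, 3] = -(bf * np.cos(delta) + br) / v                  # a_44
    Ac[3, 5] = -v - (bf * np.cos(delta) * lf - br * lr) / v    # a_46
    Ac[4, 5] = 1.0
    Ac[5, 3] = (gr - gf) / v                                   # a_64
    Ac[5, 5] = -(gf * lf + gr * lr) / v                        # a_66

    Bc = np.zeros((6, 2))
    Bc[3, 0] = bf * np.cos(delta)   # steering enters the lateral dynamics
    Bc[5, 0] = gf                   # steering enters the yaw dynamics
    Bc[2, 1] = 1.0                  # acceleration enters the longitudinal dynamics

    return np.eye(6) + ts * Ac, ts * Bc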
The LPVMPC problem for reference tracking is formulated as the following QP, parameterized by (p_i|k, z_k, z_i|k^ref):
Umin z_N|k - z^ref_N|k^2_P + ∑_i=0^N-1 z_i|k - z^ref_i|k^2_Q + u_i|k^2_R
s.t. z_i+1|k=A(p_i|k)z_i|k+B(p_i|k)u_i|k, i=0,…,N-1
z_0|k = z_k,
z_i|k∈𝒵̅_i|k, ∀ i = 0,1,…,N,
u_i|k∈𝒰, ∀ i = 0,1,…,N-1,
where the reference trajectory z^ref_i|k, the tuning matrices P, Q and R, as well as the decision variable U are as defined for the Problem <ref>. The initial condition is z_0|k, and N denotes the MPC prediction horizon. The LPV model in <ref> is defined in <ref>. The input constraint 𝒰 is given in <ref>. The state constraint 𝒵̅_i|k in <ref> includes the bounds on the states, the road boundary constraint in <ref> and the linear obstacle avoidance constraint in <ref>. Therefore, the state constraint can be represented in the polytopic form 𝒵̅_k = { z_k ∈ℝ^6 | G^z_k z_k ≤ h^z_k }. The steps implementing the above LPVMPC are given in <ref>. A similar algorithm has been proposed for the quasi LPV case in <cit.>.
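For orientation, the sketch below shows one common way to organize such a receding-horizon LPVMPC loop in Python, where the scheduling sequence p_i|k is warm-started from the previously predicted trajectory; solve_qp, plant_step, reference, N, sim_steps, and z0 are hypothetical placeholders, and the details may differ from Algorithm <ref>.

import numpy as np

# z0, N, sim_steps, reference, solve_qp and plant_step are hypothetical placeholders.
z = z0
Z_pred = np.tile(z0, (N + 1, 1))               # initial guess of the predicted states
U_pred = np.zeros((N, 2))                      # initial guess of the predicted inputs

for k in range(sim_steps):
    # Build the scheduling sequence p_i|k from the previous prediction (quasi-LPV idea)
    P = [(Z_pred[i][2], Z_pred[i][3], U_pred[i][0], Z_pred[i][4]) for i in range(N)]
    U_opt, Z_pred = solve_qp(P, z, reference[k:k + N + 1])   # the QP problem above
    u = U_opt[0]                               # apply only the first optimal input
    z = plant_step(z, u)                       # propagate the true nonlinear plant
    U_pred = np.vstack([U_opt[1:], U_opt[-1]]) # shift the input guess for the next step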
§ RESULTS AND DISCUSSIONS
This section implements and compares the performance of the two MPCs, the NMPC and the LPVMPC. The simulations are performed on a Dell Latitude 5590 laptop with an Intel(R) Core(TM) i7-8650U CPU and 16 GB of RAM. The scenarios are implemented in Matlab <cit.>, utilizing the YALMIP toolbox <cit.>, with an optimality tolerance of 10^-4. To solve the nonlinear optimization problem, we employ the
IPOPT solver <cit.>. The Matlab quadprog solver is used for solving the quadratic optimization problem.
The simulation scenario is to drive the vehicle in the middle on the right-hand side of a road to follow a reference trajectory using one of the proposed controllers in the previous sections. Then, an obstacle appears in the road, and the vehicle is controlled to overtake this obstacle safely and to return back to the reference trajectory in the middle on the right-hand side of the road. <ref> presents the parameters utilized in the MPCs, along with the upper and lower limits of the states and input variables.
The reference trajectory is picked as a sine wave to mimic the road, and the (X_i^ ref, Y_i^ ref) on the reference trajectory are intentionally selected to be non-equidistant in space. As a result, the vehicle's speed shall be adjusted based on the distance between successive (X_i^ ref, Y_i^ ref) points. The initial condition of the vehicle is z_0 = [ 0 0 10 0 0 0 ]^⊤. Furthermore, the road on the left-hand side of the vehicle's reference trajectory is assumed to be always 4 m wide, while on the right-hand side, it is always only 1 m wide, i.e., R_1,k = 1 m and R_2,k = 4 m, ∀ k = 0, 1, … in <ref>. Also, for both MPCs, N = 8, R = diag([ 0.1 0.1 ]), Q = diag([ 10 10 1 1 1 1 ]) and P=Q. To keep the vehicle movements smoother, we constrain the rate of change of δ_k and a_k, i.e., |δ_k - δ_k-1 | ≤ 40π / 180 rad, | a_k - a_k-1 | ≤ 1.5 m / s^2.
To evaluate the effectiveness of our approach, we compare the performance of the NMPC and the LPVMPC in two problem setups: a reference tracking (RT) problem and an obstacle avoidance problem.
§.§ Reference Tracking (RT)
In <ref>, a comparison of the application of the NMPC and the LPVMPC for reference tracking is demonstrated. In this figure, the blue line represents the position of the vehicle when controlled by the NMPC, and the red line is the position of the vehicle controlled by the LPVMPC.
As illustrated in <ref>, the performance of the LPVMPC and the NMPC regarding tracking error is almost identical. By comparing the inputs generated by the controllers in <ref>, it is clear that the steering angles produced by the NMPC and LPVMPC are nearly identical. Similarly, the accelerations are quite similar, although the NMPC acceleration appears to be smoother. <ref> displays the computation time for solving an optimization problem to generate the inputs for the NMPC and the LPVMPC at each time instant k. The results confirm the reduction in the computation time by using the LPVMPC.
§.§ Obstacle Avoidance
In this subsection, the results of the comparison between the NMPC and the LPVMPC in an obstacle avoidance scenario are presented. The obstacle is assumed to be in a circular shape with a radius of 1 m at X_obs = 29.4819 m, Y_obs = 17.4753 m. This means that the obstacle has blocked one side of the road in the studied scenario.
The result of applying each of these MPCs to the nonlinear vehicle dynamics <ref> is illustrated in <ref>. In this figure, the blue line represents the vehicle's trajectory when controlled by the NMPC, while the red line indicates its trajectory when controlled by the LPVMPC. As <ref> indicates, both the NMPC and LPVMPC are capable of controlling the vehicle to follow the desired reference trajectory and initiate the overtaking maneuver at an appropriate moment. However, the NMPC is performing a smoother maneuver.
In <ref>, the steering angles and the accelerations generated by each controller are presented.
Based on the information in these figures, the NMPC can generate smoother control inputs. The difference in the inputs justifies the smoother movement of the vehicle in <ref>.
The computation time of both the NMPC and the LPVMPC are presented in <ref>. As the results confirm, using the LPVMPC can reduce the computation time significantly.
§ CONCLUSIONS
This paper proposed a novel LPV embedding for modeling the nonlinear dynamics of a vehicle with linear tire forces. The proposed approach aims to simplify the implementation and offers a good alternative to NMPC, which is commonly considered. We introduced a linear formulation for obstacle avoidance constraints, enabling the proposed LPVMPC scheme to integrate both path planning and control into a single optimization problem. The LPVMPC is comparable to the NMPC in terms of performance with a more efficient computational burden.
Finally, better tuning, model generalizations within the LPV embedding, dynamic and more challenging obstacle-avoidance scenarios, as well as theoretical analyses such as stability and recursive feasibility guarantees are left for future research, since such analyses can be carried out efficiently with the LPV formulation within the well-established linear systems framework.
|
http://arxiv.org/abs/2307.05381v1 | 20230710143232 | Reliable Devices Yield Stable Quantum Computations | [
"Samudra Dasgupta",
"Travis S. Humble"
] | quant-ph | [
"quant-ph"
] |
Reliable Devices Yield Stable Quantum Computations
The manuscript is authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The U.S. Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan: https://www.energy.gov/doe-public-access-plan.
Samudra Dasgupta^1, 2^*, and Travis S. Humble^1,2^†
^1Quantum Science Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA
^2Bredesen Center, University of Tennessee, Knoxville, Tennessee, USA
^*[email protected], ORCID: 0000-0002-7831-745X
^†[email protected], ORCID: 0000-0002-9449-0498
February 2023
Stable quantum computation requires noisy results to remain bounded even in the presence of noise fluctuations. Yet
non-stationary noise processes lead to drift in the varying characteristics of a quantum device that can greatly influence the circuit outcomes.
Here we address how temporal and spatial variations in noise relate device reliability to quantum computing stability. First, our approach quantifies the differences in statistical distributions of characterization metrics collected at different times and locations using Hellinger distance. We then validate an analytical bound that relates this distance directly to the stability of a computed expectation value. Our demonstration uses numerical simulations with models informed by the transmon device from IBM called washington. We find that the stability metric is consistently bounded from above by the corresponding Hellinger distance, which can be cast as a specified tolerance level. These results underscore the significance of reliable quantum computing devices and the impact for stable quantum computation.
device reliability,
program stability,
spatio-temporal non-stationarity,
time-varying quantum noise
§ INTRODUCTION
Quantum devices are subject to non-stationary noise sources, e.g. non-uniform spontaneous decay, energy loss, cross-talk, sensitivity to imprecise control pulses, and fluctuations in thermodynamic controls, all of which affect the quality of the quantum register implementation. The field of quantum noise characterization focuses on measuring and tracking noise metrics (such as CNOT gate error) at various points in time. These characterizations inform calibration techniques for hardware engineers as well as error mitigation methods for programmers.
However, quantum devices also exhibit temporal variations in their noise sources, which underlies the need for frequent calibration and adjustment of device metrics. Non-stationary noise processes can also stymie attempts at characterization as the underlying noise models must adapt to new and often unpredictable behaviors. How can we monitor changes in the noise itself to better inform these efforts?
Here, we address the concern that non-stationary noise processes pose to reliable quantum computation. Device reliability is presented as a measure of the statistical similarity of the underlying device metrics, such as gate fidelities and coherence times. This measure captures the similarity between device metrics considering both spatial and time-varying noise processes. We then recall how device reliability bounds the stability of expectation values derived from noisy quantum computation. Moreover, we validate this bound on stability using numerical simulations of a circuit modeled by a multi-dimensional correlated noise distribution.
§ THEORY
§.§ Stability
Stability in quantum computing refers to the boundedness of the output of a quantum circuit in the presence of noise fluctuations <cit.>. In this study, we focus on the mean value of a quantum observable O as a representative of program output. Let x represent the parameter characterizing the noisy realization of a quantum circuit 𝒞. The mean value of O obtained from repeated executions on the noisy circuit is denoted as ⟨O⟩_x. Considering the time-varying nature of device noise, we introduce f(x; t) as the probability distribution function of the quantum noise parameter x. We define ⟨O⟩_t as the average value of ⟨O⟩_x with respect to f(x; t), the probability distribution function for the noise parameter.
⟨O⟩_t = ∫⟨O⟩_x f(x; t) dx
The stability of the quantum observable between two time points, t_1 and t_2, is then quantified by the absolute difference in the mean values of ⟨O⟩ obtained at those times, defined as
s(t_1, t_2) = | ⟨O⟩_t_1 - ⟨O⟩_t_2 |
§.§ Reliability
We next quantify device reliability by comparing the statistical distributions of various characterization metrics collected at different times and register locations. When these metrics exhibit statistical similarity, the device behavior is considered to be reliable. The statistical distance between distributions is calculated using the Hellinger distance, which offers ease of calculation and interpretation.
H( f(x ; t_1), f(x ; t_2) ) = √(1-∫√(f(x ; t_1) f(x ; t_2)) dx)
The Hellinger distance above quantifies the statistical similarity of a device at different times such that when the distance is small, the device behaves statistically similar at both times. This is expected when the underlying noise process is stochastic. However, larger values of the distance imply that noise processes within the device are non-stationary processes that lead to noticeable changes in device properties. The timescales on which such statistically significant changes are measured represent an important metric for evaluating the reliability of the device relative to a desired tolerance.
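As a small illustration, the Hellinger distance between two characterization-metric distributions discretized on a common grid can be computed as follows; the function name and the explicit normalization are illustrative choices, not part of the original text.

import numpy as np

def hellinger(p, q):
    # p and q are nonnegative histograms/densities evaluated on a common grid
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()          # normalize to probability mass
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q))))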
§.§ Stability Bounds
We now establish an analytical and intuitive connection between output stability and device reliability. Specifically, we show how device reliability constrains the outcomes from a quantum program executed on a NISQ device by examining the role of fluctuations in device metrics.
Let s_tol denote a specified tolerance on the stability metric introduced earlier. Additionally, let the reliability of the quantum device between times t_1 and t_2 be quantified by the Hellinger distance H_X, as discussed previously. We determine the maximum bound H_X^max(t_1, t_2) on the Hellinger distance constrained by s(t_1, t_2) < s_tol.
We begin by noting the bound on the stability satisfies
s^2(t_1, t_2) ≤( ∫ | ⟨O⟩_x { f(x; t_1) - f(x; t_2) } | dx )^2
where the inequality stems from the absolute value on the integrand. Per Hölder's inequality, if m, n ∈ [1, ∞] and 1/m + 1/n = 1, then
∫ | f(x) g(x) | dx ≤( ∫ |f(x)|^m dx )^1/m( ∫ |g(x)|^n dx )^1/n
Thus, the right-hand side of Eq. (<ref>) becomes
( ∫ | ⟨O⟩_x { f(x; t_1) - f(x; t_2) } | dx )^2
≤( ∫ | ⟨O⟩_x |^m dx )^2/m( ∫ | f(x; t_1) - f(x; t_2) |^n dx )^2/n
Choose m = ∞, n = 1 and define c = sup_x |⟨O⟩_x|.
This circuit-specific constant satisfies
lim_m →∞( ∫ |⟨O⟩_x|^m dx )^1/m≤lim_m →∞( ∫ c^m dx )^1/m = c
Thus, we may then reduce Eq. (<ref>) as
s(t_1, t_2)^2 ≤lim_m→∞, n = 1( ∫ | ⟨O⟩_x |^m dx )^2/m( ∫ | f(x; t_1) - f(x; t_2) |^n dx )^2/n
≤ c^2 ( ∫| √(f(x; t_1)) - √(f(x; t_2))| ( √(f(x; t_1)) + √(f(x; t_2)) ) dx )^2
≤ c^2 ∫( √(f(x; t_1)) - √(f(x; t_2)) )^2 dx ∫( √(f(x; t_1)) + √(f(x; t_2)) )^2 dx
Using Hölder's inequality with m = n = 2, this yields s(t_1, t_2) ≤ 2cH√(2-H^2), which can be re-arranged to yield the maximum
H_max(t_1, t_2) = √(1-√(1-ϕ))
with ϕ = s_tol^2 / (4c^2). This sets an upper limit H_max on the Hellinger distance to ensure the desired stability criterion s_tol is met.
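The bound is straightforward to evaluate numerically. The following short sketch (with illustrative names) computes H_max for a given tolerance; for the Bernstein-Vazirani success-probability observable used later, c = 1, and s_tol = 0.20 reproduces the 7.1% limit quoted in the validation section.

import numpy as np

def hellinger_bound(s_tol, c):
    # phi = s_tol^2 / (4 c^2); H_max = sqrt(1 - sqrt(1 - phi))
    phi = s_tol**2 / (4.0 * c**2)
    return np.sqrt(1.0 - np.sqrt(1.0 - phi))

print(hellinger_bound(0.20, 1.0))  # ~0.071, i.e. the 7.1% limit used below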
§ VALIDATION
§.§ Experimental data
We utilized data obtained from the transmon device called washington, a 127-qubit register with heavy hexagonal connectivity developed by IBM. The publicly available characterization data for the washington device was used to create a dataset comprising specific device metrics (refer to Table 1). This dataset was constructed from a subset of the device characterization data spanning a 16-month period from January 1, 2022, to April 30, 2023. The Qiskit software library <cit.> was employed to access the collected characterization data online.
These metrics correspond to the minimum requirements for the physical implementation of quantum computing <cit.>, which fall into one of the five classes: SPAM (state preparation and measurement) fidelity, single-qubit gate fidelity, two-qubit entangling gate fidelity, duty cycle (gate length to coherence time ratio), and addressability (ability to measure a register element without interference from other qubits). Specifically, these 16 metrics capture the noise processes of the five-qubits employed in the test circuit illustrated in Fig. <ref>. Our simulation of the test circuit (described in the next section) relies on data pertaining to these 16 metrics, which enables us to estimate the time-varying joint distribution of circuit noise. Utilizing this estimated distribution, Monte Carlo sampling is performed to simulate the test circuit and validate the theory presented earlier.
§.§ Test circuit
We validate the stability bound using a numerical simulation of the Bernstein-Vazirani circuit <cit.> with a noise model using the characterization data presented in the previous section. The Bernstein-Vazirani algorithm determines a secret n-bit string r encoded in an oracle. Our focus is on assessing the success probability of correctly computing the secret bit string using the fewest number of queries possible. In contrast to the classical approach that requires n queries, the Bernstein-Vazirani algorithm achieves the same outcome with just one query. Fig. <ref> illustrates the quantum circuit corresponding to a 4-bit secret key. The observable for the problem is O = Π_r = |r⟩⟨r| where |r⟩ = ⊗_i=1^n |r_i⟩ with r_i ∈{0,1}.
§.§ Method
We used numerical simulations to test the reliability of a model noisy quantum device and to investigate the boundedness of the stability metric as predicted by the theory above. We first mapped the 16 noise parameters necessary for simulating the 5-qubit Bernstein-Vazirani circuit shown in Fig. <ref> to distinct independent noise processes, selecting them based on the criteria outlined in <cit.> for the physical implementation of quantum computing.
The parameters are mapped to gate- and register-specific noise models in Table <ref>. For example, the asymmetric binary channel for register 0 flips the measured output bit b_0 to b_0⊕1 with probability x_0, while the coherent phase error channel for the Hadamard gate H applied to register 0 transforms the underlying quantum state as CP(H ρ H) = R_z(θ) H ρ H R_z^†(θ).
Thermal relaxation is modeled by an exponential dephasing process that depends on the T_2 time and the duration of the underlying gate, not shown here.
While the 16 noise processes above act independently, the underlying noise parameters are assumed to be correlated. We construct a joint distribution to describe these parameters using the method of Gaussian copula, cf. Fig. <ref>. The copula itself is defined as
Θ(y) = exp( -1/2(y-μ)^T Σ^-1 (y-μ))/(2π)^n/2|Σ|^1/2
where the correlation elementsΣ_i,jare derived from Pearson correlation coefficients computed using the daily metric values available from the washington data set. For a Gaussian copula, the corresponding 16-dimensional noise distribution takes the form
f_X(x; t) = Θ[ F_X_1(x_1; t), ⋯, F_X_n(x_n; t) ] ∏_j=1^n f_X_j(x_j; t)
where F_X_j(x_j; t) is the cumulative distribution function at time t and Θ(·) is the copula function. These generated distributions are then used to calculate the Hellinger distance in Eq. <ref>.
Our numerical studies of the quantum circuit stability generates an ensemble of noisy simulations by drawing samples from the multi-parameter noise distribution represented by Eq. <ref>.
We initially establish a joint distribution from the daily data gathered in January 2022 for the washington device, utilizing copulas. Over the next 15 months, we introduce minor perturbations to this distribution, ensuring that the Hellinger distance between the perturbed and original January 2022 distributions never exceeds H_max. In this perturbation scheme, the marginal distribution of the CNOT gate error between qubits 1 and 2 is modeled using a beta distribution, which is based on the aforementioned January 2022 daily data. Small, random perturbations to the beta distribution parameters are incorporated over 15 months for the CNOT error, with the Hellinger distance constraint maintained. For each perturbed distribution, we generate 100,000 noise metric samples, and execute 100 Qiskit simulations (each with 8192 shots). The stability metric is then computed from the obtained output, as per Eqn. <ref>.
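A minimal sketch of such copula-based sampling is given below; the function name, the use of SciPy frozen distributions for the marginals, and the example Beta marginals are illustrative assumptions rather than the exact implementation used here.

import numpy as np
from scipy import stats

def sample_copula(marginals, corr, n_samples, seed=None):
    # marginals: list of frozen scipy distributions (e.g., fitted Beta laws per metric)
    # corr: Pearson correlation matrix estimated from the daily characterization data
    rng = np.random.default_rng(seed)
    d = len(marginals)
    z = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)  # correlated normals
    u = stats.norm.cdf(z)                                           # map to uniforms
    return np.column_stack([marginals[j].ppf(u[:, j]) for j in range(d)])

# e.g., marginals = [stats.beta(a_j, b_j, loc=l_j, scale=s_j) for each of the 16 metrics]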
§.§ Results
Figure <ref> presents the simulation results illustrating the relationship between the stability metric (s) and the reliability of a quantum device characterized by the Hellinger distance (H). The results demonstrate that when H ≤ H_max the device is reliable such that the temporal difference of the observable (s) remains within the specified upper bound (s ≤ s_tol).
In our simulations, we set the tolerance threshold s_tol = 20%, which limits the maximum acceptable deviation in the expectation value over time. According to Eqn. <ref>, this results in an upper limit of 7.1% for the device reliability metric H_max. The lower panel of Fig. <ref> presents the calculated Hellinger distance between the multi-dimensional noise processes characterizing the device. These calculations show how noise fluctuates on a monthly basis while still respecting the H_max constraint. While time varying, these processes emulate the behavior of a reliable device. The upper panel of Fig. <ref> presents the corresponding stability metric, which never exceeds the 20% tolerance. Moreover, we find the stability is nearly two orders of magnitude smaller than the expected tolerance, with an average of approximately 0.6%. By selecting a reliable device whose temporal noise variation remains within the defined bounds, we can ensure the stability of program output within the desired tolerance.
§ CONCLUSIONS
Output stability is crucial in quantum computing research as non-stationary noise processes in quantum devices can result in unstable results that fluctuate based on time-varying device noise characteristics, rendering them unsuitable for meaningful interpretation and drawing scientific conclusions. The variations in superconducting qubits, attributed to certain oxides on the superconductors' surface modeled as fluctuating two-level systems (TLS) <cit.>, have been extensively studied. Ongoing research focuses on addressing the time-varying nature of quantum noise through modeling <cit.>, characterizing noise sources, tracking their temporal profile <cit.>,
and integrating statistical knowledge into quantum architectures using innovative techniques <cit.>. This paper explores the relationship between device reliability and output stability by considering a user-defined upper bound on variations in expectation values. The goal is to assess the stability of program outputs by evaluating the reliability metric within a specified tolerance bound through simulations.
§ ACKNOWLEDGMENTS
This work is supported by the U. S. Department of Energy (DOE), Office of Science, Early Career Research Program. This research used computing resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. |
http://arxiv.org/abs/2307.04910v1 | 20230710212826 | Self-Diagnosis and Large Language Models: A New Front for Medical Misinformation | [
"Francois Barnard",
"Marlize Van Sittert",
"Sirisha Rambhatla"
] | cs.CY | [
"cs.CY"
] |
Improving healthcare quality and access remains a critical concern for countries worldwide. Consequently, the rise of large language models (LLMs) has sparked a wealth of discussion around healthcare applications among researchers and consumers alike. While the ability of these models to pass medical exams has been used to argue in favour of their use in medical training and diagnosis, the impact of their inevitable use as a self-diagnostic tool and their role in spreading healthcare misinformation has not been evaluated. In this work, we critically evaluate LLMs' capabilities from the lens of a general user self-diagnosing, as well as the means through which LLMs may aid in the spread of medical misinformation. To accomplish this, we develop a testing methodology which can be used to evaluate responses to open-ended questions mimicking real-world use cases. In doing so, we reveal that a) these models perform worse than previously known, and b) they exhibit peculiar behaviours, including overconfidence when stating incorrect recommendations, which increases the risk of spreading medical misinformation.
§ INTRODUCTION
Large language models (LLMs) have grown in popularity with an ever-expanding list of applications. Due to recent publicity, many have flocked toward ChatGPT as a result of its perceived human-like capabilities and accessibility <cit.>. Unfortunately, a symptom of this exaggeration of ChatGPT's abilities is a misplaced level of user trust <cit.>. As self-diagnosis via web searches is already a common practice <cit.>, applying intelligent systems to this domain is inevitable. Likewise, as we see global health worker shortages <cit.>, government entities and healthcare bodies may be tempted to employ LLMs as healthcare assistants or expertise replacements <cit.>.
Recent methods which evaluate performance on medical exams ask an LLM (in this case ChatGPT) to pick answers from a list to conclude that ChatGPT passes the official United States Medical Licensing Exam, or variations of it, <cit.> with scores near the passing threshold of 60% <cit.>. This creates an inaccurate portrayal of LLMs' capabilities. Further, <cit.>, in Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models, use ChatGPT's performance on medical licensing exams to argue for its use in medical student training. In this work, we set out to evaluate the performance of ChatGPT in a more realistic scenario and evaluate its robustness. Additionally, we investigate the means through which medical misinformation could propagate when ChatGPT is used for self-diagnosis. We do this by first assessing ChatGPT's response to open-ended medical questions based on a modified subset of single-answer questions taken from the United States Medical Licensing Examination (USMLE) <cit.>. As a means of evaluating the robustness of ChatGPT, the aggregated data is then used to perform a sensitivity analysis.
Further, to simulate the experience of a general user, we ask two non-expert assessors who were, through detailed guidelines and procedures, instructed to categorize ChatGPT's answers as “Correct”, “Partially Correct”, “Incorrect”, or “Ambiguous”. Through this analysis, we find that a) the performance drops when the options are not provided to the LLM, b) LLMs fail to indicate uncertainty or provide disclaimers in their answers when answering medical questions, and c) an LLM lacks awareness and understanding to validate information even when presented with its own responses.
§.§ Generalizable Insights about Machine Learning in the Context of Healthcare
This paper investigates how LLMs, specifically ChatGPT, inject medical misinformation when used for self-diagnosis. The generalizable insights from our work are as follows:
* Our key motivation is that asking LLMs (ChatGPT in this case) to select an option from a list does not reflect its ability to assist in any meaningful real-world task in healthcare. To mimic LLM use in the real world, we consider the case of self-diagnosis by a general user and perform response quality evaluation a) without providing specific options and b) information dropout for sensitivity analysis.
* We ask human assessors to rate response quality on a more granular scale to analyze response quality and explore its impact on contributing to a new form of medical misinformation. Particularly, we observe that LLMs, specifically ChatGPT, neglect any disclaimers or indication of uncertainty within its responses, regardless of correctness.
* Moreover, we ask ChatGPT to evaluate its own answers. It demonstrates a lack of confidence in its responses, meaning that LLMs may also not be good tools for double-checking information.
* We develop a repeatable approach which can be used for testing the capabilities of LLMs for healthcare diagnostic purposes on other datasets. Particularly, it allows for evaluations where coder reliability without consensus is desired.
Our intended audience is everyone. For clinicians, we present a novel approach for non-expert assessment for medical applications of LLMs. For machine learning researchers, we provide evidence for a need for further efforts in explainability and interpretability for LLMs – especially in medical applications – and to develop fact-checkers. For other individuals, we provide insight into the risks and shortcomings of using ChatGPT for self-diagnosis. Overall, our evaluation methodology provides a blueprint to evaluate responses to open-ended questions to ground future work in healthcare and beyond.
Please note, although the information may apply to other LLMs, for the remainder of this work, we will focus on and refer only to the LLM, ChatGPT.
§ RELATED WORK
§.§ ChatGPT
ChatGPT is a natural language processing model distinctive for its narrative response style to user input <cit.>. Millions have interacted with the platform since its public release in November 2022 <cit.>, and there has been a surge of academic studies focused on the various applications of ChatGPT <cit.>. Commonly cited are papers focusing on the accessibility and adaptation of ChatGPT to education and learning enterprises. A subsection of these studies has explored how ChatGPT performs on an array of examinations <cit.>. Papers investigating the utility of ChatGPT have also highlighted its potential as a self-studying tool, leveraging its ability to provide tailored responses and immediate feedback based on student needs <cit.>. Further, ChatGPT has shown promise in its ability to assist in research and academic writing by improving efficiency and compensating for weaknesses in researcher knowledge <cit.>. However, these potential use cases have raised significant ethical concerns regarding plagiarism, bias, transparency, and inaccuracy <cit.>, among others.
§.§ Large Language Models in Healthcare
Healthcare has been a significant area of focus for research into the utility of LLMs, and of the LLMs most often studied, ChatGPT has generally demonstrated superior performance <cit.>. Various applications of ChatGPT in healthcare education have been investigated, including its performance on licensing examinations, enhancement of personalized learning, and assistance with clinical reasoning and complex medical concepts for medical students <cit.>. Other areas of study involving ChatGPT in the healthcare setting have tackled inefficiencies and inaccuracies in clinical workflow, medical research, and diagnosis. Recent papers have cited numerous applications of LLMs in clinical practice through streamlining processes of documentation, triage, and clinical data management <cit.>. The potential for LLM diagnostic assistance has also been investigated, incorporating patient questionnaires and medical imaging <cit.>.
Notably, ChatGPT’s performance when offering personalized medical advice has been evaluated with regard to its accuracy, comprehensiveness, patient readability, and the inclusion of humanistic care <cit.>. It has been suggested that ChatGPT can simplify medical jargon and, thus, improve both the patient-doctor relationship and accessibility to complex research <cit.>. Notably, most studies relating to ChatGPT applications in healthcare are conducted in a “laboratory environment,” not in clinical settings, which may affect their applicability to the real-world context <cit.>. Similar ethical dilemmas to those mentioned in Section 2.1 are relevant to LLMs in the healthcare domain.
§.§ Health Misinformation in the Digital Age
Though healthcare is a hyper-specialized field, individuals have long sought health-related information outside the formal healthcare system <cit.>. A plethora of literature exists concerning the risks of health information-seeking behaviour <cit.>. Nevertheless, the internet has become one of the most popular resources by which individuals attempt to learn about health and investigate personal health conditions <cit.>. Certain sources of online health information, including ChatGPT, may be new, but health misinformation remains a long-standing issue, simply adopting novel forms.
Congruent with the rise of LLMs has appeared what <cit.> term an “AI-driven infodemic,” defined as a public health threat materializing from the application of LLMs for the production of misinformative contents, scientific articles, and fake news. Researchers are not only concerned with the possibility of LLMs quickly writing large amounts of human-like texts with malicious intent but also with the more subtle propagation of information lacking scientific ground of support <cit.>. Such occurrences could produce vast amounts of low-quality scientific information and literature in the health field. Moreover, considering user interaction, <cit.> found that knowledge provided by the user in their prompts affects ChatGPT’s answers by occasionally overturning the knowledge encoded in its model. This phenomenon can occur to the detriment of answer correctness, demonstrating how the contents and phrasing of user input hold weight in the accuracy of ChatGPT’s health-related answers <cit.>.
§ METHODS
In this section, we first describe the considerations and set up for the decided dataset. Then, we describe assessor procedures and guidelines and how the output answers from ChatGPT were processed. Finally, we introduce our prompt generation and testing methodology using ChatGPT. Our approach utilizes non-medical experts for assessment as non-medical experts would better represent the general populous and their interpretation of digital healthcare information. The overall process is segmented into two experiments:
* Baseline ChatGPT Answer Analysis
* ChatGPT Robustness and Ablation Study Analysis
§.§ Dataset Considerations and Setup
To best replicate a realistic scenario, a representative dataset was required. We focus on <cit.> data for our experiments since the USMLE test is used for medical licensing examinations in the United States. The data used in this work was extracted from the work by <cit.>.
The USMLE consists of three “Steps.” USMLE Step 1, Step 2, and Step 3 are typically taken by medical students between their second and third year of study, after their fourth year of study, and after their first year of residency, respectively <cit.>. Each of these three tests consists of multiple-choice, single-answer, no-justification questions. We focused only on USMLE Step 1, as a regular user trying to self-diagnose would not express themselves in the manner presented in USMLE Step 2 and Step 3 and would not be aware of the more complex medical conditions unless they are an expert. Additionally, we only considered questions that could be presented in a textual manner, which excluded questions with accompanying images, plots, or any form of visual media. Therefore, the resulting dataset contained 94 single-answer questions that would be used to generate prompts for ChatGPT. Figure <ref> displays a sample of the 94 questions used for prompt generation.
§.§ Assessor Procedures and Guidelines
To evaluate ChatGPT's responses, we relied on non-medical-expert assessments as our goal was to, as closely as possible, simulate self-diagnosis by non-expert individuals. Non-experts are required to simulate self-diagnosis, without discussion or consensus among assessors, as a more realistic reflection of real-world cases and to reveal the potential for misinformation. Although the USMLE Step 1 test contains expert material, non-experts frequently encounter expert material when self-diagnosing. Therein, non-expert assessment was required for both experiments introduced in Section <ref>. Further details about these experiments are discussed in subsection <ref>.
Each assessor was provided with written procedures to guide how they categorized the data. These guidelines are presented in Appendix <ref>. The assessors were instructed to categorize the answers as “Correct” (C), “Partially Correct” (PC), “Incorrect” (I), or “Ambiguous” (A). Specific instructions and examples were provided for each category to guide the assessor's understanding and methodology. See Appendix <ref> for more details and examples.
Assessors were also instructed to utilize web searches to understand the topic matter to the best of their ability, as individuals self-diagnosing would use similar methods for research. However, unlike the typical individual self-diagnosing, the guidelines limited the assessor's web searches to reputable sources such as post-secondary institution pages, research papers, medical textbooks, and government resources. Finally, assessors were instructed to prioritize their intuition and reasoning over the guidelines when appropriate.
§.§ Prompt Generation
§.§.§ Baseline ChatGPT Answer Analysis
The first experiment of our approach established the foundation for all testing. To assess the ability of ChatGPT, we relied on the <cit.> API. To best simulate user interaction, each question was modified to yield an open-ended question format. This was done by removing the alphabetic answer options, adding the statement "In one sentence, answer the following question:" and replacing the "Which of the following" part of each question with "What." An example of this process is displayed in Figure <ref>, showing how the question in Figure <ref> would have been modified to generate the prompt. Both experiments utilized open-ended questions as their foundation. Finally, after passing the open-ended prompts through ChatGPT, a subset of questions was produced by filtering only questions whose ChatGPT answers were categorized as “Correct” by both assessors.
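A simple Python sketch of this prompt transformation is shown below; the function name is a placeholder, and the regular expression assumes that the alphabetic answer options have already been stripped from the question stem.

import re

def to_open_ended(question_stem: str) -> str:
    # Replace the multiple-choice phrasing and prepend the instruction used above
    stem = re.sub(r"Which of the following", "What", question_stem, count=1,
                  flags=re.IGNORECASE)
    return "In one sentence, answer the following question: " + stem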
§.§.§ ChatGPT Robustness and Ablation Study Analysis
The second experiment was built on the foundation established in the first experiment. This experiment used the subset of aggregated questions categorized as “Correct” by both non-expert assessors. Using this subset of data, we generated new prompts by iteratively removing a sentence from the prompt and passing the modified prompt to ChatGPT. This process is repeated across all of a question's sentences, excluding the final question sentence indicated by the "What...?" structure. Given the example question in Figure <ref>, Figure <ref> shows an example of one iteration of the iterative sentence dropout where the first sentence has been removed.
§.§ Testing Methodology
Figure <ref> summarizes our testing approach and methodology.
As shown in Figure <ref>, an open-ended prompt was generated for the N_q=94 questions. Open-ended prompting was selected as individuals searching for medical information typically do not include a list of potential solutions. We denote this dataset as 𝒟 made up of x_j questions, o_j answer options, and correct answer y_j, such that 𝒟={x_j, o_j, y_j }_j=1^N_q.
Each question prompt, x_j, was passed to ChatGPT via the <cit.> API as a unique session to ensure no continuity between questions. All testing was done between February and March 2023 using the “text-davinci-003” model. We selected the “text-davinci-003” model as it was the model that promised the best performance and allowed for parameter adjustments. The temperature for each ChatGPT API call was set to zero to make the answers more focused and deterministic. The ChatGPT model selection and parameters are summarized in Appendix <ref>.
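For illustration, a call of this kind with the legacy OpenAI Python SDK (v0.x, current at the time of testing) might look as follows; the max_tokens value and the function name are assumptions not specified in the text.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask_chatgpt(prompt: str) -> str:
    # Each call is an independent completion request, so there is no continuity between questions
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,     # focused, deterministic answers
        max_tokens=256,    # assumed value; not reported in the text
    )
    return response["choices"][0]["text"].strip()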
After collecting and processing the answer from ChatGPT, a non-expert assessor independently evaluated each ChatGPT answer, ŷ_j. The assessors were instructed to provide a label z ∈𝒵; 𝒵 = {C, PC, I, A} denoting whether the answer, compared to the ground-truth, y_j, was “Correct”, “Partially Correct”, “Incorrect”, or “Ambiguous”, respectively.
The assessment process can be expressed as a function f^k(ŷ_j,y_j)=z_j^k, for the k^th assessor. After the k non-expert assessors finished the categorization process, these categorizations were aggregated based on the assigned labels. Therefore, we defined a subset, ℬ, consisting of questions, x_j, categorized as “Correct”, z_j^k=C, by all k assessors. The process can be expressed as follows:
ℬ={x_j|z_j^k=C ∀ k ∈{1,2 }}
Completing the “Correct” answer aggregation process concluded the first experiment – Baseline ChatGPT Answer Analysis.
Our second experiment, ChatGPT Robustness and Ablation Study Analysis, was built upon the dataset and assessments completed during the first experiment. We conducted an ablation study over the questions within the subset ℬ via iterative sentence dropout. The ablation study aimed to simulate how a self-diagnosing individual may neglect to include certain information within their health-related searches. As a means of simulation, we perform |i| iterations over an open-ended prompt x_j, where |i| is the number of sentences in the prompt. For the i^th iteration, we remove the i^th sentence from the prompt before running the prompt through ChatGPT, as expressed by x_j ∖ i, where x_j∈ℬ. All i sentences are iteratively removed and processed, with a notable exception when the i^th sentence contains the “What...?” sentence structure, indicating that the i^th sentence is the question sentence which is mandatory for ChatGPT answer generation. We present this process as pseudocode in Algorithm <ref>; a plain-Python sketch of the same idea is shown below.
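The following is an illustrative rendering of the dropout procedure; the naive period-based sentence splitting and the function name are simplifications of Algorithm <ref>, not the exact implementation.

def sentence_dropout_prompts(prompt: str):
    # Split into sentences naively on '. '; real sentence segmentation may differ
    sentences = [s.strip() for s in prompt.split(". ") if s.strip()]
    ablated = []
    for i, sentence in enumerate(sentences):
        if sentence.startswith("What"):   # never drop the question sentence
            continue
        kept = sentences[:i] + sentences[i + 1:]
        ablated.append(". ".join(kept))
    return ablated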
After processing the iterative sentence dropout prompts through ChatGPT, non-expert assessor categorization was required. Identically to the first experiment, non-expert assessors were instructed to categorize ChatGPT's answers for the sentence dropout ŷ_j^'. Therefore the k assessors would process the output results as expressed by the function f^k(ŷ_j^',y_j)=z_j^'k, where z_j^'k is the categorization by assessor k for answer ŷ_j^'.
Similar to the first experiment stage, we completed further answer aggregation for answers categorized as “Correct” by the non-expert assessors. Therefore, we define a new subset, 𝒲, consisting of iterative sentence dropout questions x_j ∖ i, categorized as “Correct”, z_j^'k=C by all k assessors. Therefore, this aggregation process can be expressed as follows:
𝒲={x_j ∖ i|z_j^'k=C ∀ k ∈{1,2 }}
Completion of final question aggregation into subset 𝒲 marks the conclusion of the second experiment.
§ RESULTS
§.§ Experiment 1: ChatGPT Responses on USMLE Step 1 Open-Ended Prompts
Our first experiment explored the 94 open-ended prompts, x_j. Each prompt was individually processed with the ChatGPT API <cit.> on the “text-davinci-003” model. After independent assessor categorization, we obtained the results displayed in Table <ref>.
We chose to have ChatGPT evaluate and categorize its answers as a means of comparison. For self-evaluation purposes, we utilized the “gpt-3.5-turbo-0301” model as this model was the best and “most capable” model at the time of testing <cit.>. Additionally, as we did not require parameter fine-tuning and the “gpt-3.5-turbo-0301” model was trained on more data than the “text-davinci-003” model used for question answering, we wanted to give ChatGPT the best chance for success. Figure <ref> shows the categorization results of the assessors and ChatGPT's self-assessment.
As we are only concerned with answers categorized as “Correct,” we chose to simplify the results shown in Table <ref>. This simplification was accomplished by aggregating answers categorized as “Correct” by both assessors into one category and then summarizing all other combinations of categorization into “Other.” We determined that only nine out of the 94 questions were categorized as “Correct” by both assessors. Table <ref> displays the confusion matrix with these results.
We considered inter-assessor agreement (or intercoder reliability <cit.>) to evaluate our assessor reliability and agreement. For this study, we relied on simple agreement, denoted by ρ.
To determine ρ we utilize ℬ^', the set of questions on which both assessors agree on a label other than “Correct”, such that:
ℬ^'={x_j| z_j^k = z_j^ℓ, z_j^k≠ C ∀ k,ℓ∈{1,2 } and k ≠ℓ}
Therefore, ρ can be computed as follows:
ρ = 100( | ℬ| + | ℬ^'|/| 𝒟|)
= 100( 9 + 69/94)
= 82.98%
§.§ Experiment 2: ChatGPT Robustness and Ablation Study Analysis
Our second experiment relied on the results of the first experiment. We performed an ablation study via iterative sentence dropout on the ℬ dataset. After processing the sentence dropout prompts through ChatGPT, non-expert assessors independently categorized the answers producing the results displayed in Table <ref>.
As a means to evaluate the robustness and sensitivity of ChatGPT, we had ChatGPT independently categorize its own answers. As previously discussed in Subsection <ref>, we used the “gpt-3.5-turbo-0301” model for the self-assessment. Figure <ref> displays the results of the categorization for the assessors and ChatGPT.
Similarly to the first experiment, the values presented in Figure <ref> were further simplified by aggregating the results into one of two categories – “Correct” or “Other.” All questions in 𝒲 were grouped into the “Correct” category and the remaining questions were placed in the “Other” group. Table <ref> displays this simplification in a confusion matrix form.
For consistency, we calculated the simple agreement, ρ, for these results. To determine ρ we rely on 𝒲^' where the assessors were in agreement but the answers were not categorized as “Correct” by all:
𝒲^'={x_j ∖ i|z_j^'k = z_j^'ℓ, z_j^'k≠ C ∀ k, ℓ∈{1,2 } and k ≠ℓ}
Therefore, ρ can be computed as follows:
ρ = 100( | 𝒲| + | 𝒲^'|/| ℬ|)
= 100( 36 + 14/56)
= 89.29%
§ DISCUSSION
In this section the primary focus is on the drawbacks and risks involved with using ChatGPT for (self-)diagnosis. Our aim is not to discredit self-diagnosis, as we understand that it is critical to healthcare and society as many individuals lack easily accessible healthcare, seek communities and shared understanding, among numerous other reasons.
§.§ Concerns of Misuse
ChatGPT has received great praise for passing medical examinations <cit.>. From these results, researchers have proposed employing ChatGPT in areas such as medical education and medical report creation. However, ChatGPT's ability to answer exam questions does not necessarily translate to true medical understanding and skill. Our results demonstrate that ChatGPT answers only a small subset (17/94)
of the questions correctly across two independent evaluators (see Figure <ref>).
While <cit.> evaluate performance on open-ended questions, they effectively a) only use a binary categorization for the responses (correct/incorrect) since they censor any responses which are ambiguous to conclude that ChatGPT passes the exam, and b) use a consensus process for evaluators instead of a complete independent assessment. We note that these lead to a far more optimistic evaluation of ChatGPT responses. On the contrary, we do a more critical and fine-grained analysis of ChatGPT responses via a completely independent assessment methodology by human evaluators.
Unlike practicing clinicians, ChatGPT is not tested and vetted on its ability, has not received accredited medical education, licensing or approval to work, nor has it proven the understanding or skillset required to substantiate its claims. We remain unable to verify the validity of the resources ChatGPT uses to generate its answers because the data on which it has been trained is unknown. If clinicians make mistakes, they are liable and could face severe consequences such as being charged with medical malpractice or losing their license. ChatGPT has no such liability that should it make a mistake, the lack of an accountability mechanism leaves little motivation to make the model more fit for use in medical contexts.
§.§ Healthcare Misinformation
Our fine-grained scale, which makes a distinction between "Correct" and "Partially Correct," reveals that the majority of ChatGPT responses are incorrect, closely followed by partially correct responses as shown in Figure <ref>. In the healthcare context, the model's performance in its current form is extremely concerning but not surprising since these are not reasoning about the information as clinicians do.
Qualitatively, we observe that ChatGPT fails to indicate any uncertainty in its answers, offering either a direct response to the prompt or priming responses with statements of “most likely” or “likely.” ChatGPT's form of answering questions can be explained with reference to two structures: that of the English language and that of ChatGPT's prompt engineering. Firstly, matter-of-fact statements made by ChatGPT can be interpreted as a symptom of how we provide answers in the English language by re-stating the key information presented in the question at the beginning of our answer. ChatGPT's use of this formality often communicates certainty in its answers where the answer is incorrect. Secondly, ChatGPT's prompt engineering involves the probabilistic filling of slots to produce an outcome rather than crafting a response from scratch <cit.>. The use of pre-determined sentence structures also contributes to the projection of certainty. Failure to indicate any doubt is particularly concerning with regard to lay user interaction, where a baseline of medical knowledge and the ability to decipher between true and false information cannot be assumed.
Our results substantiate the concerns raised by <cit.> regarding the subtle forms through which LLMs such as ChatGPT may disseminate health misinformation. Though various studies have suggested that ChatGPT can be used for educational purposes <cit.>, namely through supporting the training of medical students and healthcare workers, they fail to aptly consider the negative effects of ChatGPT's inaccuracies in this use case. Suppose ChatGPT is used to train medical students and healthcare professionals in its current state. In that case, it may prevent these individuals from absorbing accurate information and, if left unchecked, facilitate misinformed conceptual understandings. Evidently, there is a risk that this phenomenon may adversely affect the treatment offered to patients by a healthcare provider who has used ChatGPT to supplement their knowledge.
Numerous works have suggested using AI as a means of combatting misinformation <cit.>. However, as discussed, using LLMs such as ChatGPT introduces a new form of medical misinformation. Even recently, despite efforts to use deep learning models in debunking misinformation about COVID-19, there is a lack of research on how to help the general public detect misinformation and improve their trust at the same time <cit.>. As ChatGPT is a black-box system trained on an unreleased dataset, transparency and explainability are paramount to safe and trustworthy applications in healthcare. Machine learning research focused on incorporating expert reasoning is another alternative to counter the unintended addition of medical misinformation into these types of systems.
§.§ Overt Optimism and Ablation Study Insights
This paper aimed to evaluate the robustness and feasibility of applying LLMs, specifically ChatGPT, to healthcare diagnostic systems. In addition to the human evaluators, we also asked ChatGPT to grade its responses. Adding ChatGPT as an assessor helps us analyze whether it can be used for double-checking facts.
This initial comparison displays a common shortcoming of ChatGPT – its overly optimistic disposition. In the first experiment (Figure <ref>), the human assessors tend to err on the side of caution and categorize most answers as “Incorrect”. ChatGPT takes a very optimistic approach as 57.45% of the answers it evaluated were marked as “Partially Correct”. This overly optimistic tendency can lead to misleading and missed diagnoses (False Negatives), which can have dire consequences.
We see this trend continue in the evaluation of our second experiment results. As shown in Figure <ref>, assessors one and two categorized 64.29% and 75.00% of answers as “Correct”, respectively. On the other hand, the traditionally optimistic ChatGPT categorized only 50.00% of answers as “Correct”. Although we previously established ChatGPT as overly optimistic, taken together these results show that ChatGPT is optimistic when the risk is higher and more cautious in cases where optimism would be warranted. This is the opposite of what we need from a healthcare application.
As an example, we can consider the answers provided for USMLE question number 108 displayed in Figure <ref>. When evaluating iterative sentence dropout for the question shown in Figure <ref>, some of the responses that both assessors categorized as “Correct” were categorized as “Partially Correct” by ChatGPT. A sample of the answer evaluations is summarized in Table <ref>.
From the sample answer categorizations shown in Table <ref>, we can see how ChatGPT lacks human intuition and is cautious when it is unnecessary. In this case, we see that ChatGPT's answer includes information in addition to the correct answer regarding exercise and healthy diet. This case highlights the importance of human decision making and intuition. Even though the assessor guidelines indicate that answers with additional content should be considered as “Partially Correct,” that reasoning should be overruled anytime human intuition and understanding are justified. See Appendix <ref> for more information.
Based on human experience, a family doctor telling a patient to regularly exercise and maintain a healthy diet is nothing out of the ordinary. Regardless of ailment, this sort of advice is commonplace for many patients. Accordingly, both assessors justified their answers along these principles, as this additional information, while unnecessary, does not logically negate the correctness of the answer. Conversely, ChatGPT approaches this answer with caution and categorizes the answer as partially correct for both cases. This discrepancy is just one example of many where ChatGPT is, at times, unnecessarily cautious and, at other times, overly optimistic when the risk is elevated.
As reflected in Table <ref>, removing a single sentence in the ablation study impacts the perceived correctness of ChatGPT's answer. Therefore, we can see that even minimal modifications impact ChatGPT's ability to answer medical questions. This is especially concerning as these questions are written to be posed to medical students, who have an assumed knowledge base that would not be found within the general population; it can therefore be assumed that lay users would not phrase their concerns in this manner.
Limitations
Although the USMLE Step 1 dataset was used to simulate an individual self-diagnosing, this dataset may still be too explicit compared to typical web searches. We attempted to remedy this concern with our ablation study, yet we believe that single-sentence dropout likely still provides more information than what an individual would provide. A valuable future extension would be to gather samples of real user searches relating to self-diagnosis, as this would allow direct application rather than reliance on synthetic data. Next, by design, LLMs tend to do well on tasks which are well represented in the training data; therefore, niche problems may have even lower performance <cit.>. This also raises an equity issue, where ailments or questions related to underrepresented groups may suffer from poor performance. While we do not explicitly study this task, future work can consider variations across medical specializations.
§ CONCLUSIONS
While large language models make headlines for passing medical licensing exams, and are consequently being considered as candidates to train the next generation of healthcare professionals, we find that LLMs' abilities are (understandably) modest at this time. More importantly, this misplaced trust in these systems can lead to reliance on their use for self-diagnosis by the general public. Our critical analysis of ChatGPT's performance on a medical licensing examination reveals a more humbling picture, where we find that ChatGPT is optimistic when there is higher risk, while it is more cautious in cases where that optimism is warranted. Unfortunately, this can lead to new forms and modes of healthcare misinformation. This examination therefore opens up challenges for ML researchers to build AI-powered models that can reason and respond responsibly, while providing ample insights for clinicians and the general public about their decision-making.
§ ASSESSOR PROCEDURES AND GUIDELINES
§.§ Overview
This document aims to provide the needed information to help an individual (the assessor) review answers from ChatGPT (a LLM) based on USMLE (US Medical Licensing Examination) questions. The goal of this task is to, to the best of your ability, categorize ChatGPT’s responses to the open-ended USMLE questions.
§.§ Categorization
For each question, ChatGPT will provide an answer. The assessor must then group the answer into one of four categories. The groups are presented, with their unique character identifier specified in brackets, as follows:
* Correct (C)
* Partially Correct (PC)
* Incorrect (I)
* Ambiguous (A)
§.§ General Guidelines
Oftentimes, non-expert users self-diagnosing will rely on web searches for research. Therefore, the categorization task may require additional research and web searches by the assessor to confidently make a decision. When searching for terminology, make sure to only rely on reputable sources. Examples of such sources include but are not limited to:
* Post-secondary institution pages (with a .edu URL)
  * Medical school pages such as Johns Hopkins
* Research papers and medical textbooks
  * PubMed
  * ScienceDirect
  * Etc.
* Government and Health Institutions
  * National Institutes of Health (and other .gov pages)
  * Mayo Clinic
  * Etc.
When additional resources are required, the goal of the assessor is not to gain an exhaustive understanding of the topic; rather, an understanding that is “good enough” is sufficient to determine, with confidence, which category is most appropriate. Luckily, in practice, this task is far easier than it may appear, as searches consisting of the terminology tend to present useful and trustworthy resources. Often it may be useful to search for “<correct answer> vs <ChatGPT answer>” where <correct answer> is the correct multiple choice answer and <ChatGPT answer> is the primary term or topic that ChatGPT presents as its answer. Finally, if you absolutely must, Wikipedia may be used as a sanity check, but avoid using Wikipedia as the sole factor in decision-making.
§.§.§ Additional Considerations
When determining the "correctness" of an answer, there are two rules that overrule other factors:
* If categorization is possible or aided by sound intuition, rely on human intuition for decision making.
  * Some prompts have “human” factors in play, such as questions that ask for “What is the most appropriate response” or similar. In these cases, human intuition and implicit understanding may overrule other guidelines presented in this document.
* If the answer includes an answer that is present among the multiple-choice options other than the correct one, it is automatically “Incorrect.”
  * Although the prompt is presented to ChatGPT as an open-ended question, discernment is critical to diagnostics. Therefore, as we are aware of the intended “Correct” answer, the inclusion of other answers in the response makes it “Incorrect,” as they would be considered “Incorrect” if an individual were to write the USMLE test as a stand-alone examination.
§.§ Categories
In the following subsections, we will discuss each category and what level of “correctness” is required for that answer to be categorized as such. For each category, examples of theoretical ChatGPT outputs will be provided with an explanation for why the selected category is correct. Additional information may be included in the tables but will be explained as needed. Each of these examples will be based on the example in Figure <ref>:
§.§.§ Correct (C)
Answers labelled as “Correct” (C) must be obvious and correct within the context of the provided solution. As ChatGPT is prompted to explain, there are instances where ChatGPT may provide an initial answer in the first couple of sentences that is vague. Still, the explanation can aid in creating a distinction between categories. Lastly, for an answer to be labelled as correct, the assessor must rely on intuition and common sense to determine if the answer is presented to an acceptable level of correctness to be categorized as such.
Given the provided pet example shown in Figure <ref>, Table <ref> provides examples of what would be considered “Correct” answers.
Examples of “Correct” answers with their categorization type, example answer, and categorization rationale (columns: Type, Example ChatGPT Answer, Categorization Rationale).
Direct, correct answer The child’s pet is most likely a dog. This is because dogs are the most likely pets to be taken for walks and are known for enjoying fetch. In this case, the ChatGPT answer is direct and clear. The first sentence is correct, and the explanation provides reasoning without modifying the answer.
A correct answer with extensions The child’s pet will most likely be a dog. This is because dogs are the most likely pets to be taken for walks and are known for enjoying fetch. The child may also enjoy raising cats or birds. While pets provide different challenges, they enjoy interactions with their owners and their own means of physical exertion. Although the theoretical ChatGPT answer provides additional extensions, this does not negate the correctness of the answer. The correct answer is still provided in this case, and the additional information does not nullify that answer. ChatGPT sometimes adds additional details, but these should be considered extensions as they do not invalidate the original answer.
A correct answer with specific examples The child’s pet is most likely to be a dog. As indicated by the hair, the dog may be a breed such as a Maltese or Poodle. These dogs have hair instead of fur coats typically associated with lower shedding and hypoallergenic breeds. In this case, the theoretical output correctly identifies the answer while providing additional unrequested information. Therefore, adding examples does not invalidate the correctness of the original statement. Although uncommon, the examples, in this case, do not change the overall statement and answer produced by the ChatGPT output.
A correct answer with a synonym The child’s pet is most likely a Canis lupus familiaris. Canis lupus familiaris is a mammal often encountered as a domesticated pet. This is because Canis lupus familiaris is the most likely pet to be taken for walks and is known for enjoying fetch. While not the exact terminology, the theoretical ChatGPT answer utilizes the binomial name of dogs rather than explicitly stating “dog.” ChatGPT may include alternative nomenclature that is still correct, as they mean the same thing as the identified answer.
§.§.§ Partially Correct (PC)
Answers labelled as “Partially Correct ”(PC) are close to being fully correct but fall short of being entirely accepted as correct. This distinction typically occurs in one of three ways:
* A broader-scope, general answer that includes the correct answer.
* A subset of the correct answer.
* The correct answer with additional answers.
To better understand these three cases, we will utilize a general example with answers that could be provided that would make the answer partially correct. Please consider the general example question shown in Figure <ref> as a reference for understanding the rationale for grouping answers as “Partially Correct” shown in Table <ref>.
Examples of “Partially Correct” answers with their categorization type, example answer, and categorization rationale (columns: Type, Example ChatGPT Answer, Categorization Rationale).
Broader Scope The child’s pet is a mammal. The pet is indicated to have hair which is common among household pets such as dogs, cats, rodents, and many others. While a dog is a mammal, the example answer is far too broad of a scope to be considered a correct identification. This instance would be partially correct as the provided answer is logically sound and void of mistakes yet is far too broad and general to be accepted as correct.
A subset of the correct answer The child’s pet is likely a Maltese or Poodle, as indicated by the presence of “hair” rather than “fur.” Additionally, Maltese and Poodle dog breeds are intelligent breeds known to be great companions and require physical exercise. Taking these dogs for walks and playing fetch are excellent activities for stimulating and exercising these dog breeds. Although Maltese and Poodle are known dog breeds, the specificity would make this answer partially correct. Similar to the broader scope case, this answer is both logically sound and void of errors, but the overly specific answer is considered a subset of the correct answer. In other words, a Maltese or Poodle is a type of dog but does not state “dog” as an answer and would be partially correct. While this example may seem extreme, in the healthcare domain, an overly-specific answer that is a subset of a higher-level diagnosis may also be incorrect or a decision that cannot be made with specialist aid.
A correct answer with additional answers The child's pet is likely a dog or a cat. Dogs and cats are known to be great pets for children and adults alike. While uncommon, both dogs and cats can be trained to play games such as fetch and taken on outdoor walks. In this case, although the correct answer, “dog,” is present, an additional (incorrect) answer, “cat,” is also present. In this case, while the correct answer is present, the additional answers prevent this answer from being considered fully correct.
§.§.§ Incorrect (I)
The “Incorrect” (I) category is reserved for incorrect answers. Answers may be incorrect for multiple reasons, but the general understanding should be: “if an answer falls into neither the ‘Partially Correct’ nor ‘Correct’ category, the answer is ‘Incorrect’.” Additionally, as the original question used to generate the open-ended prompt is multiple choice, if the provided answer is another answer (provided in the multiple-choice question) other than the correct answer, it is automatically considered incorrect. As a non-exhaustive list, examples of categorization into the “Incorrect” category are provided in Table <ref>.
Examples of “Incorrect” answers with their categorization type, example answer, and categorization rationale (columns: Type, Example ChatGPT Answer, Categorization Rationale).
Explicitly Incorrect The child’s pet is most likely a cat. Cats are known to be great pets and interact well with children. Although uncommon, cat owners sometimes will take their cats on walks and teach their cats how to play fetch. In this case, ChatGPT’s solution is explicitly incorrect. While the rationale is concrete, the most likely and desired answer “dog(s)” is more typically representative of descriptions provided in the question.
Incorrectness caused by symptom identification The child is most likely to experience a sense of companionship and an improved sense of responsibility. Owning and taking care of pets, such as dogs or cats, requires a great deal of responsibility and the child can learn responsibility working with a pet. Likewise, working with pets can provide a sense of companionship between the child and their pet. In this case, ChatGPT focuses on a symptom or a case that is a result of the correct answer. While the answer does discuss dogs in the context of pet ownership, the main focus is on aspects concerning companionship and responsibility. While a similar case may be partially correct, this instance would be considered incorrect as the focus is on companionship and responsibility - symptoms that are not solely associated with the root cause. Therein, the correct solution is not present or adequately represented. Unlike the partially correct example, this example is incorrect as the solution is also void of any subset of the correct answer.
Incorrect due to irrelevance The child is most likely to do athletics due to their enjoyment of taking their pet for walks and playing fetch. As these activities require physical exertion, the child is cultivating an active foundation and would likely enjoy athletic activities. This answer is explicitly incorrect as the theoretical ChatGPT answer misinterpreted the question and provided an irrelevant answer. This answer does not address the proposed question and focuses on an irrelevant topic.
§.§.§ Ambiguous (A)
“Ambiguous” (A) is the final category. Any answer that the assessor is absolutely uncertain about should be categorized as “Ambiguous.” This category should be reserved as a last-resort option. Finally, as per its namesake, if an answer is truly ambiguous, it should be labelled as such.
§.§ Evaluation Steps
In this section, we will outline what is expected of the assessor for each question. We will use this section to provide general steps and best practices to obtain the highest degree of agreement across assessors.
§.§.§ Setup
When grouping solutions, it is vital that a few guidelines are followed:
* This task is an individual task.
  * Please ensure that the marking decisions are made by yourself alone and not influenced by a discussion with other markers.
  * Ensure you only view your own spreadsheet sheet while assessing. Refrain from looking at others’ evaluations while completing this activity.
  * Do not ask for help from third-party entities or groups (such as clinicians or medical experts) unless instructed.
  * It is vital that all the categorization activities are decisions made by a single assessor alone.
* Ensure that you understand the categories and what dictates how an answer should be categorized.
With the guidelines in mind, you will find the spreadsheet software presented as shown in the example in Figure <ref>.
As shown in the example in Figure <ref>, you will need to consider the contents on the screen. For a detailed understanding, a summary of the contents is as follows:
* q_num
  * The question number on the USMLE test.
  * This value can be ignored by the assessor.
* q_text
  * The cells in this column include the original question text used to generate the prompt for ChatGPT. For these tests, multiple-choice, single-answer questions were selected and modified to be presented as open-ended questions.
  * Note: Before presenting the prompt to ChatGPT, each question would have had the multiple-choice answers removed and the last sentence modified to present the prompt as an open-ended question.
* correct_answer
  * This column displays the correct answer corresponding to the letter options provided in the multiple-choice question.
  * In the case of the provided example, the correct answer is “B” or “Example Answer B.”
* usmle_1_a_category
  * The cells in this column are where the assessor should indicate their decided category.
  * In the case of the example, the assessor would have determined that the answer output by ChatGPT was incorrect and would indicate as such with “I.”
* usmle_1_a_0
  * The cells in this column present the answer provided by ChatGPT.
  * The marker will primarily focus on the content in this cell and compare those results to the correct answer to determine the solution category.
§.§.§ Evaluation
After ensuring your setup is complete, you will be ready to evaluate the answers. Please familiarize yourself with the provided example presented in Figure <ref>, as references will be made to the names of columns presented in that section.
For each answer, the following steps are recommended to achieve accurate groupings while avoiding wasted time.
* Read the last sentence of the question (in q_text).
  * Although the prompt provided to ChatGPT will be slightly modified, understanding what ChatGPT was asked for will significantly aid in the categorization process.
* Read the first two-to-three sentences of the solution (usmle_1_a_0) and check if the correct answer is directly provided.
* Informally, determine the answer’s category based on the initial observed content.
* Read through the entire solution (usmle_1_a_0), verifying that the primary presented solution is consistent throughout the answer.
  * There are instances where ChatGPT may appear to provide the correct answer in the first sentence, but further reading indicates this not to be the case.
* If further information is required, perform a web search on the presented solution and the correct solution.
* Utilize web information, intuition, and understanding to determine the most appropriate category.
* Repeat from Step 1 for each question.
Additional Notes:
* If you find that you are stuck on a question for any reason, highlight the question cell and continue categorizing other answers. You can always come back later.
* Do your best to follow the logic and reasoning methodology outlined in this paper.
§.§.§ Completion
Upon completion please report back to the primary contact who will be indicated to you through external communication. Depending on the results and inter-assessor variability, further discussions may be requested to maximize consistency across assessors.
Thank you for reading through this document and we appreciate your help in reviewing this data.
§ CHATGPT MODEL SELECTION PROCESS AND MODEL PARAMETERS
In this study, we determined that the “text-davinci-003” model would be best suited for our purposes. As of April 2023, the improved GPT-3.5 and GPT-4 models do not provide fine-tuning options, and GPT-4 is currently only in a limited beta stage <cit.>. For reproducibility, all of our API calls were made between February and March 2023 using the parameters summarized in Table <ref>.
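To make the querying procedure concrete, the sketch below shows how such a call can be issued with the openai Python package (legacy v0.x completions API). This is an illustration only: the prompt text and the parameter values (temperature, max_tokens) are placeholders and not the values actually used, which are those reported in Table <ref>.

import openai

openai.api_key = "YOUR_API_KEY"  # assumed to be supplied by the user

def query_chatgpt(prompt_text):
    # Legacy completions endpoint used for text-davinci-003 (openai-python v0.x).
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt_text,
        temperature=0.7,   # placeholder value
        max_tokens=512,    # placeholder value
    )
    return response["choices"][0]["text"].strip()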
http://arxiv.org/abs/2307.03902v1 | 20230708044951 | Feature selection simultaneously preserving both class and cluster structures | Suchismita Das, Nikhil R. Pal | cs.LG | cs.LG
Corresponding author
Electronics and Communication Sciences Unit, Indian Statistical Institute, 203 B T Road, Kolkata-700108
[email protected], [email protected]
When a data set has significant differences in its class and cluster structure, selecting features aiming only at the discrimination of classes would lead to poor clustering performance, and similarly, feature selection aiming only at preserving cluster structures would lead to poor classification performance. To the best of our knowledge, a feature selection method that simultaneously considers class discrimination and cluster structure preservation is not available in the literature. In this paper, we have tried to bridge this gap by proposing a neural network-based feature selection method that focuses both on class discrimination and structure preservation in an integrated manner. In addition to assessing typical classification problems, we have investigated its effectiveness on band selection in hyperspectral images. Based on the results of the experiments, we may claim that the proposed feature/band selection can select a subset of features that is good for both classification and clustering.
Feature selection, Structure preserving, Classification, Neural network, Sammon's Stress, Band selection, Hyperspectral Image.
§ INTRODUCTION
Feature selection methods can be broadly classified on the basis of the utilization of the class label information. There are three categories: supervised, semi-supervised and unsupervised <cit.>. Supervised feature selection methods exploit the label information to find out the relevant features which distinguish samples of different classes <cit.>. Semi-supervised feature selection is used when some labeled samples along with plenty of unlabelled samples are present <cit.>. Both labeled and unlabelled data are used to modify a hypothesis obtained from the labeled data <cit.>. Unsupervised feature selection is much more difficult as it needs to find out the useful features in the absence of the label information <cit.>. Different criteria have been chosen to select a subset of original features in different unsupervised feature selection studies. Some of them are: preserving the data distribution such as manifold structure <cit.>, preserving cluster structure <cit.>, and preserving data similarity <cit.>. It is noteworthy that in the case of unsupervised feature selection, some methods try to preserve the “structure” or “geometry” of the data in some sense. In contrast, supervised feature selection methods in most cases do not set any explicit criteria to preserve the structure of the data. They only pay heed to separating the classes as much as possible with different measures exploiting class information such as Fisher score <cit.>, Laplacian score <cit.>, mutual information <cit.>, normalized mutual information <cit.>, ReliefF <cit.>, class correlation <cit.>, and classifier score <cit.>.

We should note here that the feature selection criteria are not always led by a single objective. Feature selection methods often follow a criterion that consists of two or more objectives. The study in <cit.> proposes a criterion named `maximum projection and minimum redundancy' which is governed by two goals: projecting data into a feature subspace with minimum reconstruction error and minimum redundancy. The studies in <cit.> claim that both global structure and local structure should be preserved in the projected space, as both of them may carry important discriminating information, and hence they have proposed feature selection schemes that focus both on global and local structure preservation. The investigation in <cit.> claims to preserve dual global structures.

Going through various feature selection schemes with multiple objectives, we found that, whenever class labels are available, no work on feature selection explicitly focuses on preserving structural information along with class information, although both of these carry important discriminative information and may have a positive impact on the generalization ability of the classifier. Suppose, for a data set, the class and cluster structures are substantially different. Exploiting only the class labels, it may not be possible to keep the cluster structures in the projected space. For a practical system, even when the primary task is classification, we may need to cluster the samples in the space defined by the selected features. For example, fuzzy rule based classifiers are often designed by clustering the training data for each class and translating each cluster into a rule <cit.>. We could not find any feature selection method that focuses both on class and cluster separability. To bridge this gap, in this study we propose a feature selection method that selects features preserving class and cluster-structure information simultaneously.
We employ a multi-layer perceptron (MLP) based neural network to develop an embedded feature selection scheme. The training of the proposed MLP based feature selection method is governed by both class discriminating and cluster (structure) preserving objectives. The philosophy is quite general and can be easily extended to other networks such as radial basis function networks.
§ PROPOSED METHOD
Let us denote the input data by an n× P matrix, 𝐗={𝐱_i∈ℝ^P}_i=1^n. Here, 𝐱_i is a P dimensional row vector of the form 𝐱_i=(x_i1,x_i2,⋯,x_iP). Let the collection of class labels of 𝐗 be 𝐙={z_i∈{1,2,⋯, C}}_i=1^n, where z_i is the class label corresponding to 𝐱_i. We aim to select a subset of size Q from the original set of features such that the selected subset performs reasonably well in terms of the classification task as well as in clustering. In other words, if we design a classifier using the selected features, the performance of the classifier would be comparable to that of a classifier designed using all features. Similarly, if we cluster the data in the reduced dimension as well as in the original dimension, we expect to get similar partition matrices. Here, we propose a neural network-based framework to select features. Neural networks have been explored for feature selection <cit.> as well as for classification <cit.>. However, in our proposed model the neural network simultaneously selects features and learns a classifier, as we follow an embedded method for feature selection. Moreover, our proposed network preserves structural information and the class label information simultaneously, whereas the feature selection networks in <cit.> solve classification problems and consider class label information in their loss functions but not any structural information. Note that the work in <cit.> considers a system identification problem. To build the neural network-based embedded feature selector, we employ the multi-layer perceptron (MLP) based framework used in <cit.>. The basic framework is shown in Fig. <ref>.
As seen in Figure <ref>, preceding the input layer of the MLP, there is a layer consisting of P nodes. Before entering the input layer of the MLP, the jth feature passes through the node f_j(). These nodes act as attenuating gates that allow or block features from contributing to the output of the neural network. For the ith instance, its jth feature x_ij on passing the gate node f_j() becomes a_jx_ij; i.e., f_j(x_ij)=a_jx_ij. In an MLP, a weighted sum of the values available at the input nodes is applied to the hidden nodes of the first hidden layer. A zero value at an input node implies that the corresponding feature is not considered. When training of the MLP-based framework is complete, a_js for the selected features become close to 1, effectively allowing them to contribute to the classifier. In contrast, for poor or rejected features, a_js become close to 0, effectively making them not contribute to the classifier. In <cit.>, this framework was explored for classification-oriented feature selection, group feature selection, and redundancy-controlled feature selection. Here, we explore this framework for simultaneous structure-preserving and class-discriminating feature selection. Next, we elaborate on the MLP-based framework and the proposed objective functions to train the network.
We denote the P nodes before the input layer of the MLP as f_j()s for j=1,2,… P where f_j() is a gate or modulator function applied on the jth feature, x_j. Now, we have to design f_j() in such a way,
f_j(x_j)=a_jx_j=x_j if x_j is a useful feature.
0 otherwise.
In our framework, the factor, a_j is learnable. We implement a_j as a smooth continuous function, a_j=exp(-λ_j^2). Clearly, when λ_j=0, the value of exp(-λ_j^2)= 1 and when λ_j→±∞, the value of exp(-λ_j^2)=0. By adding suitable regularizer terms to the objective function we design our learning system in such a way that, over the learning process, the gate parameters, λ_js for useful features drop close to zero and that for derogatory or indifferent features rise to high values. So, in our learning system, the learnable parameters, λ_js and the neural network weights are learned together, i.e., the loss function is minimized with respect to both λ_js and the neural network weights.
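For illustration, the following minimal NumPy sketch shows how the gate values a_j=exp(-λ_j^2) modulate the features and how a subset of Q features can be read off from the learned λ_js; the function and variable names are ours and are not part of the actual TensorFlow implementation described later.

import numpy as np

def apply_gates(X, lam):
    # Modulate each feature j of X (n x P) by a_j = exp(-lam_j^2).
    a = np.exp(-lam ** 2)          # gate values in (0, 1]
    return X * a                   # broadcast over the n rows

def select_features(lam, Q):
    # Indices of the Q features whose gates are closest to fully open (smallest |lam_j|).
    return np.argsort(np.abs(lam))[:Q]

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))                # n = 5 samples, P = 4 features
lam = np.array([0.05, 2.0, 0.1, 3.0])      # small lam -> nearly open gate
X_gated = apply_gates(X, lam)
print(select_features(lam, Q=2))           # -> [0 2]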
Now, we have to define a suitable loss function for selecting features along with learning the embedded classifier. Our aim is to select features that are reasonably good for classification as well as clustering. To satisfy this requirement, we take the loss function as a combination of two losses E_class and E_struct. E_class is considered for preserving class information and E_struct is considered for preserving structural information. At this moment let us consider the network for selecting features for efficient classification only. A suitable loss function to impose class discrimination is the cross-entropy loss <cit.>. We define, E_class as the cross-entropy loss involving actual and predicted class labels.
E_class = -(1/n)∑_i=1^n∑_k=1^C t^i_k log(p_k(𝐱_i))
Here, t^i_k is the kth element of the one-hot encoded label of the sample 𝐱_i, i.e., the kth element of the vector 𝐭^i∈{ 0,1}^C such that
t_k^i=
1 if k=z_i
0 otherwise
In (<ref>), p_k(𝐱_i) is the predicted probability (by the MLP classifier) of 𝐱_i being in kth class. As already discussed above, for effective feature selection, the magnitude of λ_js for the selected features should drop to almost zero and for rejected features should rise to high values. To ensure this condition we add the following regularizer.
E_select = (1/P)∑_j=1^P a_j(1-a_j) = (1/P)∑_j=1^P exp(-λ_j^2)(1-exp(-λ_j^2))
In a feature selection framework, a constraint for selecting a fixed number of features is necessary. The following regularizer tries to keep the number of the selected features close or equal to Q.
E_Q = (1/Q^2){(∑_j=1^P a_j)-Q}^2 = (1/Q^2){(∑_j=1^P exp(-λ_j^2))-Q}^2
So, the overall loss function for the selection of features with our framework for classification purposes is the following.
E= E_class + α_1E_select + α_2E_Q
Here, α_1≥ 0,α_2≥ 0 are scalar multipliers for adjusting the importance of E_select and E_Q in the overall error function E.
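For illustration, a minimal NumPy sketch of the loss in Equation (<ref>) is given below; it assumes the class probabilities p_k(𝐱_i) have already been computed by the MLP, and the function and variable names are ours (in the actual system these terms are implemented in TensorFlow and differentiated automatically).

import numpy as np

def classification_selection_loss(probs, labels, lam, Q, alpha1=1.0, alpha2=1.0):
    # E = E_class + alpha1 * E_select + alpha2 * E_Q
    # probs : (n, C) predicted class probabilities p_k(x_i) from the MLP
    # labels: (n,) integer class labels z_i
    # lam   : (P,) gate parameters lambda_j
    n, C = probs.shape
    a = np.exp(-lam ** 2)

    # E_class: cross-entropy with one-hot targets
    E_class = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))

    # E_select: pushes each gate towards 0 or 1
    E_select = np.mean(a * (1.0 - a))

    # E_Q: keeps the effective number of open gates close to Q
    E_Q = ((a.sum() - Q) ** 2) / Q ** 2

    return E_class + alpha1 * E_select + alpha2 * E_Q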
Now let us focus on our original agenda of selecting features that perform satisfactorily both for classification and clustering. To preserve structural information of the data in the lower dimensional space formed by the selected Q features, we consider Sammon's stress <cit.> as a loss function. Sammon's stress is the loss function of a non-linear mapping named Sammon's mapping, which is able to capture complex non-linear structures in data and, as a result, also preserves cluster structure. The lower the value of Sammon's stress, the better the lower dimensional representation captures the original inter-point distances, and hence the structure, of the original data. We can define Sammon's stress involving the original input space and the selected feature space as follows.
E_sammons = (1/∑_i,l=1^n d_il)∑_i=1^n-1∑_l=i+1^n (d_il^𝐗 - d_il^𝐗̂)^2 / d_il^𝐗
d_il^𝐗 is the distance between 𝐱_i and 𝐱_l. 𝐗̂={𝐱̂_i=(a_1x_i1,a_2x_i2,⋯,a_Px_iP)^T∈ℝ^P}_i=1^n. So, d_il^𝐗̂ is the distance between 𝐱̂_i and 𝐱̂_l. As discussed earlier, at the end of the training of our embedded system, a_js will be close to 0 or 1 depending on whether the corresponding features are rejected or selected. Therefore, for a trained system, d_il^𝐗̂ would signify the distance between the ith and lth instances in the latent space formed by the implicitly selected Q features. So, considering E_sammons in Equation (<ref>) as a regularizer, the resultant overall loss function is given by.
E_tot=E_class+ β E_sammons + α_1 E_select + α_2 E_Q
β≥ 0 is a scalar multiplier that controls the trade-off between the class information and the structural information in the feature selection process.
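For illustration, Sammon's stress of Equation (<ref>) between the original data and the gated data can be computed as in the following NumPy/SciPy sketch; the function names are ours, and this is not the TensorFlow implementation used for training.

import numpy as np
from scipy.spatial.distance import pdist

def sammon_stress(X, X_gated, eps=1e-12):
    # Sammon's stress between original data X and gated data X_hat (both n x P).
    d_orig = pdist(X)          # pairwise distances d_il^X, i < l
    d_gate = pdist(X_gated)    # pairwise distances d_il^X_hat
    return np.sum((d_orig - d_gate) ** 2 / (d_orig + eps)) / (np.sum(d_orig) + eps)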
Note that, the computational complexity for the loss function in Equation (<ref>) is O(n^2). For large n, computing Equation (<ref>) and hence Equation (<ref>) is intensive. As the weight update at each iteration will involve computing Equation (<ref>), the overall computation cost would be high. For small and moderate n, we use Equation (<ref>) as the loss function to be minimized. However, for large n to avoid the high computational cost we modify Equation (<ref>) as follows.
E_struct = (1/∑_𝐱_i,𝐱_l∈ S_t d_il)∑_𝐱_i∈ S_t∑_𝐱_l∈ S_t; 𝐱_l≠𝐱_i (d_il^𝐗 - d_il^𝐗̂)^2 / d_il^𝐗
Here S_t is a randomly selected subset of 𝐗 at the tth iteration. Different S_ts are chosen at different iterations and hence different sets of inter-point distances are preserved. Since the considered MLP is trained over a large number of iterations, the use of Equation (<ref>) is expected to result in almost the same effect as that by Equation (<ref>). We have to choose |S_t| such that Equation (<ref>) is computationally manageable and at the same time it should be large enough to make E_struct an effective substitute of E_sammons. Adding Equation (<ref>) to Equation (<ref>) we propose the following loss function for our system.
E_tot=E_class+ β E_struct + α_1 E_select + α_2 E_Q
E_tot is minimized with respect to the gate parameters λ_js and the weights of the network to find their optimal values.
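For large n, the subset-based approximation E_struct of Equation (<ref>) can be sketched as below, where a fresh random subset S_t is drawn at every iteration; |S_t| plays the role of a batch size, and the names used here are illustrative only.

import numpy as np
from scipy.spatial.distance import pdist

def structure_loss_minibatch(X, lam, subset_size, rng, eps=1e-12):
    # E_struct computed on a random subset S_t of the data.
    idx = rng.choice(X.shape[0], size=subset_size, replace=False)
    Xs = X[idx]
    Xs_gated = Xs * np.exp(-lam ** 2)
    d_orig = pdist(Xs)
    d_gate = pdist(Xs_gated)
    return np.sum((d_orig - d_gate) ** 2 / (d_orig + eps)) / (np.sum(d_orig) + eps)

# at iteration t: E_tot = E_class + beta * structure_loss_minibatch(X_train, lam, 100, rng) + alpha1 * E_select + alpha2 * E_Q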
§ EXPERIMENTATION AND RESULTS
The feature selection framework proposed in this chapter is generic but it can be adapted to solve specialized problems. We have studied the proposed framework for general datasets as well as for solving a special problem: band selection of hyperspectral images. We present the results of band selection for HSIs in a different subsection, Subsec. <ref>. We present the results of feature selection for the conventional classification problem in the following subsection (Subsec. <ref>).
§.§ Feature selection for conventional classification problems
We have used five publicly available datasets that are very commonly used for classification and clustering. The first four datasets are downloaded from UCI machine learning repository <cit.>. AR10P is downloaded from the open-source feature selection repository of Arizona State University<cit.>. We have also performed the experiments with three benchmark HSI datasets for land cover classification problems. We discuss them in a separate subsection (Subsec. <ref>).
The details of the number of features, number of classes, and number of instances for the five datasets are summarized in Table <ref>.
The datasets are used directly without any further processing. The datasets are partitioned into training and test sets as approximately 90% and 10% of the total number of instances. To implement our proposed feature selection scheme we use the neural network shown in Fig. <ref> with the number of hidden layers, n_H = 1. The input and output layers have P and C nodes respectively, where P is the number of features and C is the number of classes corresponding to the considered dataset. The number of hidden nodes in the hidden layer is 8 (20 for AR10P data set). To get stable feature selection results, the network weights are initialized in a certain way. To set the initial weights of the proposed network, we undergo the following steps. First, we consider the usual MLP part of our network (i.e. without feature selection), depicted by the portion within the dotted rectangle in Fig. <ref>, and initialize its weights randomly. Next, we train the usual MLP with the cross-entropy loss defined in Equation (<ref>) with the training set until convergence. The weights of the converged network are used as the initial weights of the proposed network. The gate parameter λ_js are initialized with values drawn randomly from a normal distribution with mean =2 and spread =1/√(P). The initial values of λ_js are chosen around 2 to effectively make the gates almost closed initially. As the learning progresses the λ_js are updated in a way to allow the useful features to the network. For the proposed system, to select a subset of Q features, the gate parameters λ_js are sorted in ascending order, and the Q features corresponding to the top Q, λ_js are selected. The network weights as well as the gate parameters λ_js are learned using the adaptive gradient algorithm, `train.AdagradOptimizer' routine of the `TensorFlow' framework <cit.>. For all experiments with the data sets in Table <ref>, both α_1 and α_2 of the error functions in Equations (<ref>) and (<ref>) are set as 1. The total number of iterations for training the network is set to 20000. The five datasets we consider here, have the number of instances n<400, which is not so large. Therefore, we use (<ref>) as the overall loss function to train the MLP based architecture for selecting features that are reasonably good for clustering and classification. When β=0 in (<ref>), effectively, the error function that governs the learning of our MLP based embedded feature selection scheme is (<ref>). The corresponding feature selection scheme now only considers classification. Let us name this method as feature selection with MLP (FSMLP). When β≠ 0 in (<ref>), our method takes structure preservation into account along with classification. Let us name the corresponding method as FSMLPstruct. To understand the importance of adding the structure preserving regularizer (<ref>), we perform feature selection with FSMLP and compare with FSMLPstruct having different β values. We explore three values of βs 0.1, 1, and 10. Although the exact value of the β that is optimum for a particular dataset for a particular number of selected features Q cannot be decided from these three values, we investigate the effect of three widely different βs to see the role of the weight to the structure preserving regularizer, i.e. β on the performance of the selected features. We compare with three other methods namely, Independent Component Analysis (ICA)-based feature selection <cit.>, F-score based filter method <cit.>, and mutual information based filter method <cit.>. 
The performance of both FSMLP and FSMLPstruct is dependent on the initial weights of the network. So, we repeat the initialization of the network weights and gate parameters λ_js five times and run the schemes- FSMLP or FSMLPstruct five times with the five initializations. For the performance measure of FSMLP and FSMLPstruct, we consider the average performance over the five subsets obtained from the five runs. To check the effectiveness of the methods in selecting features that perform well in classification and clustering simultaneously, we compute the classification scores of the support vector machine (SVM) classifier as well as several structure-preserving indices: Sammon's stress (SS) <cit.>, normalized mutual information (NMI) <cit.>, adjusted rand index (ARI) <cit.>, and Jaccard Index (JI) <cit.>. As the measure of classification performance, we use the overall classification accuracy (OCA) of the SVM classifier. The optimal hyper-parameters of SVM are determined through five-fold cross-validation using grid search. Note that here the test set is not only unseen to the SVM classifier but unseen to the feature selection methods also. SS, defined in Equation (<ref>) use the original inter-point distances d_il^𝐗s and latent space inter-point distances d_il^𝐗̂s. Here to compute d_il^𝐗̂, we use the lower dimensional data formed by the selected Q features. We use NMI, ARI, and JI as the structure-preserving performance metrics by supplying the cluster labels obtained from clustering the data in the original space (using all features) as the true label and the cluster labels obtained from clustering the data in the reduced space formed by the selected Q features as the predicted cluster label. So, NMI, ARI, and JI measure how the cluster assignments in the original space and in the selected space agree, effectively giving a measure for the preservation of the original cluster structure in the selected space. We know that the maximum value for NMI or ARI or JI is 1. Here, the value of each of these three measures being close to 1 indicates that the cluster structure in the original space is preserved in the selected space. As the clustering algorithm we use, the fuzzy C means (FCM) algorithm <cit.> with the fuzzy exponent m=2. We set the number of clusters for FCM algorithm as the number of classes. We use two values for the number of the selected features, Q. Q=0.35 × P and Q=0.5 × P, where these values are rounded up to the nearest integers using the ceiling function.
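A sketch of how the cluster-agreement indices can be computed is given below. It assumes that hard cluster labels have already been obtained (in our experiments via FCM; that step is not shown here) and uses scikit-learn for NMI and ARI, while the pair-counting Jaccard index is computed directly from co-clustered pairs; all names are illustrative.

import numpy as np
from itertools import combinations
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def pairwise_jaccard(labels_a, labels_b):
    # Jaccard index over pairs of points that are co-clustered in either partition.
    n = len(labels_a)
    n11 = n10 = n01 = 0
    for i, j in combinations(range(n), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a and same_b:
            n11 += 1
        elif same_a:
            n10 += 1
        elif same_b:
            n01 += 1
    return n11 / (n11 + n10 + n01) if (n11 + n10 + n01) > 0 else 1.0

# labels_full: cluster labels obtained using all P features
# labels_sel : cluster labels obtained using the selected Q features
def structure_scores(labels_full, labels_sel):
    return {
        "NMI": normalized_mutual_info_score(labels_full, labels_sel),
        "ARI": adjusted_rand_score(labels_full, labels_sel),
        "JI": pairwise_jaccard(labels_full, labels_sel),
    }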
Tables <ref> and <ref> summarize the performances of the proposed method and other comparing methods for training and test sets, respectively for the E. coli dataset. We tabulate the three previously mentioned structure preserving measures and one classifier score for two choices of the number of selected features (approximately 35% and 50% of the original dimension) i.e., Q=3, and Q=4 in Tables <ref> and <ref>.
As we have already discussed in Sec. <ref>, the smaller the value of SS, the better the projected space (formed by the selected features) preserves the original pairwise distances and hence the structure of the original data. We observe in Table <ref> that the mutual information based method shows the lowest value of SS, and the second lowest is FSMLPstruct with β=10 for both Q=3 and Q=4. Actually, the SS values for the mutual information based method and FSMLPstruct with β=10 are almost the same, equal up to two places after the decimal point for both choices of Q. The SS values achieved by ICA, the F score based method and FSMLP are comparatively higher. So, the mutual information based method and FSMLPstruct with β=10 preserve the original pairwise distances most in the projected space. They are also expected to preserve the structures most. The values of the other three structure preserving measures, i.e., NMI, ARI, and JI, confirm that. We know that the higher the values of NMI, ARI, and JI are, the closer the clustering structures of the projected space are to the original clustering structures. The highest values of NMI, ARI, and JI are obtained by the mutual information based method, followed by FSMLPstruct with β=10. So, in cluster structure preservation, the mutual information based method and FSMLPstruct with β=10 perform better than the other three methods and even than the other two models trained by FSMLPstruct with β=0.1 and 1. β is the weight of the regularizer E_sammons in (<ref>). Although SS and E_sammons are not exactly the same, under the influence of E_select, it is expected that the higher the value of β, the smaller the value of SS would be. Table <ref> reconfirms that. The SS values become smaller as β increases from 0.1 to 10. SS values of FSMLP (which is basically FSMLPstruct with β=0) and FSMLPstruct with β=0.1 are the same for both choices of Q. Actually, FSMLP and FSMLPstruct with β=0.1 have all ten measures the same. This shows that for the E. coli dataset β=0.1 does not give any effective weightage to the structure preservation term and chooses the same subsets as FSMLP. For the classification performance measure OCA, FSMLPstruct with β=1 achieves the highest value, followed by FSMLPstruct with β=10. The mutual information based method and FSMLPstruct with β=10 have all the structure preserving measures either almost equal or of comparable values; however, for OCA, FSMLPstruct with β=10 is better than the mutual information based method by a margin of more than 18%. For E. coli data, the test set follows the observed trends of the training set with the following exceptions. First, for Q=3, the values of NMI, ARI, and JI do not increase as β increases from 0.1 to 10. Second, for Q=4, FSMLPstruct with β=10 beats all the methods including the mutual information based method. Analyzing the performances over the train and test sets, for E. coli data FSMLPstruct with β=10 is the winner among the seven models considered.
Tables <ref> and <ref> compare the performance of the proposed method with other methods in terms of different criteria for the Glass dataset on its training and test sets, respectively.
The chosen numbers of features for the Glass data are 4 and 5. The expected nature of decreasing SS with increasing β is clearly observed for Q=5 for both the training and test set. For Q=4, the Glass data also follows the characteristics of the E. coli data of having the same values for FSMLP and FSMLPstruct with β=0.1 in all the ten measures for both training and test set. For Q=4, from β=0.1 onwards, increasing βs produce decreasing SS values and increasing NMI, ARI, and JI values for both training and test datasets. We observe from the Tables <ref> and <ref>, for Q=5, as the β increases from 0 (FSMLP) to 0.1, and then to 1, NMI, ARI, and JI values are increased for both training and test datasets, however at β=10, NMI, ARI, and JI values are decreased compared to β=0.1 and 1. We can conclude that, for Q=4, FSMLPstruct with β=10 gives the best structure preserving performance among the considered models and for Q=5, FSMLPstruct with β=1 is best in structure preservation. In terms of the classification performance measure OCA, FSMLPstruct with β=10 and FSMLPstruct with β=1 show the highest OCA values for the training set and test set respectively, with Q=4. On the other hand, for Q=5, FSMLPstruct with β=10 show the highest OCA values for the training set and FSMLPstruct with β=1 show the highest OCA values for the test set. Inspecting all the performance measure values, we conclude that for the Glass dataset, both FSMLPstruct with β=10 and FSMLPstruct with β=1 are comparatively better in simultaneously preserving both class and cluster structures than the other methods.
The performances of the Ionosphere dataset are recorded in Tables <ref> and <ref> for training and test sets respectively.
For the Ionosphere data set, the number of selected features Q is set as 12 and 17. Here, in all the cases, whenever β is increasing, SS is decreasing and the other structure preserving indices NMI, ARI, and JI are increasing consistently. Unlike the E. coli and Glass data sets, here when β increases from 0 (in FSMLP) to 0.1, the structure preserving metrics including SS shifted in the desired direction in most of the cases and remained the same in some cases. Except for SS, in the other three structure preserving measures, the ICA and F score based methods have performed better than FSMLP and FSMLPstruct in all the cases. Classification performance is good for almost all methods for the Ionosphere data set. In the training set, for both Q=12 and Q=17, an accuracy of 97.46% is reached by the mutual information and F score based methods; however, the FSMLP and FSMLPstruct models have reached more than 96% accuracy in every case. For the test set, all the structure preserving indices are better for FSMLP and FSMLPstruct than for the ICA, F score, and mutual information based methods, although in terms of the classification score OCA, the F score and mutual information based methods have performed marginally better than the FSMLP and FSMLPstruct models. This may have happened because the selected features from the neural network based classifier, which are expected to be discriminatory features, may not be the best for the SVM. Moreover, FSMLPstruct makes a compromise between preserving cluster structure and classifier loss. For the Ionosphere dataset, our proposed models are not the winner. Maybe with a higher β, FSMLPstruct would deliver better scores.
For the Sonar data, the summary of the performances of the training and test data sets in terms of the five measures for two choices of the number of selected features are available in Tables <ref> and <ref>.
We set, Q=21 and 30 for the Sonar data set. In the case of the Sonar data set, not only with increasing β, all the structure preserving indices improve, in case of the training set, FSMLPstruct with β=10 are significantly better than ICA, F score, and mutual information based methods, and FSMLP in all five scores for both the choices of Q. In test set for some cases, FSMLPstruct with β=1 is better than FSMLPstruct with β=10. For the Sonar data set, clearly, the proposed method performed extremely well in terms of classification and clustering performance.
Tables <ref> and <ref> summarize the performances of the proposed method and other comparing methods for training and test sets, respectively for the AR10P data set. The original number of features, P for the AR10P data set is 2400, which is comparatively higher than that of the other two data sets used in this sub-section. The two choices of the number of selected features here are 40 and 60 and these are not approximately 35% and 50% of the original dimension like in previous cases.
The study in <cit.>, proposed a feature selection scheme for redundancy control in features. They reported an average number of selected features of 58.9 without practicing redundancy control and an average number of selected features in the range of 22.8 to 44.2 when practicing redundancy control for AR10P data set. Hence, we choose the number of selected features Q as 40 and 60. From the classification scores shown in Table <ref>, we note that for all the methods for both the choices of Q, classification scores in training set are more than 99%. In the training set, we observe that for FSMLPstruct as β increases SS is decreased in almost all the cases. But for the test set, this is not true. For the other structure-preserving measures for the training set, FSMLPstruct with β=50 is best among all the methods for Q=40 and FSMLPstruct with β=100 is best among all the methods for Q=60. In the test set, all the methods have performed almost the same in terms of the structure-preserving measures. The classification performances of FSMLPstruct are very poor in the test set for AR10P data. The significant differences in training and test OCA values for FSMLPstruct indicate poor generalization of the system. This problem may be addressed by choosing the number of nodes for our MLP based model through cross-validation.
Results from the five data sets clearly establish the benefit of introducing the proposed structure preserving regularizer term E_sammons in the overall loss function (<ref>) of the MLP based embedded feature selection scheme. Next, we shall consider the band (channel) selection problem for hyperspectral satellite images.
§.§ Band selection in hyperspectral images
Let our considered hyperspectral image I be of dimension H× W× P, where H, W, and P are the height, width, and number of spectral bands of the image, respectively. We can represent the pixels of I as 𝐱_i∈ℝ^P: i=1, 2, … ,H× W. Let there be a total of n pixels annotated with C land cover classes. Without any loss of generality, we take the first n pixels, i.e., i=1,2, … n, as the pixels having class labels. Our input data for the land cover classification problem is 𝐗={𝐱_i=(x_i1,x_i2,⋯,x_iP) ∈ℝ^P}_i=1^n. The collection of class labels of 𝐗 is 𝐙={z_i∈{1,2,⋯, C}}_i=1^n, where z_i is the class label corresponding to 𝐱_i. We aim to select a subset of size Q from the original set of bands such that the selected subset performs reasonably well for land cover classification as well as in clustering.
We have performed experiments with three benchmark HSI data sets for the land cover classification problem: Indian pines, Pavia University, and Salinas <cit.>. We use the corrected versions of the Indian pines and Salinas data sets, which have 200 and 204 bands, respectively.
The Pavia University dataset uses 103 bands. The pre-processing of the datasets is the same as in <cit.>, following the code available in <cit.>. For any dataset, its pixel values are scaled to [0,1] using the expression (x-min(x))/(max(x)-min(x)), where x is a pixel value. The max and min are computed over the entire HSI. The data are then mean normalized across each channel by subtracting the channel-wise means. The datasets are partitioned into training and test datasets. For band selection, only the training datasets are fed to the model. For measuring performance, both training and test datasets are used. For splitting the datasets into training and test subsets, we drop the pixels of unknown land-cover type. Let 𝐗 be the set of pixels with known land-cover type. To obtain the training and test sets, we divide 𝐗 into two subsets 𝐀 and 𝐁 such that 𝐀⋃𝐁=𝐗, 𝐀⋂𝐁=ϕ, and 𝐀 and 𝐁 contain, respectively, 25% and 75% of the pixels of 𝐗. We use 𝐀 as the test set. Note that these datasets suffer from the class imbalance problem. To avoid the learning difficulty raised by class imbalance, in the training set we consider the same number of instances from each class. For this, from the subset 𝐁, we randomly select (without replacement) 200 pixels per class. If a class has fewer than 200 instances in 𝐁, we oversample the class by the synthetic minority oversampling technique (SMOTE) <cit.> to gather 200 points. For band selection also, we use the same neural network (Fig. <ref>) with n_H=3 hidden layers. The numbers of hidden nodes in the three hidden layers are 500, 350, and 150, respectively. Here the number of input nodes of the MLP is equal to the number of bands (P). The network weights and the gate parameters λ_js are initialized in the same way as before. For all experiments of the current sub-section, α_1 and α_2 of the error functions in Equations (<ref>) and (<ref>) are set to 5 and 1, respectively. The total number of iterations for training the network is set to 50000. The rest of the experimental settings are kept the same as in the previously mentioned experiments with the two data sets. The number of training instances for the Indian pines and Salinas data sets is 3200, and that for Pavia University is 1800. Both numbers of training instances, n, are high. Computation of E_sammons in (<ref>) would involve computing (3200)^2 or (1800)^2 distances. Adding E_sammons to the overall loss function would cause very intensive computation at each iteration. So, instead of E_sammons, its proposed approximation E_struct defined in (<ref>) is used. In (<ref>), |S_t| is taken as 100. Varying the value of β in (<ref>), we analyse its effect on the OCA, SS, NMI, ARI, and JI. We compute SS, NMI, ARI, and JI as described in Subsec. <ref>. We also use the same clustering algorithm with the same settings as used in Subsec. <ref>.
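To make the sampling-based regularizer concrete, the following minimal sketch shows one way the dynamically sampled Sammon's stress term can be computed at each iteration; it assumes the standard form of Sammon's stress, measures structure between the original inputs and their gated (selected-feature) representation, and all function and variable names are illustrative rather than taken from our implementation.

```python
import torch

def sampled_sammon_stress(x_batch, z_batch, sample_size=100, eps=1e-8):
    """Sammon's stress estimated on a random subsample of the current batch.

    x_batch: original feature vectors, shape (n, P)
    z_batch: gated / selected-feature representation of the same points, shape (n, Q)
    A fresh subsample of `sample_size` points is drawn at every call, so the
    structure term is re-estimated on a different subset S_t at each iteration.
    """
    n = x_batch.shape[0]
    idx = torch.randperm(n)[: min(sample_size, n)]
    x, z = x_batch[idx], z_batch[idx]
    d_x = torch.cdist(x, x)                      # pairwise distances in the original space
    d_z = torch.cdist(z, z)                      # pairwise distances after feature selection
    iu = torch.triu_indices(len(idx), len(idx), offset=1)
    dx, dz = d_x[iu[0], iu[1]], d_z[iu[0], iu[1]]
    # standard Sammon's stress: squared mismatch weighted by the original distances
    return ((dx - dz) ** 2 / (dx + eps)).sum() / (dx.sum() + eps)

# schematic use inside one training step:
#   loss = classification_loss + beta * sampled_sammon_stress(x_batch, gated_x, 100)
```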
Tables <ref> and <ref> summarize the comparative results of FSMLPstruct with FSMLP and other band selection methods, ICA, F score, and mutual information based filter methods on the training and test datasets of Indian pines respectively.
Similarly, Table <ref> and <ref> summarize the comparative results on the training and test datasets of Pavia university.
In this experiment, we have fixed the number of selected bands Q to approximately 35% of the original number of bands P. So, the number of selected bands is 70 for Indian pines and
it is 35 for Pavia University. Tables <ref> and <ref> record the values of the structure-preserving indices and classification scores on Indian pines for different β values in FSMLPstruct (β values in Equation (<ref>)). The considered βs for Indian pines,
are 2, 5, 20, and 50. Note here that, FSMLP is basically FSMLPstruct with β=0. We observe in Tables <ref> and <ref> that both for training and test datasets as the value of β increases for FSMLPstruct (in the last five rows of the corresponding Tables) the value of SS becomes smaller. A similar trend is also observed for the Pavia University data set (here, β varies as 1,1.5,2, and 2.5) for training (Table <ref>) and test (Table <ref>) sets.
For the Pavia university dataset, we have set the values of β in FSMLPstruct as 1,1.5,2, and 2.5. Unlike Indian pines
for Pavia university, we restrict the βs to lower values. This is due to the fact that the number of selected bands for Pavia university is 35 and that for Indian pines
is 70. The smaller the number of bands, the less importance (β) should be given to our structure preserving regularizer in Equation (<ref>) to obtain the desired balance between classification and clustering performance. Table <ref>, which contains the results for the Indian pines training data, clearly shows that both FSMLP and FSMLPstruct are better than the ICA, F-score based, and mutual information based methods in all four structure preserving metrics as well as in terms of the OCA. In Table <ref> we observe that with increasing values of β there is a consistent improvement in the four structure preserving metrics while the OCA values remain at approximately 91%. The results shown in Table <ref> for the Indian pines test set also show that FSMLP and FSMLPstruct perform better than the other three methods in terms of all five metrics. Also, with an increase in β all the structure-preserving metrics improve for FSMLPstruct, except that the value of JI decreases slightly when β goes from 20 to 50. The classification metric OCA is around 78% with bands selected by FSMLPstruct for different choices of β.
It is notable here that the test set is completely unseen in the process of band selection, yet the bands selected by the proposed method provide fairly good results for structure preservation as well as for classification. As observed from Table <ref> and Table <ref> for the Pavia university training and test sets respectively, the lowest (best) SS value among all the compared methods is achieved by the mutual information based filter method. However, for the other four metrics, i.e., NMI, ARI, JI, and OCA, FSMLP and FSMLPstruct show better values. In the case of the Pavia university dataset, NMI, ARI, and JI do not increase consistently with increasing β, but the results indicate that it is possible to find a β (here β=2) for which the structures are preserved better while maintaining a good classification score.
Table <ref> and <ref> summarize the comparative results of training and test datasets of Salinas.
We note from Tables <ref> and <ref> that, for the Salinas dataset, FSMLP and FSMLPstruct are better than the other three methods in all five metrics used. All four structure preserving metric scores of FSMLPstruct are better than or comparable to those of FSMLP, while the classification score OCA stays at approximately 96% for the training dataset and 90% for the test dataset. Tables <ref> and <ref> reveal that when β is increased from 0 to 2, the value of SS increases; however, from β=2 onward, the values of SS decrease. This exception for the Salinas dataset when increasing β from 0 to 2 is possibly due to the fact that we do not use the entire training data in Equation (<ref>), and the use of |S_t|=100 in Equation (<ref>) is not adequate to capture the structure of the data faithfully for the Salinas dataset. As discussed earlier, setting the value of |S_t| is crucial for approximating Equation (<ref>) with Equation (<ref>). We have set |S_t|=100 for all three datasets empirically. However, choosing an optimal value of |S_t| for each dataset is expected to avoid such exceptions.
As we increase the value of β, more emphasis is placed on reducing the structure-preserving term in the loss function of Equation (<ref>). In most cases, increasing β results in a drop in SS. This clearly suggests that the loss function in Equation (<ref>) that we use is a computationally efficient substitute for the original SS defined in Equation (<ref>).
We have included results of the thematic maps (Fig. <ref>) and it reveals that our proposed method is capable of selecting useful bands that can broadly capture the land cover types.
Figure <ref> illustrates thematic maps of the entire region captured in the Indian pines dataset. Figure <ref> shows ground truth labels. Figures <ref>, <ref>, <ref> are thematic maps of the Indian pines data set using the class labels obtained from the SVM classifier trained on the considered training set represented with 70 bands selected by FSMLPstruct with β=0, i.e., by the method FSMLP, and FSMLPstruct considering β=20, and β=50, respectively.
Figure <ref> ensures that even with the increasing stress on the structure-preserving regularizer E_struct, our proposed band selection method FSMLPstruct is able to select bands that maintain a good land cover classification performance.
§ CONCLUSION AND DISCUSSIONS
To the best of our knowledge, a feature selection method that simultaneously cares about class discrimination and structure preservation is not available in the literature. In this study, we have tried to bridge this gap by proposing a neural network-based feature selection method that focuses on both class discrimination and structure preservation. To learn the proposed system, we use Sammon's stress as a regularizer added to the classification loss. For datasets having a large number of instances, the computational overhead associated with Sammon's stress is very high. Consequently, as the structure-preserving regularizer, we use Sammon's stress computed on a sample of the original data, drawn afresh at each iteration of the adaptive gradient descent based learning. In experiments with datasets having a large number of instances, we have demonstrated that this is an effective and computationally efficient implementation of the Sammon's stress based structure-preserving regularizer. Our proposed feature selection scheme is generic, so we have investigated its effectiveness on datasets commonly used for assessing classifiers as well as for a specialized case: band selection in hyperspectral images (HSI). We have applied the feature selection scheme to five real-world datasets that are commonly used for assessing classification. In the context of band selection, we have applied our method to three well-known HSI datasets and compared its performance with three other band selection methods. Based on our experiments, we conclude that the proposed feature selection method produces reasonably good classification and clustering scores on the majority of the data sets, demonstrating that it is capable of selecting a subset of features that is good both for classification and for clustering. Our scheme provides a mechanism to control the number of selected features. The proposed method is easily extendable to other networks such as the Radial Basis Function (RBF) network.
|
http://arxiv.org/abs/2307.05267v1 | 20230710152139 | Kibble-Zurek Mechanism for Nonequilibrium Generation of Magnetic Monopoles in Spin Ices | [
"Zhijie Fan",
"Adolfo del Campo",
"Gia-Wei Chern"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"astro-ph.CO",
"cond-mat.str-el"
] |
Department of Physics, University of Virginia, Charlottesville, VA 22904, USA
Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China
Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China
Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg, Luxembourg
Donostia International Physics Center, E-20018 San Sebastián, Spain
Department of Physics, University of Virginia, Charlottesville, VA 22904, USA
The proliferation of topological defects is a common out-of-equilibrium phenomenon when a system is driven into a phase of broken symmetry. The Kibble-Zurek mechanism (KZM) provides a theoretical framework for the critical dynamics and generation of topological defects in such scenarios. One of the early applications of KZM is the estimation of heavy magnetic monopoles left behind by the cosmological phase transitions in the early universe. The scarcity of such relic monopoles, which contradicts the prediction of KZM, is one of the main motivations for cosmological inflationary theories. On the other hand, magnetic monopoles as emergent quasi-particles have been observed in spin ices, a peculiar class of frustrated magnets that remain disordered at temperatures well below the energy scale of exchange interaction. Here we study the annihilation dynamics of magnetic monopoles when spin ice is cooled to zero temperature in a finite time. Through extensive Glauber dynamics simulations, we find that the density of residual monopole follows a power law dependence on the annealing rate. A kinetic reaction theory that precisely captures the annihilation process from Monte Carlo simulations is developed. We further show that the KZM can be generalized to describe the critical dynamics of spin ice, where the exponent of the power-law behavior is determined by the dynamic critical exponent z and the cooling protocol.
Kibble-Zurek Mechanism for Nonequilibrium Generation
of Magnetic Monopoles in Spin Ices
Gia-Wei Chern
August 12, 2023
==========================================================================================
The existence of a critical point has profound implications for the properties of a system, both in and out of equilibrium. In particular, crossing a continuous phase transition in a finite time leads to the breakdown of adiabatic dynamics. As a result, topological defects proliferate in the driven system. In this context, the Kibble-Zurek mechanism (KZM) provides a reference theoretical framework for critical dynamics <cit.>. It shows that this defect formation is universal and characterized by scaling laws that govern the density of defects and the response time of the driven system. In particular, KZM has been employed to understand the formation of 't Hooft-Polyakov magnetic monopoles, topological defects of non-abelian gauge theories, in the early universe <cit.>. The experimental absence of such fundamental magnetic monopoles led to the ideas of cosmological inflation <cit.>. On the other hand, condensed matter systems support various emergent topological defects and offer a fruitful arena for examining various aspects of KZM.
Universality away from equilibrium can be brought out by considering a system in which different phases of matter are accessible by varying an external control parameter λ (temperature, density, etc.) across a critical value λ_c. A continuous phase transition is characterized by a universal equilibrium scaling law of the correlation length ξ=ξ_0/|ϵ|^ν, where ϵ=(λ-λ_c)/λ_c and ν is the correlation-length critical exponent. Similarly, the equilibrium relaxation time diverges in the neighborhood of the critical point λ_c as τ=τ_0/|ϵ|^zν∼ξ^z, where z is the dynamic critical exponent. This divergence is known as critical slowing down and is responsible for breaking adiabaticity in any finite-time driven protocol λ(t). To appreciate this, it suffices to linearize λ(t) in the neighborhood of λ_c so that ϵ=t/τ_Q, assuming that the critical point is reached at t=0. The KZM predicts that the density of point-like defects in D spatial dimensions scales as n∼ξ̂^-D, where ξ̂ is the non-equilibrium correlation length ξ̂=ξ_0(τ_Q/τ_0)^ν/(1+zν), which exhibits a power-law scaling with the quench time τ_Q that is fixed by the equilibrium critical exponents z and ν. An additional prediction of the KZM is that the characteristic response time, known as the freeze-out time t̂, also scales universally with the quench time τ_Q as t̂=(τ_0τ_Q^zν)^1/(1+zν). These predictions can alternatively be derived using finite-time scaling <cit.>.
The nonequilibrium critical behavior predicted by the KZM has been explored in depth in one-dimensional systems <cit.>. The spatial distribution of topological defects is then highly constrained, and exact analytical descriptions are often possible. Experimental evidence is convincing in the quantum domain <cit.> but remains limited in systems admitting a classical description <cit.>.
Results in higher spatial dimensions show a rich behavior. In theoretical and experimental studies, some settings are consistent with the scaling predictions dictated by the KZM <cit.>, while others display deviations <cit.>. The critical dynamics in systems with a complex vacuum manifold supporting different kinds of topological defects remains poorly understood, as coarsening and multiple channels for defect creation and annihilation can coexist <cit.>.
Spin-ice systems <cit.> are an unusual class of ferromagnet where the magnetic atoms reside on a pyrochlore lattice, a three-dimensional network of corner-sharing tetrahedra as shown in FIG. <ref>(a). For spin ice with interactions restricted to nearest neighbors, the magnet remains in a disordered state down to zero temperature. At first sight, the KZM is not expected to describe the annealing dynamics of such idealized spin ice, which shows no symmetry breaking. However, at temperatures below the energy scale of exchange interaction, spin ice exhibits novel fractionalized quasi-particles which carry a net magnetic charge, essentially behaving as magnetic monopoles <cit.>. Conservation of magnetic charges means that these quasi-particles have to be created and annihilated in pairs. Magnetic monopoles are thus topological defects in an otherwise disordered spin state, in contrast to topological defects due to broken symmetry as in standard KZ scenario. An intriguing question is whether these emergent magnetic monopoles in a quenched spin ice exhibit scaling behaviors and if the KZM can be generalized to describe their nonequilibrium dynamics.
The emergence of magnetic monopoles in spin ice is closely related to the ice rule, a local constraint for ground states. Dominant easy-axis anisotropy forces the magnetic moments to point in the local ⟨ 111 ⟩ directions, allowing us to express spins in terms of Ising variables: 𝐒_i = σ_i μ𝐞̂_i, where μ is the magnitude of the magnetic moment, 𝐞̂_i is the local crystal-field axis, and σ_i = ± 1 indicates the direction of the magnetic moment, which points either from the center of a tetrahedron to the corresponding corner or vice versa. Both the short-range ferromagnetic exchange J_F < 0 and the long-range dipolar interaction contribute to an effective nearest-neighbor antiferromagnetic interaction between the Ising spins ℋ = J ∑_⟨ ij ⟩σ_i σ_j, where J = 1/3 (|J_F| μ^2 + 5μ_0 μ^2 /4π a^3) is the effective antiferromagnetic interaction and a is the nearest-neighbor distance in pyrochlore lattice. We first focus on the annealing dynamics with interactions restricted to nearest neighbors and discuss effects of long-range dipolar interaction later.
It is convenient to express the spin-ice energy in terms of magnetic charges for understanding the ground-state properties and elementary excitations. To this end, we use the dumbbell approximation <cit.> to replace a magnetic moment 𝐒_i (a dipole) by two opposite magnetic charges ±μ/ ℓ at the two ends of a bar of length ℓ, which is set to be the distance between centers of two nearest-neighbor tetrahedra. The effective magnetic charge of a tetrahedron-α is then Q_α = ± (μ/ℓ) ∑_i ∈ασ_i, where the ± sign is used for tetrahedra of opposite orientations, and the sum is over the four spins of the tetrahedron. In terms of magnetic charges, the system energy becomes ℋ = v/2∑_α Q_α^2 up to an irrelevant constant, where the self-energy coefficient v = J ℓ^2/μ^2.
The total energy of a spin ice is thus minimized by any spin configurations with zero magnetic charges Q_α = 0 for all tetrahedra, which form a diamond lattice that is dual to the pyrochlore lattice. The charge neutral condition corresponds to a tetrahedron with two σ=+1 and two σ=-1 Ising spins, known as the 2-in-2-out ice rules <cit.>. While these constraints introduce strong short-range correlations between spins, no long-range order is induced even at zero temperature. The number of ground states satisfying the ice rules grows exponentially with the system size, giving rise to a zero-point entropy, which is well approximated by the Pauling estimate S_ Pauling = (1/2) log(3/2) and verified experimentally in canonical spin ice compounds <cit.>.
Elementary excitations above the hugely degenerate ground-state manifold are represented by tetrahedra that violate the ice rules <cit.>. These correspond to 3-in-1-out/1-in-3-out tetrahedra with a magnetic charge Q = ± q_m, or 4-in/4-out tetrahedra with charge Q = ± 2 q_m, where q_m = 2 μ/ℓ is the elementary unit of magnetic charges in spin ice. These defect tetrahedra, particle-like objects carrying net magnetic charges, are essentially magnetic monopoles. It is also worth noting that the monopoles in spin ice are topological defects as they have to be created and annihilated in pairs. For example, a single spin-flip, or an inverted dumbbell, results in two monopoles of charge Q = ± q_m on adjacent diamond-lattice sites. Crucially, the monopoles can be separated from one another without further violations of local neutrality by flipping a chain of adjacent dumbbells.
The vacuum of these emergent magnetic monopoles corresponds to the highly constrained ground-state manifold. It has been shown that an effective magnetostatic theory can describe this manifold. Indeed, monopole excitations are the source and sink of the emergent magnetic field 𝐁(𝐫). The ice rules, i.e., the absence of monopoles, translate to the divergence-free condition ∇·𝐁 = 0, which in turn gives rise to dipolar-like power-law spin correlations in the degenerate ground-state manifold <cit.>. The monopole density determines the correlation length ξ of this emergent critical state at T → 0: ξ∼ 1/n_m^1/3. As an activation energy Δ E_m = v/2 q_m^2 = 2J is required to create fundamental monopoles of charge ± q_m, the density of such topological defects n_m ∼ e^-2J/T is exponentially suppressed at low temperatures. This results in an equilibrium correlation length ξ∼ e^2J/3T, which diverges exponentially as T → 0, in contrast to the familiar power-law divergence when approaching a conventional critical point.
A similar exponential divergence of correlation length also occurs in the paradigmatic ferromagnetic 1D Ising model. Similar to spin ices, Ising spins remain disordered at any finite temperature. An unconventional critical point at T_c = 0 can be associated with the system, at which spins become fully polarized. The average distance between kink and anti-kink pairs, which are topological defects of an Ising chain, determines the correlation length. The fact that the number of kinks is suppressed at low-T similarly gives rise to a correlation length that grows exponentially as T → 0.
Moreover, the dynamical behavior of the 1D Ising model under the Glauber dynamics can be described by a solvable master equation <cit.>. Notably, the KZ scaling hypothesis has also been verified in the 1D Ising model when the system is slowly annealed to zero temperature <cit.>.
From the viewpoint of an unconventional critical point at T_c = 0, spin ices can be viewed as a different high-dimensional generalization of the 1D Ising chain, to be contrasted with the standard square or cubic Ising models. We note that a 2D analog of the pyrochlore spin ice is given by the antiferromagnetic Ising model on a checkerboard lattice, as shown in FIG. <ref>(b). An artificial version of such 2D spin ice has been realized in arrays of nanomagnets <cit.> and optical traps of soft-matter particles <cit.>. Despite the similarity, we note that while the Ising chain becomes long-range ordered at T=0, both spin ices remain disordered down to zero temperature when interactions are restricted to nearest neighbors. Here we show that KZM, with proper modification, can also be applied to the critical dynamics of spin ice and the annihilation of magnetic monopoles.
§.§ Annealing of spin ice with Glauber dynamics
To describe the nonequilibrium dynamics associated with a temperature quench, we perform Glauber dynamics <cit.> simulations of pyrochlore spin ice with time-dependent temperature T(t). To take into account the stochastic and local nature of the spin dynamics, at each fundamental step, a spin σ_i that is randomly chosen from the system is updated according to the transition probability w(σ_i → -σ_i) = 1/2 [1 - tanh(1/2βΔ E_i) ], where β = 1/T is inverse temperature and Δ E_i is the energy change due to the flipped spin. At low temperatures, a single-spin flip results in mostly either the creation/annihilation of monopole pairs, of which Δ E= ± 4J, or the movement of monopoles for which Δ E = 0. It is thus convenient to introduce a dimensionless parameter γ(t) = tanh[2β(t) J] which controls the transition rate. For example, ignoring the updates that involve double monopoles, the transition rate at low temperatures simplifies to w(σ_i → -σ_i; t) = 1/2 [1-γ(t) σ_i sign(h_i)], where h_i = ∑_j ∈ nn(i)σ_j is the sum of nearest-neighbor Ising spins, and sign(x) is the sign function.
In terms of this control parameter, we first consider the so-called linear cooling schedule: γ(t) = t / τ_Q, where τ_Q denotes the total annealing time <cit.>. With this cooling protocol, the system evolves from T = ∞ at t=0 to zero temperature when t = τ_Q.
The time is incremented by δ t = 1/N_s after each spin update attempt, where N_s = 16 L^3 is the total number of spins in the system. All simulations below were performed on a lattice of L = 10, with N_s =16,000 spins. After one Monte Carlo sweep of the entire system, the time increases by one unit of time Δ t = 1, and the charge statistics is measured. The cooling time varies in the range τ_Q = 10× 2^n with n=0, 1, 2, 3, …, 10. The final results are obtained by averaging the data from 10,000 randomly generated initial states.
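For concreteness, a minimal Python sketch of this annealing loop is given below; the construction of the pyrochlore neighbor and tetrahedron tables is omitted, the acceptance rate is written directly in terms of γ so that it remains finite as T → 0, and all names are illustrative rather than taken from our simulation code.

```python
import numpy as np

def glauber_anneal(neighbors, tetra, tau_Q, seed=0):
    """Single-spin-flip Glauber dynamics with the linear schedule gamma(t) = t/tau_Q.

    neighbors[i] : indices of the 6 nearest neighbors of spin i
    tetra        : array of shape (n_tetra, 4) with the spin indices of each tetrahedron
    (building these tables for the pyrochlore lattice is omitted here).
    One unit of time corresponds to one Monte Carlo sweep of all N_s spins.
    Returns the single-monopole density measured after every sweep.
    """
    rng = np.random.default_rng(seed)
    N_s = len(neighbors)
    sigma = rng.choice([-1, 1], size=N_s)                 # random spins: T = infinity
    n_m = []
    for sweep in range(int(tau_Q)):
        gamma = sweep / tau_Q                             # gamma = tanh(2*beta*J)
        r = (1 + gamma) / (1 - gamma) if gamma < 1 else np.inf   # r = exp(4*beta*J)
        for _ in range(N_s):                              # one sweep = N_s attempted flips
            i = rng.integers(N_s)
            h = sigma[neighbors[i]].sum()                 # local field from the 6 neighbors
            k = (sigma[i] * h) // 2                       # energy change dE = 4*J*k
            if rng.random() < 1.0 / (1.0 + r ** k):       # Glauber rate 1/(1 + exp(beta*dE))
                sigma[i] = -sigma[i]
        charge = np.abs(sigma[tetra].sum(axis=1))         # |sum of 4 spins|: 2 -> monopole, 4 -> double
        n_m.append(np.mean(charge == 2))
    return np.array(n_m)
```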
The time dependence of the elementary-monopole density n_m(t) is shown in FIG. <ref> for algebraic cooling with α = 1 and 2, and varying cooling time τ_Q. As discussed above, these are 3-in/1-out or 1-in/3-out tetrahedra carrying a net charge Q=± q_m. Another type of defect tetrahedron, with all spins pointing in or out, can be viewed as a quasi-bound state of two fundamental monopoles of equal charge. The density n_2m(t) of such double monopoles as a function of time is shown in FIG. <ref>. As these quasi-bound states carry a doubled charge Q = ± 2 q_m, they are energetically more costly, giving rise to a density that is orders of magnitude smaller than that of the fundamental monopoles. The critical dynamics of both types of quasi-particles exhibit a similar overall pattern: an initial slow decay that lasts a long time, followed by a very steep decline at the end of cooling.
To shed light on the annealing dynamics of spin ices, rate equations based on reaction kinetics theory are developed to describe the dynamical evolution of single and double monopoles. For example, the rate equation for magnetic monopoles of charges ± q_m at the late stage of the cooling is
d n_m/dt = 𝒜_0 + 𝒜_1 n_2m + 𝒜_2 n_2m^2 - ℬ n_m^2,
The first three 𝒜 terms denote the various mechanisms for producing ± q_m monopoles: pair-creation from vacuum, decay of a double monopole, and conversion of two double monopoles into fundamental monopoles. The last term accounts for the pair annihilation of ± q_m monopoles. It is worth noting that the leading decay term being quadratic in n_m (with no linear term) is a manifestation of their topological nature.
Through reaction kinetic theory, the coefficient ℬ is uniquely related to the three 𝒜 coefficients, which will be treated as fitting parameters. In practice, these parameters are determined from Glauber dynamics simulations with a small τ_Q =160. The rate equation for the higher-energy double monopoles n_2m can be similarly obtained; see Appendix B for details. Using random spins to set initial conditions, the rate equations are integrated numerically. The results are shown in FIG. <ref> as solid lines. Remarkably, the reaction kinetics based on exactly the same set of parameters gives an excellent overall agreement with the Glauber dynamics simulations for both linear and algebraic α = 2 cooling schedules.
§.§ Kibble-Zurek mechanism for monopoles
Both Monte Carlo simulations and calculations using the rate equations yield a power-law dependence on τ_Q for the residual monopole density at the end of cooling n_m(τ_Q) ∼τ_Q^-μ, where the KZ exponent is μ≈ 0.33 and 0.5 for algebraic cooling with α = 1 and 2, respectively.
Here we show that these scaling behaviors can be explained by a generalized KZM similar to that adopted for the 1D Ising chain <cit.>.
As discussed above, although spin-ice systems exhibit a critical point at T_c =0, the correlation length diverges exponentially ξ∼ n_m^-1/D∼ e^Δ E_m/DT, instead of algebraically as in a conventional critical point. Here the spatial dimension D = 2 and 3 for the checkerboard and pyrochlore spin ice, respectively, and the activation energy Δ E_m = 2J for both cases. On the other hand, the relaxation time τ, which is closely related to the annihilation rate of monopoles, also diverges exponentially as T → 0 <cit.>. Consequently, one can still define a dynamical exponent that relates these two exponentially divergent quantities: τ∼ξ^z.
The relaxation time is shown to follow the Arrhenius law: τ∼ e^Δ E_m/T∼ e^2J/T for spin ice with nearest-neighbor interaction <cit.>. The exponential divergence of the relaxation time has also been explicitly verified from the decay of monopoles in instant-quench simulations; see Appendix A for details. This gives rise to a dynamical exponent z = D for the relaxation of spin ice, which is also explicitly confirmed in our quench simulations. Finally, we note in passing that, despite the similarity between the Ising chain and spin ices, the dynamical exponent for kinks is z = 2 in the 1D Ising model <cit.>.
Central to the KZM is the freeze-out time t̂, measured from the critical point, which signifies the breaking of adiabaticity. Before the freeze-out time during the cooling, the system can reach the quasi-equilibrium state of the instantaneous temperature T(t) due to a short relaxation time τ(T) at the corresponding temperature. Freezing of the system occurs when the exponentially increasing relaxation time is comparable to the time left before reaching the critical point at T_c = 0, i.e.
τ̂ = τ( T(τ_Q - t̂) ) = t̂.
For time t ≳τ_Q - t̂, breaking adiabaticity means that the pair-annihilation of topological defects is suppressed. The number of monopoles at the end of annealing can thus be well approximated by that at the freeze-out time n_m(τ_Q) ∼ n_m(τ_Q - t̂).
Here we demonstrate the determination of t̂ for the general algebraic cooling schedule
1-γ(t) = A (1 - t/τ_Q)^α,
when t →τ_Q. Here A >0 is a positive constant. The linear cooling schedule corresponds to α = 1. Substituting the resultant time-dependent temperature T(t) into Eq. (<ref>), we have t̂ = exp{tanh^-1[1-A(t̂/τ_Q)^α]}. Assuming slow cooling such that τ_Q ≫t̂, we expand the right-hand side of this equation to leading order in t̂/τ_Q, and obtain a scaling relation
t̂∼τ_Q^α / (2 + α).
The residual density of monopoles can then be estimated from the correlation length at the freeze-out time, i.e., n_m(τ_Q) ∼ξ̂^-D∼τ̂^-D/z. Remarkably, the fact that the dynamical exponent is given by the dimension of spin ice z = D means that n_m(τ_Q) ∼τ̂^-1, independent of the dimension. Combining the KZ condition (<ref>) and the scaling of freeze-out time in Eq. (<ref>), we obtain a power-law dependence
n_m(τ_Q) ∼τ_Q^-α / (2 + α),
which is independent of spatial dimensions. For α = 1 and 2, the above formula gives a KZ exponent μ = 1/3 and 1/2, consistent with our numerical results shown in FIG. <ref>. Notably, the monopole densities computed at the freeze-out time and at the end of the cooling exhibit the same power-law behavior. Moreover, we have explicitly verified numerically that the same exponents also apply to the 2D checkerboard spin ice subject to algebraic cooling schedules.
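The freeze-out condition can also be checked numerically without any expansion; the short sketch below solves the transcendental equation for t̂ at several quench times and fits the resulting power law, whose exponent approaches α/(2+α) for slow quenches (the bracketing interval and quench-time range are illustrative choices, with τ_0 set to 1).

```python
import numpy as np
from scipy.optimize import brentq

def freeze_out_time(tau_Q, alpha, A=1.0):
    """Solve t_hat = exp{ arctanh[ 1 - A (t_hat/tau_Q)^alpha ] } numerically."""
    f = lambda t: t - np.exp(np.arctanh(1.0 - A * (t / tau_Q) ** alpha))
    return brentq(f, 1e-6 * tau_Q, tau_Q)        # the root lies between 0 and tau_Q

for alpha in (1, 2):
    taus = np.logspace(3, 7, 9)
    t_hat = np.array([freeze_out_time(tq, alpha) for tq in taus])
    slope = np.polyfit(np.log(taus), -np.log(t_hat), 1)[0]
    # n_m(tau_Q) ~ 1/t_hat, so the fitted slope approaches -alpha/(2+alpha)
    print(alpha, slope, -alpha / (2 + alpha))
```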
§.§ Residual density of double monopoles
The double-monopole density n_2m(t) obtained from Glauber dynamics simulations is shown in FIG. <ref> as a function of time. Again, the simulation results are well captured by the rate equations. While the double monopoles seem to also exhibit power-law behavior both at the freeze-out time and at the end of cooling, the two exponents are different contrary to the case of single monopoles. It is worth noting that the double-monopoles are not topological defects as they can spontaneously decay into two fundamental monopoles. As a result, there is no freezing for the annihilation of double monopoles. However, we can still estimate the density of double monopoles at the freeze-out time t = τ_Q - t̂. As the activation energy of such defects is Δ E_2m = v/2 (2 q_m)^2 = 8J, their equilibrium density scales as n_2m∼ e^- 8 J/T. Since the relaxation time τ∼ e^2 J/T in the adiabatic regime, we have n_2m∼τ^-4. Using the KZ condition (<ref>) that the relaxation time at the freeze-out instant is τ̂ = t̂, the density of double monopole at the freeze-out time is
n_2m(τ_Q - t̂) ∼τ_Q^-4α/(2+α).
This power law agrees very well with the numerical results for both linear cooling and algebraic cooling with α = 2; see FIG. <ref>.
However, as discussed above, since double monopoles are non-topological, they will continue to decay even after the freeze-out instant. Their relaxation in this regime is governed by a rate equation
dn_2m/dt = 3 n_m^2/16 τ_2m e^-4β(t) J - n_2m/τ_2m,
where the temperature-independent τ_2m is the intrinsic lifetime of the double monopole. The first term above describes the combination reaction of two same charge monopoles into a double monopole. The reverse process, corresponding to the second term above, is the dominant contribution to the decay of double monopoles. In this freeze-out regime, the density of fundamental monopoles can be approximated by its value at the freeze-out instant. The depletion of n_m due to the recombination is negligible due to the small exponential factor e^-4β J at very low temperatures. Assuming a short decay time of double monopoles τ_2m≪t̂, the rate equation for the case of algebraic cooling can be integrated to give a residual density
n_2m(τ_Q) ∼τ_Q^-(4α + α^2)/(2 + α).
Details of the derivation is presented in Appendix C. This power law dependence is confirmed by both Glauber dynamics simulations and rate equation, as shown in FIG. <ref>.
§.§ Dynamical scaling
The freeze-out time t̂ and the associated correlation length of KZM also provide a basis for dynamically scaling the nonequilibrium behavior during cooling <cit.>. In particular, here we consider the time-dependent excess monopole density defined as δ n_m(t) = n_m(t) - n_m^ (eq)(t), which represents a genuine nonequilibrium part of the defect density. Here the quasi-equilibrium monopole density is given by the Boltzmann distribution at the instantaneous temperature n_m^ (eq)(t) ∼exp[-β(t) Δ E_m] with the degeneracy factor adequately taken into account. The excess monopole density as a function of time is shown in the inset of FIG. <ref>(a) for various cooling rates. The density of excess monopoles becomes non-zero immediately after the cooling starts, yet remains rather small, of the order of δ n_m ∼ 10^-3, in the initial quasi-adiabatic regime. During this period, the number of excess monopoles increases gradually until the freeze-out time, which is marked by the abrupt, rapid growth of δ n_m. At the end of cooling, when the system reaches zero temperature and n^ (eq)_m = 0, the density of excess defects exhibits the same scaling δ n_m ∼τ_Q^-1/3, as shown by the dashed line.
It is worth noting that the relevant time scale that determines the evolution of the quenched system is the freeze-out time instead of the annealing time τ_Q. The dynamical scaling posits that the density of excess monopoles, normalized by the density of residual defects at the end of cooling, is a universal function of the time left before reaching the critical point, rescaled by the freeze-out time
δ n_m(t) = δ n_m(τ_Q) ℱ(t-τ_Q/t̂).
The critical point t = τ_Q corresponds to ℱ(0) = 1. As shown in FIG. <ref>(b), the rescaled data points from our Glauber dynamics simulations collapse on a universal curve, underscoring a universal nonequilibrium dynamical behavior.
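In practice, the collapse test amounts to a simple rescaling of each cooling curve; a minimal sketch is given below, where the excess-density and freeze-out-time inputs are assumed to have been obtained beforehand (e.g., from the simulations and the KZ condition above).

```python
import numpy as np

def rescale_run(t, n_m, n_eq, tau_Q, t_hat):
    """Rescale one cooling run for the collapse test of the dynamical scaling form.

    t, n_m, n_eq : arrays of times, monopole densities, and quasi-equilibrium
                   densities along the run (n_eq from the instantaneous T(t)).
    Returns x = (t - tau_Q)/t_hat and y = delta_n(t)/delta_n(tau_Q).
    """
    delta = n_m - n_eq
    return (t - tau_Q) / t_hat, delta / delta[-1]

# plotting rescale_run(...) for several tau_Q should give a single master curve F
# if the scaling hypothesis holds
```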
§.§ Exponential cooling protocol
Since either the Glauber or Metropolis dynamics for Ising spins is controlled by the Arrhenius factor e^-4β J, it is natural to define cooling schedules in terms of the dimensionless parameter γ(t). The algebraic cooling protocol (<ref>) corresponds to a physical temperature which vanishes in such a way that its inverse diverges logarithmically T(t) ≈ 4J/α |log(τ_Q - t)| near t = τ_Q. To investigate the annealing dynamics with a linearly decreasing temperature T(t) = T_0 (1-t/τ_Q), we consider cooling procedures where the γ parameter is described by an exponential function <cit.>
1 - γ(t) = B exp{ - b/(1-t/τ_Q)^α},
where b, α > 0 are positive parameters and B = exp(b) is a normalization factor ensuring γ(0) = 0 and γ(τ_Q) = 1. The case of a linearly decreasing temperature corresponds to α = 1.
The monopole density as a function of time is shown in FIG. <ref> for exponential cooling schedule with α = 1 and 3. The dynamical evolution is again well captured by the rate equations. Similar to the case of algebraic cooling, the relaxation of magnetic monopoles is characterized by a slow decay for most of the cooling schedule, followed by an abrupt drop at the late stage. Yet, the relaxation shows a slight deceleration roughly after the freeze-out time scale, to be discussed below. This late-stage slowdown is particularly prominent in the case of α = 3.
The residual monopole density at t = τ_Q again exhibits a power-law dependence on the cooling rate. Here we apply the KZM to understand this scaling relation. First, substituting γ(t) of the exponential cooling procedure into the KZ condition (<ref>), the resultant transcendental equation t̂ = exp{tanh^-1[1 -B exp(-b/(t̂/τ_Q)^α) ] } can be simplified in the slow cooling limit to give a freeze-out time t̂∼τ_Q (lnτ_Q)^-1/α. Using the scaling relation n_m(τ_Q) ∼τ̂^-1∼t̂^-1 discussed previously, we obtain a universal 1/τ_Q power-law relation for the residual monopole density with a logarithmic correction that depends on the parameter α:
n_m(τ_Q) ∼τ_Q^-1ln(τ_Q)^1/α.
As shown in FIG. <ref>, the numerical results agree reasonably well with this KZM prediction.
§.§ Effect of long-range dipolar interaction
In pyrochlore spin-ice compounds, such as Dy_2Ti_2O_7 and Ho_2Ti_2O_7, the rare-earth ions carry a moment of 10 Bohr magnetons, μ≈ 10 μ_B. Long-range dipolar interaction plays a role of equal significance to the nearest-neighbor exchange. As discussed above, the dipolar interaction contributes to the effective nearest-neighbor coupling J between the Ising spins. The dipolar term is expected to slightly modify the activation energy of monopoles Δ E_m. Yet, the long-range Coulomb interaction between magnetic monopoles enhances the critical slowing down <cit.>. This enhancement can be attributed to the formation of locally bound pairs of monopoles, which hinders their diffusive motion <cit.>. As a result, the Arrhenius law cannot account for the entire low temperature relaxation time, including the intermediate quasi-plateau region (below 12 K) and the sharp upturn below 2 K. Nonetheless, the rapid increase of the relaxation time at very low temperatures, which is most relevant for the freezing in KZ scenario, can be approximated by a single exponential τ(T) = τ_0 exp(Δε / T) with an effective barrier energy Δε.
For convenience, we introduce a dimensionless parameter λ = Δ E_m / Δε. The enhanced critical slowdown indicates λ < 1. The equilibrium monopole density is then n_m ∼ξ^-D∼τ^- λ, which implies an effective dynamical exponent is z = D/ λ. Here we consider the effects of dipolar interaction in the case of algebraic cooling schedule Eq. (<ref>). Using the KZ condition (<ref>) to determine the freeze-out time t̂ and the corresponding relaxation time τ̂, the residual monopole density is found to follow a modified scaling relation
n_m(τ_Q) ∼τ_Q^-αλ/(α + 2 λ).
Although the correction caused by the dipolar interaction can be verified using the Glauber dynamics of Ising spins, large-scale simulations would be rather difficult due to the long-range dipolar term. A more feasible approach is to perform quench dynamics of a Coulomb gas of magnetic monopoles moving in a network of Dirac strings on the diamond lattice <cit.>; this is left for future study.
§.§ DISCUSSION AND OUTLOOK
A closely related system is the 2D kagome spin ice where the Ising spins reside on a network of corner-sharing triangles <cit.>. Since there are three spins in a basic triangle simplex, the ground-state manifold is governed by the 3-in-1-out or 1-in-3-out pseudo-ice rules, giving rise to a non-zero magnetic charge at every triangle. Elementary excitations, corresponding to 3-in or 3-out triangles, are not topological since they can decay into the minimum charge state by shedding the extra charge to its neighbor. The charge defects in kagome are similar to the double monopoles in pyrochlore spin ice. Moreover, while spins in the low-temperature ice phase are characterized by strong correlation, there is no emergent critical point at T = 0. As a result, for general cooling schedules, the residual charge defects exhibit a non-power-law dependence on the cooling rate <cit.>.
The KZ mechanism has previously been investigated in an artificial colloidal version of the 2D spin ice with optical traps arranged in a square lattice <cit.>. Contrary to the ideal 2D checkerboard spin ice, the planar geometry breaks the degeneracy of the six ice-rule-obeying vertices, leading to a long-range order with a staggered arrangement of the two lower-energy symmetric 2-in-2-out vertices. Although a power-law behavior of defect vertices was observed in the Langevin dynamics simulation, the obtained exponent is inconsistent with the prediction of KZM for the expected 2D Ising universality class. We believe the discrepancy could be attributed to the fact that charge defects, such as magnetic monopoles, are not necessarily associated with the Ising ordering, as demonstrated in our work. On the other hand, it has been shown that the field-induced liquid-gas transition of magnetic monopoles in pyrochlore spin ice exhibits a dynamical KZ scaling of the 3D Ising universality class <cit.>.
Our results have firmly established the universal nonequilibrium generation of magnetic monopoles in spin ices under slow cooling. Despite the absence of broken symmetries at low temperatures, pyrochlore spin ice and its 2D counterpart exhibit an unconventional critical point at T_c = 0. The correlation length of the highly correlated ice phase at low temperatures is controlled by emergent magnetic monopoles, which are topological defects that violate the two-in-two-out ice rules. Universal scaling relations of residual monopoles predicted by the Kibble-Zurek mechanism are confirmed by Glauber dynamics simulations as well as by a reaction kinetics theory. Our work opens a new avenue to the study of universal annealing dynamics of topological defects in other highly constrained systems.
§ APPENDIX A: RELAXATION TIME OF SPIN ICE
We employ the Glauber dynamics method to simulate instant thermal quench of nearest-neighbor pyrochlore spin ice. The relaxation time of the system can be obtained from the decay of magnetic monopoles after the quench. The simulated system consists of 10^3 cubic unit cells with N = 16 × 10^3 Ising spins. All data points are averaged over 8000 randomly generated initial configurations. We quench the system from infinite temperature to a low temperature T < J at time t = 0.
The averaged monopole density n_m(t) as a function of time is shown in FIG. <ref>(a) for various final temperatures. At large t, the time evolution can be well approximated by an exponential decay
δ n_m(t) = δ n_m(0) exp[-t/τ(T)],
where δ n_m = n_m - n^ eq_m(T) is the density of excess monopoles, and τ(T) is a temperature-dependent relaxation time. The extracted relaxation time is shown in FIG. <ref>(b) as a function of the inverse temperature. The agreement with the straight line, corresponding to 0.338exp(2.00(3) J /T), in the semi-log plot shows that the relaxation time can be well approximated by an Arrhenius law with a barrier energy of 2 J. As discussed in the main text, this result implies that the dynamical exponent z = D for nearest-neighbor spin ices.
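The extraction of τ(T) and of the barrier energy from such quench data amounts to two linear fits in the appropriate variables; a minimal sketch is shown below (restricting the fit to the late-time window, where the single-exponential form holds, is left to the user).

```python
import numpy as np

def relaxation_time(t, n_m, n_eq):
    """Fit the late-time exponential decay delta_n(t) = delta_n(0) exp(-t/tau)
    of an instant-quench run; in practice, restrict t to the window where the
    single-exponential form holds."""
    delta = n_m - n_eq
    keep = delta > 0
    slope, _ = np.polyfit(t[keep], np.log(delta[keep]), 1)
    return -1.0 / slope

def barrier_energy(temperatures, taus):
    """Arrhenius fit: the slope of log(tau) versus 1/T estimates the barrier
    (close to 2J for nearest-neighbor spin ice)."""
    slope, _ = np.polyfit(1.0 / np.asarray(temperatures), np.log(taus), 1)
    return slope
```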
§ APPENDIX B: REACTION KINETICS & RATE EQUATIONS
The reaction kinetics theory in chemistry is adopted to describe the dynamical evolution of the monopoles and double monopoles in spin ices. The basic idea is to describe the time evolution in terms of the number densities of different tetrahedra in a mean-field sense. For convenience, we also borrow terms from chemical reaction theory and use species to refer to tetrahedra of different charges. In pyrochlore spin ice, there are six different species which can be classified into three types. (i) ice-rule obeying tetrahedra with zero net charge; their density is denoted as n_0. (ii) 3-in-1-out and 1-in-3-out tetrahedra corresponding to magnetic monopoles with charge ± q_m. Their density is denoted as n_± 1, respectively. (iii) all-in and all-out tetrahedra with magnetic charges Q = ± 2 q_m; these are double monopoles with density n_±2.
Assuming that the magnet remains spatially homogeneous during relaxation, rate equations are employed to describe the “chemical reactions" of different tetrahedron species. The four different reactions caused by a single spin-flip are summarized in FIG. <ref>. The first type, shown in FIG. <ref>(a), describes the pair-annihilation and creation of magnetic monopoles Q=± q_m. The second reaction shows the annihilation of a double monopole with a single monopole of opposite charge. The third one corresponds to the conversion between a pair of monopoles and a pair of double monopoles. Finally, FIG. <ref>(d) depicts the decay of a double monopole into a pair of equal-charge fundamental monopoles. It is worth noting that single spin-flips with Δ E = 0 are not listed here, as such updates correspond to the diffusive motion of fundamental monopoles.
Next we consider the transition kinetics of a general reaction
Q_A + Q_B ⇌ Q_C + Q_D,
where Q_A, Q_B are the initial reactants, and Q_C, Q_D are the final products. The two-way harpoon indicates that the reaction can occur in both forward and reversed directions. We note that, as these reactions are due to a flipping of magnetic dipole, the total charge is conserved.
It is convenient to choose the forward direction as the one that lowers the total energy, i.e., Δ E < 0.
In other words, the forward reaction is the decay or the annihilation of magnetic charges, while the reversed reaction is the excitation of magnetic charges.
The rate of a reaction is proportional to the densities of the reactants. For example, the transition rate of forward reaction for Eq. (<ref>) is v_+ ∝ n_Q_A n_Q_B. The net rate of reaction in the forward direction is then
v = k_+ n_Q_A n_Q_B - k_- n_Q_C n_Q_D,
where n_Q is the density of tetrahedron with charge Q, and k_± denote the reaction coefficients of forward/reversed reactions, respectively.
These reaction coefficients, however, are not independent. When the system reaches equilibrium, the net change is zero v = 0, which in turn means k_+/k_- = n^ eq_Q_C n^ eq_Q_D / n^ eq_Q_A n^ eq_Q_B.
The equilibrium densities of the various species are given by the Boltzmann distribution, n^ eq_Q = g_Q e^-β E_Q / Z, where Z is the partition function, E_Q is the energy of charge species Q, and g_Q is its degeneracy. We thus have
k_+/k_- = (g_Q_C g_Q_D/g_Q_A g_Q_B) e^-βΔ E,
where Δ E is the energy difference between products and reactants. In general, the reaction coefficients k_± can be expressed as
k_± = A_± e^-βε_±,
where ε_± are the activation energies for the forward/backward reactions, respectively. In chemical reactions that often involve an intermediate state, these energy barriers are the energy differences between the intermediate state and the initial/final state, respectively. The coefficients A_± are now nearly temperature independent. Let E^* be the energy of the intermediate state, we have ε_+ = E^* - (E_Q_A + E_Q_B) and ε_- = E^* - (E_Q_C + E_Q_D). Substitute Eq. (<ref>) into the ration in Eq. (<ref>), and using the fact that ε_+ - ε_- = Δ E, we obtain the ratio between the two pre-factors
A_+/A_- = g_Q_C g_Q_D/g_Q_A g_Q_B.
The overall reaction rate, and in particular its temperature dependence, naturally also depends on the energy level E^* of the intermediate state. However, for Ising spins with Glauber dynamics, the transition rate only depends on the energy difference Δ E, which does not involve any intermediate state. Or equivalently, the initial state with higher energy serves as such intermediate, hence ε_+=0 and ε_- = |Δ E|.
With these simplifications, there is only one independent parameter, e.g., A_-, for the determination of the net reaction rate
v = A_- [ (g_Q_C g_Q_D/g_Q_A g_Q_B) n_Q_A n_Q_B - e^-β|Δ E| n_Q_C n_Q_D ].
When a charge species is involved in multiple reactions simultaneously, the rate equation for its density should include the contributions of every reaction
dn_Q/dt = ∑_m (r_m,Q - s_m,Q) v_m,
where v_m is the rate of the m-th reaction, r_m,Q and s_m,Q are the stoichiometric coefficients species-Q in the reactants and products, respectively, of the m-th reaction.
To further simplify the rate equation, we utilize the charge symmetry of the spin ice system, assume that the charge densities of species with opposite signs are the same, and define defect densities of the system, n_m = (n_+1 + n_-1) and n_2m = (n_+2 + n_-2). And the density of background tetrahedra satisfying the ice rules is n_0 = 1 - n_m - n_2m.
Based on the possible reactions and the properties of charges in pyrochlore spin ice, we can see that the densities of charge defects satisfy the following ordinary differential equations,
dn_m/dt = A_1(2 e^-4β J n_0^2 - 9/8 n_m^2 )
- A_3(1/2 e^-12β J n_m^2 - 8 n_2m^2 )
- A_4 (e^-4β J n_m^2 - 16/3 n_0 n_2m)
dn_2m/dt = A_2 (e^-8β J n_0 n_m - 3 n_m n_2m)
+ A_3(1/2 e^-12β J n_m^2 - 8 n_2m^2 )
+ A_4(1/2e^-4β J n_m^2 - 8/3 n_0 n_2m)
where β(t) = 1/T(t) is the time-dependent inverse temperature. The four coefficients A_1, ⋯, A_4 describe the overall reaction rates of the four reaction processes in FIG. <ref>(a)–(d), and are obtained by fitting with the Glauber dynamics simulations. For the quench simulations, the initially random spins at T = ∞ correspond to the initial conditions n_m(0)=1/2 and n_2m(0)=1/8.
The rate equation (<ref>) for magnetic monopoles at low temperatures is obtained from Eq. (<ref>) using the approximation n_0 = 1-n_m - n_2m≈ 1. The various coefficients there are 𝒜_0 = 2 A_1 e^-4β J, 𝒜_1 = 16 A_4/3, 𝒜_2 = 8 A_3, and ℬ = 9A_1/8 + A_3 e^-12β J/2 + A_4 e^-4β J.
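For completeness, a minimal sketch of how these coupled equations can be integrated for the algebraic cooling schedule is given below; the reaction constants A_1–A_4 are placeholders rather than the fitted values, and the Boltzmann factors are written through the identity e^-4β J = (1-γ)/(1+γ), which follows from γ = tanh(2β J) and stays finite as T → 0.

```python
import numpy as np
from scipy.integrate import solve_ivp

def anneal_rate_equations(tau_Q, alpha=1.0, A1=1.0, A2=1.0, A3=1.0, A4=1.0):
    """Integrate the coupled rate equations above for the algebraic schedule
    1 - gamma(t) = (1 - t/tau_Q)^alpha, starting from random spins
    (n_m(0) = 1/2, n_2m(0) = 1/8).  A1..A4 are placeholder reaction constants."""
    def rhs(t, y):
        n_m, n_2m = y
        n_0 = 1.0 - n_m - n_2m
        gamma = 1.0 - (1.0 - t / tau_Q) ** alpha
        w = (1.0 - gamma) / (1.0 + gamma)            # w = exp(-4*beta*J)
        dn_m = (A1 * (2.0 * w * n_0 ** 2 - 9.0 / 8.0 * n_m ** 2)
                - A3 * (0.5 * w ** 3 * n_m ** 2 - 8.0 * n_2m ** 2)
                - A4 * (w * n_m ** 2 - 16.0 / 3.0 * n_0 * n_2m))
        dn_2m = (A2 * (w ** 2 * n_0 * n_m - 3.0 * n_m * n_2m)
                 + A3 * (0.5 * w ** 3 * n_m ** 2 - 8.0 * n_2m ** 2)
                 + A4 * (0.5 * w * n_m ** 2 - 8.0 / 3.0 * n_0 * n_2m))
        return [dn_m, dn_2m]

    sol = solve_ivp(rhs, (0.0, tau_Q), [0.5, 0.125],
                    method="LSODA", rtol=1e-8, atol=1e-12)
    return sol.t, sol.y[0], sol.y[1]                 # time, n_m(t), n_2m(t)
```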
§.§ Appendix C: Asymptotic solution for residual double monopoles
At low temperatures after the freeze-out time, the rate equation for double monopole is dominated by the A_4 term, corresponding to the reaction shown in FIG. <ref>(d). This is justified mathematically by the fact that e^-4β J≫ e^-8 β J≫ e^-12β J for the source term, and n_0 ≫ n_m ≫ n_2m. The resultant rate equation is shown in Eq. (<ref>) in the main text with the intrinsic lifetime of double monopole given by τ_2m = 3/(8A_4). For convenience, let t_* = τ_Q - t̂ be the freeze-out moment during the cooling. The monopole density remains roughly a constant n̂_m = n_m(t_*) ≈ n_m(τ_Q) for time interval t_* < t < τ_Q. The rate equation then becomes
dn_2m/dt = 3 n̂_m^2/16 τ_2m e^-4β(t) J - n_2m/τ_2m,
Integrating this equation from t_* to t gives
n_2m(t) = n_2m(t_*) e^-(t-t_*)/τ_2m
+ 3 n̂_m^2/16 τ_2m∫_t_*^t e^-4β(s) J e^-(t-s)/τ_2m ds.
Here the density of double monopole n_2m(t_*) at the freeze-out instant is given in Eq. (<ref>). Assuming a fast decay of double monopoles τ_2m≪t̂, the exponential factor of the first term at t = τ_Q is then negligible e^-t̂/τ_2m≪ 1. To evaluate the remaining integral, we introduce a change of variable η = (τ_Q - s)/τ_2m and express the Arrhenius factor e^-4β J in terms of the dimensionless γ. The residual density at the end of cooling becomes
n_2m(τ_Q) = 3 n̂_m^2/16∫_0^t̂/τ_2m [1-γ(η)]/[1+γ(η)] e^-η dη.
In the low temperature regime at the end of cooling, γ≈ 1, we can approximate the denominator by 1+γ≈ 2. Substituting the algebraic cooling protocol (<ref>) for 1-γ, we obtain
n_2m(τ_Q) = 3 A n̂_m^2 /16(τ_2m/τ_Q)^α∫_0^t̂/τ_2mη^α e^-η dη.
The remaining integral can be readily evaluated in the t̂≫τ_2m limit:
n_2m(τ_Q) = 3 A Γ(1+α) n̂_m^2 /16(τ_2m/τ_Q)^α.
where Γ(x) is the Gamma function. Using scaling relation (<ref>) for the residual monopole density n̂_m, the above equation leads to the power-law behavior in Eq. (<ref>).
Acknowledgments. GWC is partially supported by the US Department of Energy Basic Energy Sciences under Contract No. DE-SC0020330. The authors also acknowledge the support of Research Computing at the University of Virginia.
|
http://arxiv.org/abs/2307.05449v1 | 20230711172327 | On the hull and complementarity of one generator quasi-cyclic codes and four-circulant codes | [
"Zohreh Aliabadi",
"Cem Güneri",
"Tekgül Kalaycı"
] | cs.IT | [
"cs.IT",
"math.IT",
"94B05 94B15",
"H.1.1"
] |
|
http://arxiv.org/abs/2307.04803v2 | 20230710180019 | The irreversible relaxation of inflation | [
"Robert Alicki",
"Gabriela Barenboim",
"Alejandro Jenkins"
] | gr-qc | [
"gr-qc",
"astro-ph.CO",
"hep-ph",
"hep-th"
] |
|
http://arxiv.org/abs/2307.05025v1 | 20230711055820 | Unleashing the Potential of Regularization Strategies in Learning with Noisy Labels | [
"Hui Kang",
"Sheng Liu",
"Huaxi Huang",
"Jun Yu",
"Bo Han",
"Dadong Wang",
"Tongliang Liu"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
August 12, 2023
===================
In recent years, research on learning with noisy labels has focused on devising novel algorithms that can achieve robustness to noisy training labels while generalizing to clean data. These algorithms often incorporate sophisticated techniques, such as noise modeling, label correction, and co-training. In this study, we demonstrate that a simple baseline using cross-entropy loss, combined with widely used regularization strategies like learning rate decay, model weights average, and data augmentations, can outperform state-of-the-art methods. Our findings suggest that employing a combination of regularization strategies can be more effective than intricate algorithms in tackling the challenges of learning with noisy labels. While some of these regularization strategies have been utilized in previous noisy label learning research, their full potential has not been thoroughly explored. Our results encourage a reevaluation of benchmarks for learning with noisy labels and prompt reconsideration of the role of specialized learning algorithms designed for training with noisy labels.
§ INTRODUCTION
Deep neural networks (DNNs) have become an essential tool for supervised learning tasks, and achieved remarkable progress in a wide range of pattern recognition tasks <cit.>. These models tend to be trained on large curated datasets with high-quality annotations. Unfortunately, in many real-world applications such datasets are not available. However, datasets with lower-quality annotations, obtained from search engines or web crawlers <cit.>, may be available. When trained on these datasets, DNNs tend to overfit to the noisy labels, a consequence of overparameterization. This overfitting subsequently hampers DNNs' ability to generalize effectively, undermining their overall performance.
Recently, significant advances <cit.> have been made to tackle the label noise problem using the ideas of modeling the noise-label transition matrix <cit.>, label correction/refurbishment <cit.>, co-training <cit.>, etc. These works often proceed with different assumptions regarding noise distributions or present sophisticated techniques to refurbish labels. Some other works are devoted to designing loss functions <cit.> and regularizations <cit.> to prevent overfitting to label noise, and show better generalization ability. It is worth noting that almost all of these methods have adopted some de facto techniques for training modern neural networks, e.g., learning rate decay, data augmentations, weight decay, etc. Focusing on tasks with noisy labels, a natural question arises: how effective are de facto regularization strategies in achieving robustness to label noise?
In this paper, we propose an extremely simple baseline that suggests de facto regularization techniques can be more powerful for noisy classification tasks than the current crop of complicated algorithms dedicated to label noise. Our baseline consists of a set of regularization strategies. First, we use a sharp learning rate decay, where an initially large learning rate is used to avoid fitting noisy data, followed by a sharp decay to learn more complex patterns. Second, we adopt data augmentations with stronger transformation policies, including transformations such as cutout <cit.>, grayscale, color jitter, and Gaussian blur, etc. Third, we average model weights to smooth out variations during training.
Specifically, employing the three aforementioned regularization strategies, we train a DNN model using a standard cross-entropy loss with stochastic gradient descent (SGD) on noisily labeled datasets. We refer to this baseline method as Regularized Cross-Entropy (RegCE). Figure <ref> provides an overview of the proposed RegCE. RegCE offers a contrast to current state-of-the-art methods <cit.>, which often involve more complex mechanisms.
It is worth noting that current prevailing methods frequently adopt semi-supervised learning techniques to enhance performance. Therefore, we investigate the potential benefits of incorporating semi-supervised learning techniques into our method. In this exploration, we classify confident samples as labeled data and the remaining samples as unlabeled data, aligning with the practices employed by current state-of-the-art models.
In summary, our main contributions are as follows:
* We propose a surprisingly simple baseline for learning with noisy labels, which achieves state-of-the-art performance. This baseline suggests that many recent noise-robust algorithms are no better than simply adopting multiple de facto regularization strategies together.
* We show that the simple baseline is a good base model for which the performance can be further improved by semi-supervised learning techniques.
* We support our findings with extensive empirical results on a variety of datasets with synthetic and real-world label noise.
§ RELATED WORK
Learning with Noisy Labels. Recent advances in training with noisy label use varying strategies of noise-modeling and noise-modeling free approaches.
Model-based methods <cit.> strive to establish the relationships between noisy and clean labels, based on the assumption that the noisy label originates from a conditional probability distribution over the true labels. As a result, the primary goal of these methods is to estimate the underlying noise transition probabilities. Ref. <cit.> employed a noise adaptation layer on top of a classification model to learn the transition probabilities. T-revision <cit.> introduced fine-tuned slack variables to estimate the noise transition matrix without anchor points. Additionally, Ref. <cit.> proposed modeling label noise using a sparse over-parameterized term. These methods often assume certain characteristics about the noisy label distribution which may not hold for real-world data.
In contrast to directly modeling the noisy labels, noise-modeling free methods <cit.> aim to leverage the memorization effect of deep models to mitigate the negative impact of the noisy labels. Co-teaching <cit.> employs two deep networks to train each other using small-loss instances in mini-batches. ELR <cit.> proposes regularizing training using model outputs at the early stage of training. PES <cit.> selects different early stopping strategies for various layers of the deep model. Moreover, PADDLES <cit.> proposes early stop at different stages for phase and amplitude spectrums of features. Despite the effectiveness of these approaches, they are often equipped with complex training steps such as two-network training, or dedicated methodology design for training with label noise.
Regularization Techniques for Training Modern Neural Networks. When training deep neural networks, it is often important to control overfitting with the helps from different forms of regularization techniques. Regularization can be implicit and explicit. Explicit regularization techniques, such as dropout <cit.> and weight decay <cit.>, reduce the effective capacity of
the model. When noise is present, the learning rate, "the single most important hyper-parameter" <cit.>, has been shown to be an effective regularizer <cit.>. A large initial learning rate helps escape spurious local minima that do not generalize <cit.> and avoids overfitting noisy data <cit.>. As another explicit regularization, averaging network weights along the training trajectory has been shown to find flatter minima and lead to better generalization <cit.>. In contrast, data augmentation <cit.>, as an implicit regularization, improves generalization by increasing the diversity of training examples without modifying the model's effective capacity <cit.>. In the context of training with label noise, the regularization techniques mentioned above are widely used in state-of-the-art methods <cit.> as default settings, without their full potential being exploited.
§ PROPOSED METHOD
In the context of learning with noisy labels, the true distribution of training data is typically represented by 𝒟 = {(x, y) | x ∈𝒳, y ∈{1, …, K}}. Here, 𝒳 denotes the sample space, and {1, …, K} represents the label space consisting of K classes. However, due to label errors during data collection and dataset construction, the actual distribution of the label space is often unknown. Therefore, we have to rely on a noisy dataset 𝒟̃ = {(x, ỹ) | x ∈𝒳, ỹ ∈{1, …, K}} with corrupted labels ỹ to train the model. Our goal is to develop an algorithm that can learn a robust deep classifier from these noisy data to accurately classify query samples.
In the following, we first elaborate on the regularization strategies we suggest to be incorporated with standard training with cross-entropy loss: sharp learning rate decay schedule, strong and weak augmentations, and model weight moving average. Then, based on this regularized cross-entropy model, we present a learning algorithm that learns with consistent examples and semi-supervised learning techniques.
§.§ Sharp Learning Rate Decay
Neural networks, due to their impressive model capacities, may inadvertently memorize and overfit noisy labels while training. As demonstrated by <cit.>, a larger initial learning rate can effectively suppress this unwanted noise memorization in the input samples. Our empirical observations affirm that this regularization effect also extends to noise present in labels. As depicted in Figure <ref> (b), when a large initial learning rate (set to 0.1) is used, the loss of clean samples decreases while the loss associated with noisy samples remains high. This indicates that the noise is not fully absorbed when the learning rate is large.
However, to optimize the fitting of clean labels and the learning of complex task-specific patterns, a reduction in the learning rate is required. We investigated various learning rate decay schedules commonly utilized in neural network training: cosine annealing decay, step decay, and gradual decay. Our findings suggest that a gradual decrease in the learning rate could lead the network to converge to a non-generalizable, sharp local minimum, as demonstrated in Figure <ref> (c). Interestingly, we found that a sudden, sharp decrease in the learning rate could potentially enable the model to escape these spurious local minima, as depicted in Figure <ref> (c, d).
Hence, we recommend implementing a sharp learning rate decay strategy, where the learning rate is abruptly reduced to 1% of the original value once the training loss has stabilized. This is followed by another swift reduction of 1% after a few epochs of training. Our proposed learning rate scheduler, termed as the sharp learning rate decay schedule, exhibits robustness against label noise, as shown in Figure <ref> (d).
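To make the schedule concrete, the following is a minimal PyTorch-style sketch of the sharp decay we have in mind (not code from an existing implementation); the milestone epochs are placeholder values, since in practice the first drop is triggered once the training loss has stabilized.

import torch

# Hypothetical milestone epochs: the first drop is applied once the training loss
# has stabilized, the second a few epochs later (the values below are placeholders).
SHARP_MILESTONES = [160, 180]

model = torch.nn.Linear(8, 2)  # stand-in for the actual network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
# Each milestone multiplies the learning rate by 0.01, i.e. a sharp drop to 1%.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=SHARP_MILESTONES, gamma=0.01)

for epoch in range(200):
    # ... one epoch of cross-entropy training goes here ...
    scheduler.step()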
§.§ Data Augmentations
Data augmentation, a broadly accepted approach for expanding datasets, is renowned for its ability to enhance model generalization and robustness <cit.>. This technique entails applying diverse transformations to input data, thereby producing realistically viable variations. The gamut of these transformations spans from straightforward alterations like random cropping and flipping to more intricate operations such as cutout <cit.> or image mixing <cit.>. The gradations of these augmentations' complexity and intensity allow us to classify them into two categories: “weak” and “strong”.
Weak augmentations are designed to introduce more subtle variations in the data, thereby ensuring a steady learning trajectory. Conversely, strong augmentations are tailored to incorporate more pronounced data variations, thus challenging the model to glean more robust and generalizable features. Nevertheless, when learning with noisy labels, weak augmentations often fall short in preventing model overfitting, while the distortions induced by strong augmentations could significantly alter image structures, complicating the learning process for the model.
In light of these challenges, we propose an effective augmentation strategy that satisfies two primary conditions: (1) it enhances the model's generalization capabilities and mitigates the overfitting to noisy labels, and (2) it preserves the model's loss modeling and convergence properties without causing harmful impact. Our approach amalgamates the benefits of weak and strong augmentations to strike a balance between these requirements. Weak augmentations contribute to stability, while strong augmentations foster model robustness and resilience against noisy labels. This dual strategy is designed to harness the strengths of both augmentation types, ultimately boosting the model's performance when learning with noisy labels.
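For illustration, the two pipelines can be sketched with torchvision as follows; the specific magnitudes and probabilities are illustrative choices rather than a prescribed policy, and RandomErasing stands in for cutout.

from torchvision import transforms

# Weak view: light geometric perturbations only.
weak_aug = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Strong view: adds photometric distortions and cutout-style occlusion.
strong_aug = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=3),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),  # cutout analogue, applied on the tensor
])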
§.§ Model Weights Average
A unique challenge of learning with noisy labels often arises from the sequential fitting of the model to clean labeled samples, noisy labeled samples, and eventually, overfitting to all noisy samples <cit.>. This progression frequently culminates in suboptimal generalization performance. To counteract this issue, we turned to the model weights averaging technique, derived from bootstrapping <cit.>. This powerful tool enhances model stability and boosts generalization performance <cit.>, by aggregating model weights from different stages of training.
Notably, this regularization draws on the insight from <cit.> that model weights averaging can facilitate the model's escape from local minima and guide it towards wider optima in the loss landscape. These wider optima represent more robust solutions and are associated with enhanced generalization performance. This feature is particularly beneficial when dealing with noisy labels, as it bolsters the model's robustness against label noise.
In this work, we adopted the exponential moving average (EMA) <cit.> as our model weights averaging strategy. As illustrated in Figure <ref>, during training, we updated the online model using gradient back-propagation, while the offline EMA model was updated using the exponential moving average of the online model weights. The offline model is used as our final model for evaluation. This strategy, paying substantial attention to the early model weights primarily learned on clean labeled samples, helps preserve the model's capacity to classify clean samples accurately, a significant advantage when noisy labels are subsequently introduced afterward during training.
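A minimal sketch of the EMA update is given below; the momentum value shown is a placeholder (its effect is examined in the appendix).

import copy
import torch

@torch.no_grad()
def ema_update(online_model, ema_model, momentum=0.999):
    # Offline weights track an exponential moving average of the online weights;
    # the offline (EMA) model is the one used for evaluation.
    for p_ema, p in zip(ema_model.parameters(), online_model.parameters()):
        p_ema.mul_(momentum).add_(p, alpha=1.0 - momentum)
    for b_ema, b in zip(ema_model.buffers(), online_model.buffers()):
        b_ema.copy_(b)

# The EMA model starts as a copy of the online model, e.g.
# ema_model = copy.deepcopy(online_model), and ema_update(...) is called after
# every optimizer step.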
§.§ Combining with Semi-Supervised Learning
Currently, methods proposed for training with noisy labels often incorporate semi-supervised techniques to boost model performance <cit.>. Existing methods usually divide the dataset
into confident samples and unconfident samples based on the model's predictions. Confident samples are used as labeled samples while unconfident samples are used as unlabeled samples. In order to successfully use these techniques, the reliability of a base model's prediction is vital. We show that our simple baseline RegCE is able to provide more accurate information to guide training in semi-supervised learning, which further boosts the model performance.
Following Ref. <cit.>, we obtain two views of the original inputs by data augmentations. We then use the RegCE-trained model to obtain predictions for the two views. We treat samples for which the model's predictions are consistent between the two views and with the label as labeled data, and the remaining inconsistent samples as unlabeled data. Once the data is divided into labeled and unlabeled subsets, we apply the semi-supervised learning approach MixMatch <cit.> to train the final model.
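The selection rule can be sketched as follows; the function and variable names are illustrative and not taken from any existing codebase.

import torch

@torch.no_grad()
def split_confident(model, view_a, view_b, noisy_labels):
    # A sample is "confident" when the predictions on the two augmented views
    # agree with each other and with its (possibly noisy) label; confident samples
    # are treated as labeled data, the rest as unlabeled data for MixMatch.
    pred_a = model(view_a).argmax(dim=1)
    pred_b = model(view_b).argmax(dim=1)
    confident = (pred_a == pred_b) & (pred_a == noisy_labels)
    return confident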
§ EXPERIMENT
§.§ Datasets and Implementation Details
Datasets:
We evaluate our method on two synthetic datasets with different noise types and levels, CIFAR-10 and CIFAR-100<cit.>, as well as three real-world datasets, CIFAR-N<cit.>, Animal-10N<cit.> and Clothing-1M<cit.>. CIFAR-10 and CIFAR-100 both contain 50k training images and 10k testing images, each with a size of 32×32 pixels. CIFAR-10 has 10 classes, while CIFAR-100 contains 100 classes. The original labels of these two datasets are clean. CIFAR-N consists of re-annotated versions of CIFAR-10 and CIFAR-100 by human annotators. Specifically, CIFAR-10N contains three submitted label sets (i.e., Random 1, 2, 3), which are further combined to form an Aggregate and a Worst label. CIFAR-100N contains a single human-annotated label set named Noisy Fine. Animal-10N has 10 animal classes with 50k training images and 5k test images, each with a size of 64×64 pixels. Its estimated noise rate is around 8%. Clothing-1M has 1 million training images and 10k test images with 14 classes, crawled from online shopping websites.
Synthetic Noise:
Following previous works<cit.>, we explore two different types of synthetic noise with different noise levels for both CIFAR-10 and CIFAR-100 datasets. For symmetric label noise in both datasets, each label has the same probability of being flipped to any class, and we randomly select a certain percentage of training data to have their labels flipped, with the range being {20%, 40%, 50%, 60%, 80%}. For asymmetric label noise in CIFAR-10, we follow the labeling rule proposed in <cit.>, where we flip labels between TRUCK → AUTOMOBILE, BIRD → AIRPLANE, DEER → HORSE, and CAT ↔ DOG. We randomly choose 40% of the training data and flip their labels according to the asymmetric labeling rule. For asymmetric label noise in CIFAR-100, we also randomly select 40% of the training data and flip their labels to the next class in the label space.
Baseline Methods:
Semi-supervised learning (SSL) can significantly enhance performance. For fairness, we compare our proposed RegCE with and without SSL techniques separately.
For comparison without SSL, we primarily compare RegCE with several robust loss function methods:
1) Cross entropy loss.
2) Forward<cit.>, which corrects loss values using an estimated noise transition matrix.
3) GCE<cit.>, which takes advantage of both MAE loss and CE loss and designs a robust loss function.
4) Co-teaching<cit.>, which maintains two networks and uses small-loss examples for updates.
5) LIMIT<cit.>, which introduces noise to gradients to avoid memorization.
6) SLN<cit.>, which adds Gaussian noise to noisy labels to combat label noise.
7) SL<cit.>, which employs CE loss and a reverse cross entropy loss (RCE) as a robust loss function.
8) APL (NCE+RCE)<cit.>, which combines two mutually boosted robust loss functions for training.
9) CTRR<cit.>, which proposes a novel contrastive regularization function to address the memorization issue and achieves state-of-the-art.
For comparison with SSL, we utilize the previously mentioned RegCE+semi method, which incorporates MixMatch<cit.>, to compare it with other state-of-the-art LNL methods that are also combined with SSL techniques:
1) DivideMix<cit.>, which first leverages SSL and a mixture model to effectively handle noisy labels.
2) CORES<cit.>, which proposes a novel sample sieve framework to effectively identify and remove noisy instances during training.
3) ELR<cit.>, which utilizes the early-learning phenomenon to counteract the influence of the noisy labels on the gradient of the CE loss.
4) PES<cit.>, which presents a progressive early stopping method to better exploit the memorization effect of DNNs.
5) SOP<cit.>, which models the label noise, and exploit implicit algorithmic regularizations to recover and separate the underlying corruptions.
Implementation Details:
For a balanced comparison, we employed ResNet18<cit.> as the foundational architecture for all datasets. The sole exception was for the CIFAR-N dataset, for which we utilized ResNet34<cit.>. Our training parameters were set as follows: a batch size of 256, an initial learning rate of 0.1, and we implemented the SGD optimizer with a momentum of 0.9 and a weight decay of 5e-4. All experiments were conducted over a total of 200 epochs when not utilizing SSL techniques, and extended to 300 epochs when incorporating SSL techniques. Specifically, for experiments devoid of SSL, the learning rate was drastically reduced to 1% of the initial value when the training loss plateaued, and dropped another 1% after a few subsequent epochs of training. For experiments employing SSL, we initially trained for 100 epochs using RegCE to procure a robust initial model, which was then used to generate high-quality confident samples. These confident samples were then used for an additional 200 epochs of training, using the semi-supervised learning method MixMatch, as proposed in <cit.>. To stabilize the training process across all experiments, we used the exponential moving average (EMA)<cit.>. Reported results are the mean and standard deviation computed over three independent runs. Further implementation details can be found in the Supplementary Material.
§.§ Classification Performance Analysis
Results on Synthetic Datasets:
As shown in Table <ref>, the proposed RegCE method was evaluated on the CIFAR-10 dataset under varying levels of label noise, achieving peak accuracy scores at noise levels of 20% and 40%. Even when the noise level increased to 60%, RegCE sustained its performance and continued to yield results comparable to those of CTRR at an extreme noise level of 80%. When applied to the CIFAR-100 dataset, RegCE consistently outperformed all other methods across all noise levels. For instance, at a noise level of 20%, it achieved an accuracy of 74.84%, markedly higher than CTRR's 70.09%. This performance trend was maintained up to a noise level of 80%, where RegCE achieved an accuracy of 47.07%, exceeding CTRR by more than three percentage points. The results validate the effectiveness of RegCE as a strong baseline in addressing label noise.
As detailed in Table <ref>, under the semi-supervised learning scenario, our method, RegCE+semi, consistently achieved the highest accuracy across all noise levels on both the CIFAR-10 and CIFAR-100 datasets. This result is noteworthy, considering the increased complexity of CIFAR-100 compared to CIFAR-10. For instance, at an extreme noise level of 80% on the CIFAR-100 set, RegCE+semi attained an accuracy of 66.4%, a score notably higher than the next best performing method, PES (semi), which reached 61.6%. These results suggest that our method can effectively utilize more accurate information to guide the training process in a semi-supervised learning environment.
Results on Real-world Datasets:
Table <ref> illustrates the superior performance of our proposed RegCE method on the Animal-10N and Clothing-1M datasets compared to several baseline methods. RegCE consistently outperforms all other methods, achieving the highest test accuracies of 88.31% on Animal-10N and 73.75% on Clothing-1M.
The efficacy of our RegCE+semi method is further highlighted in Table <ref> on the CIFAR-N dataset, under a variety of label noise scenarios. Across all settings, RegCE+semi consistently secures the highest test accuracy. Notably, in the “Worst” scenario, characterized by the highest noise levels, our method excels above all others. Moreover, even under the demanding “Noisy Fine” scenario, RegCE+semi manages to maintain the highest accuracy, demonstrating its robustness and resilience against complex noise structures.
§.§ Ablation Studies
In this section, we conduct ablation studies to analyze how each component affects performance when training with label noise. We study the three regularization strategies of our method: (a) we choose an initially large learning rate and then suddenly decay it to 1% of the original value once the training loss stops improving; (b) we apply strong and weak augmentations together, since we observe that using either one alone results in worse performance; and (c) we average the model's weights during training and use this averaged model for evaluation. We also evaluate the model's sensitivity to hyperparameters such as the initial learning rate and the weight moving average momentum (see details in the appendix).
Table <ref> shows the results of our ablation studies on CIFAR-100 with 80% of symmetric label noise. In general, each regularization strategy provides a performance boost, and the model is best performed with all three strategies used together. Figure <ref> shows that solely using weak augmentation results in overfitting to label noise while strong augmentation results in slow convergence, both result in worse performance.
Figure <ref> illustrates the impact of learning rate decay schedules on models using Grad-CAM. Models robust to label noise consistently maintain Grad-CAM output during training when guided by the true label (middle row). Label noise may drive the model to absorb irrelevant background features. When label noise is minimal, the model accurately focuses on relevant features (top row). However, increasing label noise can misdirect the model's attention, lowering performance. A well-selected decay schedule can mitigate the effect of noisy labels, leading to a more task-focused model. Thus, a robust model can maintain consistent Grad-CAM output during training, emphasizing correct input regions despite label noise. Further research is needed to better understand the relationship between learning rate decay schedules, label noise, and model attention mechanisms for developing more robust models.
Table: Ablation study evaluating the influence of sharp learning rate decay, strong and weak augmentations, and model weights moving average. The mean accuracy computed over three noise realizations is reported.

LR decay        ✗      ✓      ✗      ✗      ✓      ✓      ✗      ✓
Augmentations   ✗      ✗      ✓      ✗      ✓      ✗      ✓      ✓
EMA             ✗      ✗      ✗      ✓      ✗      ✓      ✓      ✓
Test Acc.     31.76  41.15  36.82  40.02  45.05  44.14  43.09  47.82

Figure: The performance comparison of different strategies of augmentations.
§ CONCLUSION
In this paper, we introduced RegCE, an extremely simple baseline method for learning with noisy labels that adopts several conventional regularization strategies. We demonstrated that this straightforward approach could match, and often outperform, the current state-of-the-art complex mechanisms developed specifically to tackle label noise.
Our findings underline the surprising power of de facto regularization techniques such as multi-step learning rate decay, extensive data augmentation, and model weight averaging. These techniques, when combined, showed to be effective in enhancing the robustness of neural networks against label noise, which is often an overlooked aspect in the current literature.
Furthermore, we explored the potential of incorporating semi-supervised learning techniques into our baseline method, which can further boost the performance.
Our extensive empirical results on various datasets, both synthetic and real-world, further corroborate our claims. These findings motivate a rethinking of the prevalent complex approaches toward label noise and call for further exploration of the effectiveness of these simple regularization strategies for training with label noise. Future work could extend the application of our baseline method to other domains and tasks, as well as investigate other potential de facto regularization techniques that could further enhance the robustness of neural networks against label noise.
Limitations The primary limitation is that we only test the method's potential when combined with semi-supervised learning techniques. In fact, the baseline method could potentially leverage other techniques like self-supervised learning, which does not require labels during training. Additionally, it's worth noting that while our method is not explicitly tailored for Convolutional Neural Networks (ConvNets), the majority of our experiments were conducted on ConvNets. Therefore, further investigation is necessary to assess its performance on alternative architectural models such as transformers.
Broader Impact
This work holds promise in pushing forward the progress of machine-learning techniques that can be used in situations where collecting precise annotations is expensive. This is particularly crucial in domains like medicine, where machine learning has immense potential for making a positive impact on society.
The appendix is organized as follows:
* In Appendix <ref>, we report ablation studies on the influence of different hyperparameters of RegCE.
* In Appendix <ref>, we describe the implementation details of our proposed method RegCE, including a description of the data preprocessing and various training settings.
§ ABLATION STUDIES
Figure <ref>(a) presents the correlation between the learning rate (lr) decay rate and the corresponding test accuracy. It shows that, after a period of training with a large learning rate, it becomes crucial to reduce the learning rate significantly. This strategy enables the model to better fit clean labels and discern complex patterns for the task, thereby boosting test accuracy. Figure <ref>(b) focuses on studying the impact of the sharp decay point. It shows that the model requires more training in the early stages to learn the data's features and patterns. Starting the decay too early may prevent the model from fully learning during this critical phase, resulting in decreased performance. Only by initiating the decay at the appropriate time can the model effectively leverage the training data and enhance performance. Figure <ref>(c) shows how the EMA momentum value affects the test accuracy. Selecting an appropriate EMA momentum value is crucial. A low momentum value may cause the model to react too quickly to new model weights, potentially overfitting to recent trends and ignoring longer-term patterns. Conversely, a high momentum value may cause the model to react too slowly, potentially underfitting and not adequately capturing recent trends.
§ TRAINING DETAILS
§.§ Data Preprocessing
Our preprocessing procedures for all datasets incorporate a combination of weak and strong augmentation techniques. Weak augmentations include random cropping and flipping, while strong augmentations comprise cutout, grayscale transformations, color jitter, and Gaussian blur. In addition to these, for experiments involving semi-supervised learning methodologies, we also utilize mixup, a vital element of the MixMatch technique. We perform two rounds of data augmentation on each image in a mini-batch of N images, where N is set to 128 in all experiments. This results in a larger batch size, that is, 2N images, serving as input to the model. These two rounds of augmentation serve different purposes: one for weak augmentations and the other for strong augmentations. As described in the main body of the paper, weak augmentations contribute to stability, while strong augmentations foster model robustness and resilience against label noise.
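For concreteness, one way to produce the two views per item is a small dataset wrapper such as the following sketch (names are illustrative):

from torch.utils.data import Dataset

class TwoViewDataset(Dataset):
    # Wraps a base dataset so that every item yields a weak and a strong view of
    # the same image, which doubles the effective batch size as described above.
    def __init__(self, base, weak_aug, strong_aug):
        self.base, self.weak_aug, self.strong_aug = base, weak_aug, strong_aug

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, label = self.base[idx]
        return self.weak_aug(img), self.strong_aug(img), label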
§.§ Training Settings
For our research, we employed datasets of CIFAR-10, CIFAR-100, CIFAR-N, and Animal-10N, utilizing the entirety of the training data for each epoch. For these datasets, we initiated the process with a randomly initialized model and established an initial learning rate of 0.1. In the case of the Clothing-1M dataset, we aimed for a fair comparison consistent with the methodology used in <cit.>. To this end, we randomly selected a balanced subset of 20.48K images from the noisy training data for every epoch, implemented an ImageNet pre-trained ResNet18 model, and set the initial learning rate to 0.01.
In conducting experiments utilizing semi-supervised learning methodologies, we employ confident samples as labeled data, and unconfident samples as unlabeled data. It's important to note that the proportion of confident to unconfident samples can vary substantially depending on the noise levels in the settings. To tackle the potential issue of class imbalance, we also adopt a class balance strategy, following the approach outlined in <cit.>. The difference is that we use class-balanced samplers for the data loaders, and sample the data to the same amount of the original data. With regards to the subsequent training process, we adhere to the original MixMatch<cit.> settings. Specifically, we set K = 2, T = 0.5, α = 0.75, with λ_u chosen from the set {5, 75, 150}.
|
http://arxiv.org/abs/2307.07458v1 | 20230714163816 | Passage-times for partially-homogeneous reflected random walks on the quadrant | [
"Conrado da Costa",
"Mikhail Menshikov",
"Andrew Wade"
] | math.PR | [
"math.PR",
"60J10 (Primary), 60G50 (Secondary)"
] |
Passage-times for partially-homogeneous reflected random walks on the quadrant

Conrado da Costa, Mikhail Menshikov, Andrew Wade
Department of Mathematical Sciences, Durham University, Upper Mountjoy Campus, Durham DH1 3LE, UK.

August 12, 2023
We consider a random walk on the first quadrant of the square lattice, whose increment law is, roughly speaking, homogeneous along a finite number of half-lines near each of the two boundaries, and hence essentially specified by finitely-many transition laws near each boundary, together with an interior transition law that applies at sufficient distance from both boundaries. Under mild assumptions, in the (most subtle) setting in which the mean drift in the interior is zero, we classify recurrence and transience and provide power-law bounds on tails of passage times; the classification depends on the interior covariance matrix, the (finitely many) drifts near the boundaries, and stationary distributions derived from two one-dimensional Markov chains associated to each of the two boundaries. As an application, we consider reflected random walks related to multidimensional variants of the Lindley process, for which the recurrence question was studied recently by Peigné and Woess (Ann. Appl. Probab. 31, 2021) using different methods, but for which no previous quantitative results on passage-times appear to be known.
Key words: Reflecting random walk; partial homogeneity; recurrence classification; passage-time moments; random walks in wedges; multidimensional Lindley process.
AMS Subject Classification: 60J10 (Primary) 60G50 (Secondary).
§ INTRODUCTION
§.§ Overview
Reflecting random walks on lattice orthants have received much attention, with motivation coming from many problems of applied probability in which coordinate processes are constrained to be non-negative, queueing theory being the primary example.
The interactions of the process with the boundary mean that such processes are typically spatially inhomogeneous, and much work has been done to understand asymptotic behaviour under the assumption of homogeneous behaviour when far from the boundary (in the “interior”). The literature is extensive, and for a selection of surveys and recent work we indicate <cit.>; <ref> below has a more specific discussion.
The classical case of maximal homogeneity in the quadrant, for example, supposes one increment distribution in the interior, one for the horizontal part of the boundary, and one for the vertical part (what happens at the corner is inessential). However, a consequence of such a strong restriction is that interior jumps cannot exceed one unit in length in the directions of the boundaries. Applications motivate replacing the maximal homogeneity assumption with partial homogeneity, which permits finite-range jumps but consequently requires a finite collection of increment distributions and admits more complex behaviour.
In the maximally homogeneous setting, the classification of recurrence, transience, and stability of the reflecting random walk in the quadrant is well understood (we discuss the literature in some more detail in <ref> below).
In the partially homogeneous setting, however,
only the case of non-zero drift in the interior
has been addressed completely. The aim of this paper is to study the zero-drift setting, which is more subtle, for reasons that we explain later (in the maximally homogeneous case, several decades passed between the solution of the non-zero and zero drift cases).
In particular, we establish a classification between recurrence and transience, and, in the recurrent case, provide quantitative bounds on the tails of return times to finite sets, with lower and upper bounds of matching power-law exponent.
Before explaining in full our model and stating our main result (in <ref>, below), in the next section we give an application of our result, which can be stated in isolation with minimal notation and which provides new tail bounds on passage times for two particular classes of reflecting random walks,
which are
multidimensional Lindley processes <cit.>.
§.§ A motivating application
Throughout the paper we write := {0,1,2,…}, := {1,2,3,…}, and := [0,∞).
In the present section, as well as in the more general
setting of <ref> below,
we will study a discrete-time, time-homogeneous Markov chain with state space := ^2.
In the present section, the Markov chains we consider are derived from a sequence of independent random variables, together with some constraints to keep the process in .
Let ζ, ζ_1, ζ_2, … be i.i.d. random variables on ^2.
For z=(x,y) ∈^2 define |z| := (|x|, |y|) ∈^2
and z^+ := ( max{x, 0}, max{ y, 0 } ), i.e.,
the operations act componentwise. Denote by · the Euclidean norm on ^2.
When vectors in ^2 appear in formulas, they are to be interpreted as column vectors,
although, for typographical convenience, we sometimes write them as row vectors. Here are the two Markov chains we consider.
The Lindley random walk with increment distribution ζ
is the Markov process L = (L_n , n ∈) on defined via
L_n+1 := ( L_n + ζ_n+1 )^+ , for n ∈.
The mirror-reflected random walk with increment distribution ζ is
the Markov process M = (M_n , n ∈)
on defined via
M_n+1 := | M_n + ζ_n+1 | , for n ∈.
These two models have been the subject of some interest in the last few years.
The Lindley random walk is a multidimensional analogue of the classical Lindley equation on ,
and has been studied in e.g. <cit.>.
The mirror-reflected random walk has been studied under the generic name “random walk with reflections” (see e.g. <cit.>), but in the present paper we avoid this terminology, since the model presented in <ref>
is also a random walk with reflections, but of a more general type.
Let e_1 := (1,0) and e_2 := (0,1) denote the orthonormal basis vectors of ^2,
and
:= (0,0), the zero vector. We make the following assumption on the distribution of ζ.
hyp:lindley-increments(A)
Suppose that [ ζ^ν ] < ∞ for some ν >2, and ζ =.
Suppose also that there exists R ∈ such that ( e_k^ζ≥ - R ) = 1 for k ∈{1,2}. Finally, suppose that
Σ = [ ζζ^ ] is
positive definite, and denote by ρ = Σ_12/√(Σ_11Σ_22)∈ (-1,1)
the associated correlation coefficient.
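For concreteness, the following NumPy sketch simulates both processes with an illustrative increment law of our own choosing (not taken from elsewhere in the paper) that satisfies hypothesis hyp:lindley-increments with R = 1 and positive correlation:

import numpy as np

rng = np.random.default_rng(1)

# Illustrative increment law: zero mean, jumps bounded below by -1 in each
# coordinate (so R = 1), covariance [[1, 0.2], [0.2, 1]], hence rho = 0.2 > 0.
STEPS = np.array([(-1.0, -1.0), (1.0, 1.0), (-1.0, 1.0), (1.0, -1.0)])
PROBS = np.array([0.3, 0.3, 0.2, 0.2])

def zeta():
    return STEPS[rng.choice(len(STEPS), p=PROBS)]

def simulate(n_steps, reflect):
    z = np.zeros(2)
    path = [z.copy()]
    for _ in range(n_steps):
        z = reflect(z + zeta())   # one step of L or M, depending on reflect
        path.append(z.copy())
    return np.array(path)

lindley_path = simulate(10_000, reflect=lambda z: np.maximum(z, 0.0))  # L_n
mirror_path = simulate(10_000, reflect=np.abs)                         # M_n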
For a process Z = (Z_n , n ∈) taking values in ^2,
define the associated first passage time of Z into a ball of radius r >0 around the origin by
τ_Z (r) := inf{ n ∈ : Z_n ≤ r } .
The main result of this section, Theorem <ref>,
presents for the processes L and M conditions for transience and recurrence (recovering a variant of a recent result of Peigné & Woess <cit.>)
and also asymptotics for the passage times τ_L, τ_M, which we believe to be new.
The result will be obtained as a consequence of our more general result on partially-homogeneous random walks given in <ref> below.
Recall that the (principal branch of the) arc-cosine function arccos : [-1,1] → [0,π] is given by
arccosλ := ∫_λ^1 1/√(1-t^2) t , for -1 ≤λ≤ 1.
Let ζ satisfy hypothesis hyp:lindley-increments,
and consider
either Z=L,
the Lindley random walk,
or Z=M, the mirror-reflected random walk, as at Definitions <ref>
and <ref>, respectively. Suppose that the Markov chain Z is irreducible on . Then it holds that Z is recurrent if ρ > 0 and Z is transient if ρ < 0. Moreover, if ρ >0,
then for every r>0 there exists r_0>r such that τ_Z(r) defined by (<ref>) satisfies the tail asymptotics
lim_n →∞log_z ( τ_Z (r) > n)/log n = - ( 1 - π/2 arccos (- ρ)), for all z ∈ : z > r_0.
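As a concrete numerical illustration of the exponent (a worked value, not an additional result): if ρ = 1/2, then arccos(-ρ) = 2π/3, so the limit in the display above equals -(1 - π/(2 · 2π/3)) = -(1 - 3/4) = -1/4; that is, _z ( τ_Z (r) > n ) = n^{-1/4 + o(1)}, so that, for such starting points z, passage-time moments of order γ are finite for γ < 1/4 and infinite for γ > 1/4.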
For the process L, the recurrence/transience part of Theorem <ref>
is contained in Theorem 1.1(c) of <cit.>,
whose proof combines deep results of Denisov & Wachtel on exit times from cones <cit.>
with a structural result, Lemma 1.3 of <cit.>,
which relates recurrence to integrability of the time for the ^2-valued random walk with increment distribution -ζ to exit from a quadrant.
The fact that the expected time for a random walk to exit ^2 is finite if and only if the increment correlation is negative seems to originate with Klein Haneveld & Pittenger <cit.>: see Example 2.6.6 of <cit.> for a short proof (in the context of Brownian motion, the result goes back further, to Spitzer <cit.>).
The recurrence/transience result for the process L transfers to the process M using Lemma 5.1 of <cit.>.
The phenomenon of recurrence/transience behaviour being driven by covariance is explored in a related but different setting in <cit.>.
As far as we are aware, there are no previous results on the tails of return times to compact sets for either L or M.
§.§ Remarks on the literature
In the maximally homogeneous setting, the classification of recurrence, transience, and stability of the reflecting random walk in the quadrant
for the case with non-zero drift in the interior goes back to Kingman <cit.> in the early 1960s and
Malyshev <cit.> in the early 1970s, with subsequent refinements including <cit.>.
Generically, the classification depends on the drift vector in the interior and the two angles produced by the drifts at the boundaries (the reflections): see <cit.> for overviews. The Foster–Lyapunov approach was
outlined by Kingman <cit.> and (apparently independently) by Malyshev <cit.>, the latter in response to a plea from Kolmogorov for a “purely probabilistic” alternative to the analytic approach from <cit.>.
The recurrence classification for the case of maximally homogeneous reflecting random walk on ^2 with zero-drift in the interior
was given in the early 1990s in <cit.>; see also <cit.>.
In this case, generically, the classification depends on the increment covariance matrix in the interior as well as the two boundary reflection angles. Passage-time moments were studied in the key work <cit.>, with refinements and extensions provided in <cit.>.
Since the 1970s, a goal has been to relax the maximal homogeneity assumption;
Zachary <cit.>, for example,
is motivated to do so in an analysis of loss networks.
Malyshev & Menshikov <cit.>, Fayolle et al. <cit.>, and
Zachary <cit.> all consider partially homogeneous random walks on ^2, with similar homogeneity assumptions to our model of <ref>, but in the case where the drift in the interior is non-zero.
The non-zero drift
case is
simpler (see Remark <ref><ref> below). In the case of zero-drift in the interior, the only work we know that treats the partially homogeneous setting is Menshikov & Petritis <cit.>,
which is close to the present work in aims and philosophy,
but which imposes a simplifying structural assumption (see
Remark <ref><ref> and <ref> below for details).
There is some structural similarity between the present setting and the case of random walks on orthants ^d for d ≥ 3, homogeneous on lower-dimensional faces of the boundary <cit.>;
the classification for the d-dimensional problem requires knowledge of the stationary distribution for some (d-1)-dimensional problems. The case d=3 is somewhat tractable, but d ≥ 4 leads to daunting complexity;
again, the zero-drift case is particularly challenging, and is only partially understood, even in d=3.
Finally, we comment on terminology.
We adopt “partial homogeneity”, following Borovkov <cit.> and Zachary <cit.>, for example. For the same concept,
Fayolle et al. <cit.> use the phrase
“limited state dependency”. “Maximal homogeneity” is used
in <cit.> for what we call partial homogeneity, but in <cit.> the same phrase is used exclusively for the case R=1 (in our notation), and we prefer to maintain the latter distinction.
§ MODEL AND MAIN RESULTS
§.§ Partially homogeneous random walk
In this section, we will study a class of
time-homogeneous Markov chains with state space = ^2 and one-step transition function p:×→ [0,1].
That is, suppose that for every initial state z ∈ we have a probability space (Ω, , _z) equipped with a process Z = (Z_0, Z_1, …) with Z_n = (X_n,Y_n) ∈ for all n ∈, such that
_z (Z_0 = z_0, Z_1 = z_1, …, Z_n = z_n ) = z= z_0 ∏_i=0^n-1 p(z_i,z_i+1), for all n ∈,
with the usual convention that an empty product is 1. For example, Z might be a Markov chain with transition function p on a probability space (Ω, , ) with an arbitrary initial -distribution for Z_0, and then _z ( · ) = ( ·| Z_0 = z) is the probability law conditioned on the initial condition being equal to z.
We write _z for expectation with respect to _z. Write _n := σ (Z_0, Z_1, …, Z_n) ⊆ for the σ-algebra generated by Z_0, …, Z_n, so that (<ref>) implies that _z ( Z_n+1 = w |_n ) = p (Z_n, w), a.s.; in such expressions, where the initial state z plays no role, we often drop the subscript and write just .
For k ∈, define k := { x ∈ : 0 ≤ x ≤ k-1 }
and k := { x ∈ : x ≥ k }.
Our main structural assumption about the law of Z is partial homogeneity; to formulate this, we
partition into 4 regions, defined through an integer parameter R ∈, as follows.
* The interior is := R×R.
* The horizontal boundary is
: = R×R,
and the vertical boundary is : = R×R;
we write := ∪ and call simply the boundary.
* Lastly, the corner is the finite region : = R×R.
Define the first passage time of A ⊆ by
τ_Z(A) : = inf{n ∈ : Z_n ∈ A},
and let B_r := { z ∈^2 z ≤ r} denote
the closed Euclidean ball with centre at the origin and radius r >0;
in the special case A=B_r, we write simply τ_Z (B_r) = τ_Z (r),
to coincide with the notation at (<ref>).
Here are our assumptions.
hyp:partial_homogeneity(H)
Partial homogeneity.
Suppose that for probability mass functions on ^2
denoted by
, p_1 (y ; · ) (y ∈R), and
p_2 (x; · ) (x ∈R),
it holds that, for all z,w ∈,
p (z, w ) = (w-z) if z ∈ ;
p_1 (y ; w-z) if z = (x,y) ∈;
p_2 (x ; w-z) if z = (x,y) ∈.
hyp:irreducible(I)
Irreducibility.
Suppose that (i) for all z, w ∈∪,
there exists n ∈ such that ( Z_n = w, n < τ_Z(∪) | Z_0 = z) >0,
and
(ii) for all z, w ∈∪,
there exists n ∈ such that ( Z_n = w, n < τ_Z ( ∪ )| Z_0 = z) >0.
hyp:moments(M)
Bounded moments.
For as in hyp:partial_homogeneity, there exists > 2 for which ∑_z∈^2 z ^(z) < ∞.
Moreover, for p_1, p_2 as in hyp:partial_homogeneity,
suppose that there exist ν_1, ν_2 > 1 for which ∑_z∈^2 z ^ν_1 p_1 (y; z) < ∞ for all y ∈R
and ∑_z∈^2 z ^ν_2 p_2 (x; z) < ∞ for all x ∈R. Denote ν := min ( , ν_1, ν_2) > 1.
hyp:zero_drift(D)
Zero interior drift. For as in hyp:partial_homogeneity, we have ∑_z ∈^2 z (z) =.
[label=(*)]
* Given z = (R,R) ∈,
we have p(z,z') = 0 unless z' = (x',y')
satisfies x' ≥ 0, y' ≥ 0 (else z' ∉).
Hence the partial homogeneity assumption hyp:partial_homogeneity
implies that jumps towards the boundaries are uniformly bounded, in the sense that satisfies
(x,y) =0 , whenever x ∧ y < - R.
*
The hypothesis hyp:irreducible
ensures that for any two states in ∪_k,
there is a finite-step, positive-probability path between them that avoids the opposite boundary and the corner. A consequence is that for Z all states in ∖ communicate, and hence so do all states in ∖ B_r for r > R √(2); it may be, however, that Z is not irreducible on the whole of (see <ref> below for such an example).
By hypothesis hyp:moments there exists the 2× 2 real matrix
Σ := ∑_z ∈^2 (z) z z^.
Then
Σ is symmetric,
non-negative definite, and, by hypothesis hyp:partial_homogeneity, satisfies
_z [ (Z_1 - Z_0) (Z_1 - Z_0)^ ] = Σ, for all z ∈.
hyp:full_rank(Σ)
We assume that the matrix Σ is positive definite, i.e., that Σ > 0.
Note that hyp:full_rank is not a consequence of hyp:irreducible; one can construct, for example, walks that take jumps only (-1,+1) and (+1,-1) in but which nevertheless satisfy hyp:irreducible.
Assuming hyp:irreducible, all states in ∖ B_r for r > R √(2) communicate
(see Remark <ref><ref>). Hence it holds that either
_z ( τ_Z (B_r) < ∞ ) =1 for all r >R √(2) and every z ∈
(in this case we say Z is recurrent)
or
_z ( τ_Z (B_r) = ∞ ) > 0 for all r >R √(2) and every z ∈∖ B_r
(in which case Z is transient).
Moreover, for γ∈
it is the case that either _z [ τ_Z (r)^γ ] < ∞ for all r >R √(2) and every z∈, or else
_z [ τ_Z(r)^γ ] = ∞ for all r >R √(2) and every z ∈∖ B_r. The aim of this paper is to provide classification criteria for these qualitative and quantitative descriptions of the asymptotic behaviour of passage times.
§.§ Classification theorem and tail asymptotics
Here is our main result for the partially-homogeneous random walk on the quadrant,
which classifies recurrence and transience, and, moreover, quantifies
recurrence in terms of tails of moments of hitting times;
recall the definition of τ_Z(r) from (<ref>).
Suppose that hyp:irreducible, hyp:partial_homogeneity, hyp:moments,
hyp:zero_drift, and hyp:full_rank hold.
Then there exists a characteristic parameter χ∈,
defined via (<ref>) below, for which the following hold.
*
If χ >0 then the random walk Z is recurrent
and if χ <0 then Z is transient.
*
Suppose that χ >0. Then, for
all r>R√(2), >0,
and every z ∈,
there exists C := C(r,,z) < ∞ for which,
_z ( τ_Z (r) > n ) ≤ C n^-ν∧χ/2+,
for all n ∈.
*
Suppose that the constant ν in hyp:moments satisfies ν > 2 ∨χ.
Then, for
all r>0 there exists r_0 > r such that, for every >0,
there exists c > 0 so that,
for all z ∈∖ B_r_0,
_z ( τ_Z (r) > n ) ≥ c n^-χ/2-,
for all n ∈.
[label=(*)]
*
The parameter χ
is a function of the interior covariance matrix Σ
from (<ref>) and the angles subtended at the boundaries
by two effective boundary drift vectors,
_1 and _2 defined at (<ref>) below. A full description follows these remarks.
*
Compactly, we can summarize (<ref>) and (<ref>) through the log-asymptotic
lim_n →∞log_z ( τ_Z(r) > n )/log n = - χ/2 ,
which holds for every r>R√(2)
and every z ∈∖ B_r_0 for appropriate r_0 >r, provided that the increment moment bound hyp:moments holds with ν > 2 ∨χ.
In terms of the passage-time moments defined through (<ref>),
it follows from (<ref>) that for every A ⊆
that contains at least one element of ∖,
and all z ∈,
_z[ τ_Z(A)^γ ] < ∞, for all γ∈(0, ν∧χ2).
On the other hand, provided ν > 2 ∨χ,
it follows from (<ref>) that for
every finite A ⊂ and all z ∈ for which inf_y ∈ A x - y is large enough,
_z[ τ_Z(A)^γ ] = ∞, for all γ > χ2 .
*
If in addition to hyp:irreducible one imposes the condition that there exists >0 such that
inf_u ∈^d_z ( x^ ( Z_1- Z_0 ) ≥ x ) ≥ for all z ∈,
then one can replace r_0 by r in
Theorem <ref><ref>, since there is uniformly positive probability to go from z ∈∖ B_r
to ∖ B_r_0 without visiting B_r. Condition (<ref>) is a form of uniform ellipticity,
and it is implied for z ∈ by hyp:zero_drift and hyp:full_rank, but the hypotheses of Theorem <ref> do guarantee that (<ref>) holds for all z ∈.
*
Menshikov & Petritis (MP) <cit.>
obtain a version of Theorem <ref>
under an additional left-continuity hypothesis that jumps are at most unit size in the south and west directions. (The result of <cit.> is formally weaker than ours, as it establishes non-existence of moments, as at (<ref>), but not the lower tail bound (<ref>).)
The proof of <cit.> has several common elements with ours, as we describe at the appropriate points below,
but takes an algebraic approach to constructing Lyapunov functions, using a Fredholm alternative;
here we take a probabilistic approach, based on the mixing of the process near the boundary and what we call stabilization of the effective boundary drifts (see <ref> for details).
MP remark <cit.> that the left-continuity assumption “is important and cannot be relaxed” in their approach;
indeed, it provides a structural simplification, as we explain in <ref> below, along with further details of the model of <cit.>.
*
As mentioned in <ref>, the papers <cit.> all consider partially homogeneous random walks on ^2, with similar homogeneity assumptions to <ref>, but in the case where the interior drift is non-zero.
The non-zero drift case is simpler than the zero-drift case
treated in the present paper and by MP <cit.>, in two important respects. First,
the interior covariance plays no part in the classification, and second, that the boundary drifts only play a role when the corresponding one-dimensional process is ergodic, i.e., the interior drift has a negative component in the corresponding direction.
The technical implications of these two differences are that in the non-zero drift setting, firstly, one
can use simpler Lyapunov functions (e.g., “almost-linear” functions, to use Malyshev's terminology)
than those based on the harmonic functions from <ref>,
and, secondly, that the
necessary mixing that “homogenizes” the non-homogeneous boundary behaviour (through, e.g.,
the stabilization property)
is provided by the ordinary ergodic theorem, rather than the (deeper) ratio theorem for null-recurrent processes that we use here (see <ref>).
There are three main components that define the characteristic parameter χ at the heart of Theorem <ref>: first, a linear transformation of the
quadrant associated with the covariance matrix Σ; second, two
probability measures π_1, π_2 that describe the relative occupation
distributions of the process within each of the two boundaries; and, third,
the post-transformation reflection angles φ_1, φ_2 between the normal to the respective boundary and the drift vector averaged according to the relevant π_1, π_2.
First we explain the origin of the π_k. To do so,
define vertical and horizontal projections of p,
q_1, q_2 : ×→ [0,1], by setting for i, j ∈,
q_1( i , j ) := ∑_x ∈ p_1 (i ; (x, j ) ) if i ∈R,
∑_x ∈ ( (x, j-i) ) if i ∈R,
and
q_2( i , j ) :=
∑_y ∈ p_2 (i ; (j,y) ) if i ∈R,
∑_y ∈ ( (j-i,y) ) if i ∈R .
Note that ∑_j ∈ q_1 (i , j) =1 for all i ∈ and ∑_j ∈ q_2 (i,j) = 1 for all i ∈.
*
The transition functions q_1 and q_2 define irreducible Markov chains on .
*
There exist unique positive measures π_1, π_2 on that
have π_1 ( R ) = π_2 (R) =1 and
are invariant under q_1, q_2, respectively, i.e.,
for all j ∈,
∑_i ∈π_1 (i) q_1(i,j) = π_1 (j) , and ∑_i ∈π_2 (i) q_2(i,j) = π_2 (j).
*
Define
η_1: = inf{ n ∈ : X_n < R}, and η_2: = inf{ n ∈ : Y_n < R}.
Then, before time η_1,
Y evolves as a Markov chain with transition function q_1, i.e.,
( Y_n+1 = y |_n ) = q_1 ( Y_n , y ), on { n < η_1}.
Similarly,
( X_n+1 = x |_n ) = q_2 ( X_n , x ), on { n < η_2}.
Define μ_1, μ_2 : R→^2, giving the drift vectors in , respectively, by
μ_1 (i) := ∑_z∈^2 z p_1 (i ; z), and μ_2 (i) := ∑_z∈^2 z p_2 (i ; z) , for i ∈R,
and averages of these drifts according to π_1, π_2 given in Proposition <ref><ref> by
_1 := ∑_i∈Rπ_1 (i) μ_1 (i), and _2 := ∑_i∈Rπ_2 (i) μ_2 (i).
Under hypothesis hyp:full_rank, the positive-definite, symmetric matrix Σ has a (unique) positive-definite, symmetric square root Σ^1/2 with an inverse Σ^-1/2. For any orthogonal
matrix O, the 2 × 2 real matrix T = O Σ^-1/2 satisfies T Σ T^ = I (the identity).
If we stipulate that T e_1 is in the e_1 direction, and T e_2 has positive inner product with e_2,
this fixes O and gives us a unique T = (say) such that Σ^ = I
and
e_1^ e_1 > 0, e_2^ e_2 > 0, and e_2^ e_1 = 0 ;
see (<ref>) for an explicit formula for .
We also use to denote the associated linear transformation x ↦ x on ^2. Define the transformed process by _n := Z_n, n∈. Then, for z ∈, by linearity,
the zero-drift assumption hyp:zero_drift implies that
_z [ _1 - _0 ] = _z [ Z_1 - Z_0 ] = , for all z ∈,
(where we emphasize that _z refers still to the initial condition Z_0 = z). Also,
_z [ (_1 - _0) (_1 - _0)^] = _z [ (Z_1 - Z_0) (Z_1 - Z_0)^] ^ = I, for all z ∈,
by (<ref>) and choice of .
To define angles let us denote by
R_θ the anti-clockwise rotation around the origin of ^2 by angle θ, i.e., let
R_θ :^2 →^2 be defined by
R_θ(x,y) := (x cosθ -y sinθ, x sinθ + y
cosθ).
Recall from <ref>
that e_1 = (1,0) and e_2=(0,1) denote the canonical orthonormal basis vectors of ^2,
and is the origin.
Let and := { z ∈^2 : z =1}.
Given z,w∈^2 ∖{}, we say that θ∈ (-2π,2π) is the (oriented) angle between z and w if
z/z = R_θ(w)/w.
Note that if θ is the angle between z,w, then -θ is the angle between w and z.
Define
* φ_0 ∈ (0,π), the angle between e_1 and e_2.
* φ_1 ∈ (-π/2,π/2), the angle between R_π/2( e_1) and _1.
* φ_2 ∈ (-π/2,π/2), the angle between _2 and R_-π/2( e_2).
See Figure <ref> for an illustration.
With this notation in hand, we
define the characteristic parameter χ∈ introduced in Theorem <ref> by
χ : = φ_1 + φ_2/φ_0.
§.§ Organization of the rest of the paper
The rest of the paper is organized around three main elements to the proof of Theorem <ref>.
* We consider the transformation of the process under introduced at (<ref>), in order to obtain a process with canonical interior covariance, which is more convenient later. This results in transforming the quadrant into a wedge, and it is here that the angles that define χ via (<ref>) are exhibited. We describe this in more detail, together with some more explicit formulas than those given in <ref>, in <ref>.
* The contribution of the boundary behaviour to Theorem <ref> is via the reflection angles φ_1, φ_2 that enter into χ at (<ref>). These angles are defined in terms of and _1, _2 from (<ref>), and arise from a form of homogenization over the boundary: we describe and prove the stabilization result that captures this precisely in <ref>.
* The bulk of the proof is then using Foster–Lyapunov results to deduce recurrence, transience, and bounds on tails of passage times. These arguments, concluding in the proof of Theorem <ref>, are presented in detail in <ref>.
After the proof of Theorem <ref> is concluded,
we present in <ref> the proof of Theorem <ref>
and explain in detail how the prior work of Menshikov & Petritis <cit.>
fits as a special case of Theorem <ref> (see Remark <ref><ref>).
Finally, in two short appendices we present the Foster–Lyapunov tools that we apply (<ref>) and give some further intuition on the construction of the Lyapunov functions (<ref>).
Finally, before embarking on the proofs,
we emphasize that we have chosen to present the arguments in detail, even when some technicalities are similar to those from the prototype work on passage-time moments in wedges in <cit.> (we give appropriate references at such points in the text). In part we do so for the convenience of keeping <ref> self contained, but also because the exposition in <cit.> is in some places a little terse; the fact that the relevance of <cit.>
to modern work on Lindley-type processes does not seem to have been easily appreciated, for example, suggests to us that there is value in an exposition that gives both details and intuition, while attempting to remain of a readable length.
§ COVARIANCE, ANGLES, AND LINEAR TRANSFORMATION
This section puts some explicit geometrical formulae to the quantities
needed to define the key parameter χ from (<ref>).
First, recall the definition of the interior covariance matrix Σ
from (<ref>).
Representing Z_n = (X_n,Y_n) in Cartesian coordinates, write
Σ =
[ σ_1^2 κ; κ σ_2^2 ].
That is,
for z ∈,
σ_1^2 = _z [ (X_1 -X_0 )^2 ],
σ_2^2 = _z [ (Y_1 -Y_0 )^2 ],
and
κ = _z [ (X_1 - X_0) (Y_1 -Y_0 ) ].
Define also the correlation coefficient
ρ := ρ (Σ) := κ/√(σ_1^2 σ_2^2).
We choose : ^2→^2 to be the
linear transformation given by
: = 1/s σ_2[ σ_2^2 - κ; 0 s ], where s := √(Σ) = √(σ_1^2 σ_2^2 - κ^2);
note
that, by hypothesis hyp:full_rank, it is assumed
that σ_1, σ_2, and s are all strictly positive,
and that ρ defined at (<ref>) satisfies ρ∈ (-1,1).
Then given by (<ref>) satisfies Σ^ = I
and (<ref>), as we demanded in <ref>.
We write := ( )
for the image of the state space under the transformation ,
and similarly := ( ) and
_k := ( _k).
The probabilistic significance of at (<ref>) comes from the
transformations (<ref>) and (<ref>)
of the interior drift and increment covariance.
From (<ref>),
the basis vectors e_1, e_2 transform via
(e_1) = e_1 σ_2/s and (e_2) =(-κ,s)/(sσ_2) with s>0.
Hence the quadrant ^2 maps to the wedge := (^2) with angular span φ_0 ∈ (0, π) given by
φ_0 : = arccos( - κ/√(κ^2 + s^2)) = arccos (- ρ),
where arccos is defined at (<ref>) and ρ is given by (<ref>).
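As a purely illustrative aside (not part of the formal development), the following Python sketch builds the matrix of (<ref>) from the entries of Σ, checks numerically that it standardises the interior covariance, and evaluates the wedge angle φ_0 = arccos(-ρ). The function names and the example covariance are our own choices, and numpy is assumed to be available.

```python
import numpy as np

def linearising_map(sigma1, sigma2, kappa):
    """Build the 2x2 matrix of the display above, which maps the quadrant to a
    wedge and standardises the interior covariance, together with the wedge
    angle phi_0 = arccos(-rho).  Function and variable names are ours."""
    s = np.sqrt(sigma1**2 * sigma2**2 - kappa**2)      # sqrt(det Sigma), assumed positive
    T = np.array([[sigma2**2, -kappa],
                  [0.0,       s]]) / (s * sigma2)
    rho = kappa / (sigma1 * sigma2)
    return T, np.arccos(-rho)

# example covariance with positive correlation (arbitrary illustrative values)
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
T, phi0 = linearising_map(np.sqrt(Sigma[0, 0]), np.sqrt(Sigma[1, 1]), Sigma[0, 1])
print(np.round(T @ Sigma @ T.T, 10))    # identity: the transformed covariance is standard
print(phi0 > np.pi / 2)                 # obtuse wedge, since rho > 0 here
```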
We define the directions of the transformed
effective boundary drifts by
v_k := (_k)/ (_k) , for k ∈{1,2},
where _1, _2 are defined in (<ref>).
Denote the unit normal vectors at the
boundaries of (^2) by n_1 := e_2 (the normal at the horizontal boundary) and n_2 := (sinφ_0, - cosφ_0). Recall the definition of the rotation R_φ from (<ref>).
For k ∈{1,2}, the
vectors v_k at (<ref>) define reflection angles φ_k ∈ (-π/2, π/2) by the
relations
v_1 =: R_φ_1 n_1, and v_2 =: R_-φ_2 n_2.
In Cartesian coordinates,
v_1 = (-sinφ_1, cosφ_1 ), and
v_2 = ( sin(φ_0 - φ_2), - cos(φ_0 - φ_2)).
One can then obtain explicit
formulae
for φ_1 and φ_2
in terms of the entries of Σ from (<ref>),
the angular span φ_0 of the wedge from (<ref>),
and angles θ_1, θ_2∈ (-π/2,π/2) such that
_1 = _1(-sinθ_1, cosθ_1 ), and _2 = _2(cosθ_2, -sinθ_2 ).
Then expressions for φ_1 and φ_2 can be given by
φ_1 = arctan(σ_2^2 sinθ_1 + κcosθ_1/s cosθ_1),
φ_2 = arctan( - (σ_2^2 cosθ_2 + κsinθ_2) cosφ_0 - s sinθ_2 sinφ_0/(σ_2^2 cosθ_2 + κsinθ_2) sinφ_0 + s sinθ_2 cosφ_0),
where
arctan: → (-π/2,π/2) is defined by
arctan x := ∫_0^x 1/1 + y^2 y .
The transformation of the quadrant and associated angles are depicted in Figure <ref> above.
The following example is relevant for Theorem <ref>.
[Orthogonal reflection]
Suppose that _1, _2
given by (<ref>) are non-zero and satisfy
_1 = _1 e_2 and _2 = _2 e_1. Then θ_1=θ_2=0, i.e., the effective reflection at both boundaries is orthogonal. In this case the formulas (<ref>) and (<ref>) (or some simple linear geometry) show that φ_1 = φ_2 = φ_0- π/2, so χ defined by (<ref>) is χ =2 - π/φ_0. Then χ >0 if and only if φ_0 > π/2, which, by (<ref>), is equivalent to ρ >0.
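Continuing the numerical aside, the next sketch evaluates the reflection angles φ_1, φ_2 via (<ref>)–(<ref>) and the parameter χ of (<ref>) directly from (σ_1, σ_2, κ) and the boundary-drift angles θ_1, θ_2; for θ_1 = θ_2 = 0 it recovers the orthogonal-reflection formula χ = 2 - π/arccos(-ρ) of Example <ref>. Again this is only a sketch under the stated conventions, with names of our own choosing.

```python
import numpy as np

def contrast_parameter(sigma1, sigma2, kappa, theta1, theta2):
    """Evaluate phi_0, phi_1, phi_2 and chi = (phi_1 + phi_2)/phi_0 from the
    interior covariance (sigma1, sigma2, kappa) and the angles theta_1, theta_2
    of the effective boundary drifts, following the displays above."""
    s = np.sqrt(sigma1**2 * sigma2**2 - kappa**2)
    phi0 = np.arccos(-kappa / (sigma1 * sigma2))
    phi1 = np.arctan((sigma2**2 * np.sin(theta1) + kappa * np.cos(theta1))
                     / (s * np.cos(theta1)))
    a = sigma2**2 * np.cos(theta2) + kappa * np.sin(theta2)
    phi2 = np.arctan(-(a * np.cos(phi0) - s * np.sin(theta2) * np.sin(phi0))
                     / (a * np.sin(phi0) + s * np.sin(theta2) * np.cos(phi0)))
    return phi0, phi1, phi2, (phi1 + phi2) / phi0

# orthogonal reflection (theta_1 = theta_2 = 0): chi should equal 2 - pi/arccos(-rho)
phi0, phi1, phi2, chi = contrast_parameter(1.0, 1.0, 0.5, 0.0, 0.0)
assert np.isclose(phi1, phi0 - np.pi / 2) and np.isclose(phi2, phi0 - np.pi / 2)
print(round(chi, 6), round(2 - np.pi / phi0, 6))    # both equal 0.5 here, so recurrence
```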
§ STABILIZATION OF BOUNDARY DRIFTS
The effective reflection vectors
_1 and _2 defined at (<ref>)
represent
drift that the process accumulates when it is near the boundary.
The purpose of this section is to
present and prove Proposition <ref>,
which formalizes this notion of stabilization of boundary drifts.
Define the normalized n-step drift (n ∈) of the process Z started from
z ∈
by
d^n_k (z) := _z [ Z_n -z ]/_z [∑_ℓ=0^n-1Z_ℓ∈_k], where k ∈{1,2}.
We establish stabilization of d^n_k (z)
as n →∞, in the following sense.
For any >0 there is N ∈ such that for each k ∈{1,2} and for every n >N,
d_k^n (z) -_k < , for all z ∈_k with z > 2 R n.
The idea behind Proposition <ref> is that if z ∈ (say) with z large compared to n,
then over n steps, the jumps bound (<ref>) ensures that Z cannot reach _2 and the vertical component Y of Z up to time n evolves as an irreducible Markov chain on (see Proposition <ref>).
By hyp:zero_drift and hyp:partial_homogeneity, the numerator of (<ref>) changes (by μ_1(Y)) only when Y∈R, and so d^n_k (z) is
a ratio of expectations of an additive functional of Y and an occupation time. Proposition <ref> will then boil down to an application of the Doeblin ratio limit theorem,
which shows such a ratio is, for large n, close to a corresponding
ratio in terms of the invariant measure π_1 of Y. We introduce some preparatory notation.
Consider an irreducible, recurrent Markov chain ξ = (ξ_0,ξ_1,…) on .
Fix R ∈.
By a result of Derman (see e.g. <cit.>), there exists an invariant measure ϖ
(unique up to a scalar factor) for which ϖ (y) > 0 for all y ∈.
Define
π (y) := ϖ(y)/ϖ(R), for all y ∈R.
Then π is well defined (since ϖ is unique up to a constant factor) and is a probability measure on R with min_y ∈Rπ (y) >0.
Define occupation times associated with ξ by
L^ξ_n (y) := ∑_ℓ =0^n-1ξ_ℓ = y,
and
L^ξ_n (R ) := ∑_ℓ =0^n-1ξ_ℓ∈R =∑_y ∈R L^ξ_n (y).
The following ratio-ergodic lemma for additive functions of recurrent (but not necessarily positive recurrent) Markov chains is central to our stabilization property.
Let ξ be
an irreducible, recurrent Markov chain on , fix R ∈,
and define π by (<ref>) and L_n^ξ by (<ref>).
Take f : R→.
Then, for every y ∈,
lim_n →∞[ ∑_ℓ =0^n-1 f (ξ_ℓ ) ξ_ℓ∈Rξ_0 = y ]/[ L^ξ_n (R) ξ_0 = y ] = ∑_x ∈R f (x) π (x) .
Suppose that ξ_0 = y ∈.
By Fubini's theorem, linearity, and (<ref>), we have that
[∑_ℓ=0^n-1 f (ξ_ℓ) ξ_ℓ∈Rξ_0 = y]
= ∑_x ∈R f(x) [L^ξ_n (x)ξ_0 = y].
By irreducibility, the Doeblin ratio limit theorem (see <cit.> or <cit.>) says that, for π the probability measure defined by (<ref>),
lim_n →∞[ L^ξ_n(x) ξ_0 = y ]/[ L^ξ_n(R) ξ_0 = y ] = π (x) , for every y ∈.
Dividing (<ref>) by [ L^ξ_n (R) ξ_0 = y ], applying (<ref>), and exchanging the limit with the finite sum over x ∈R, we obtain (<ref>).
The measure π as given by (<ref>)
can be interpreted in terms of an embedded Markov chain.
An invariant measure ϖ(y) for ξ is furnished by
the expected number of visits to y between visits to 0, i.e., for
τ := inf{ n ∈ : ξ_n = 0},
ϖ (y) = [ ∑_n=0^τ-1ξ_n = y ξ_0 = 0 ].
Define stopping times γ_0 := 0 and γ_k := inf{ n > γ_k-1 : ξ_n ∈R}
for k ∈. Then _k := ξ_γ_k defines a Markov chain on R (the embedded chain). It is not hard to see that
inherits irreducibility from ξ, and so, since R is finite,
is positive recurrent and has a unique stationary distribution.
Set τ' := inf{ k ∈ : _k = 0 }.
An invariant measure for
is given as a function of state u by
[ ∑_ℓ= 0^τ'-1_ℓ = u ξ'_0 = 0 ],
but, by construction of , this is the same as ϖ (u). Thus
the unique stationary distribution for is proportional to ϖ, and hence it must be π as given by (<ref>).
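The ratio limit and the embedded-chain interpretation above can be illustrated numerically. In the following sketch (a purely illustrative aside) we simulate a toy recurrent birth–death chain on the non-negative integers, which is our own example and not the chain Y of the model, record its occupation of {0,…,R-1}, and compare the resulting occupation ratios with the invariant measure computed by detailed balance; the agreement illustrates (<ref>).

```python
import numpy as np

rng = np.random.default_rng(1)

def step(y):
    """One step of a toy recurrent chain on {0,1,2,...}: from 0 it moves to 1,
    otherwise down with probability 0.55 and up with probability 0.45.  This
    chain is our own illustration, not the chain Y of the model."""
    if y == 0:
        return 1
    return y + (1 if rng.random() < 0.45 else -1)

R, n_steps, y = 3, 200_000, 5
visits = np.zeros(R)
for _ in range(n_steps):
    if y < R:
        visits[y] += 1          # occupation of {0,...,R-1}: these are the embedded-chain states
    y = step(y)

occupation_ratios = visits / visits.sum()
w = np.array([1.0, 1 / 0.55, (1 / 0.55) * (0.45 / 0.55)])    # invariant measure, detailed balance
print(np.round(occupation_ratios, 3), np.round(w / w.sum(), 3))   # the two should agree
```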
We write the proof for the case k=1; the case k=2 is similar.
Let Z be the
process with transition law p as defined in (<ref>), started from Z_0 = z ∈.
Fix n ∈, n >2.
Recall the definition of η_1 given in (<ref>). By (<ref>), we have _z ( e_1^ Z_ℓ > e_1^ z - ℓ R ) =1, for all ℓ∈.
In particular, if z ∈ has z > (n+2) R, then e_1^ z > (n+1) R, by the triangle inequality,
and hence e_1^ Z_ℓ > R for all ℓ≤ n. Hence
_z ( η_1> n ) = 1, for all z ∈ for which z > 2n R .
Recalling the
definition of the drift μ_1 from (<ref>), we observe
that by hyp:zero_drift, for all ℓ < n,
_z [ Z_ℓ+1 -Z_ℓ| Z_ℓ = (x,y) ] = ∑_w ∈^2 w p_1 (y ; w) y ∈R = μ_1 (y) y ∈R.
By
Proposition <ref>
and (<ref>), if z ∈ and z > 2nR, then Y_ℓ, ℓ≤ n, evolves as a Markov chain on with transition law q_1 as defined in (<ref>).
Define the associated occupation time L_n^Y as in (<ref>).
The
numerator of (<ref>) can
be expressed as
∑_ℓ = 0^n-1_z [ Z_ℓ+1 -Z_ℓ] =∑_(x,u) ∈^2∑_ℓ = 0^n-1_z [ Z_ℓ+1 -Z_ℓ| Z_ℓ = (x,u) ] _z [ Z_ℓ = (x,u) ]
=∑_u ∈Rμ_1 (u) ∑_ℓ = 0^n-1_z [ Y_ℓ = u ] =
∑_u ∈Rμ_1(u) _z[ L^Y_n(u)].
The denominator of (<ref>) can
be expressed as
∑_ℓ=0^n-1_z[Z_ℓ∈]
= ∑_ℓ=0^n-1_z[Y_ℓ∈R]
= ∑_u ∈R∑_ℓ =0^n-1_z [Y_ℓ =u] = _z [L^Y_n(R)].
In (<ref>)–(<ref>),
Y can be viewed as an irreducible, recurrent Markov process on with transition law q_1
whose stationary distribution is π_1 (by Proposition <ref>).
Therefore, by (<ref>), (<ref>),
we can apply Lemma <ref> to conclude that, for every >0 there is N ∈ such that for all
n ≥ N, and all z=(x,y) ∈ with z > 2Rn,
d_1^n(z) - _1 = ∑_u ∈Rμ_1(u) (_z[
L^Y_n(u)]/_z [L^Y_n(R)]- π_1 (u)) ≤.
Since >0 was arbitrary, the proof is complete.
We conclude this section with a proof of Proposition <ref>.
We first prove part <ref>. Observe that, if (x,i) ∈R×,
( Y_1 = j | Z_0 = (x,i) ) = ∑_x' ∈ ( Z_1 = (x', j) | Z_0 = (x,i) ) = q_1 (i, j),
by (<ref>), and this verifies (<ref>). A similar argument yields (<ref>). This proves <ref>.
Moreover,
if we define q_1^(1) (i,j ) := q_1 (i,j) and, for n ∈,
q_1^(n+1) (i,j) := ∑_k ∈ q_1 (i, k) q_1^(n) (k, j) ,
we claim that, for all n ∈ and all i,j ∈,
( Y_n = j, η_1 ≥ n | Z_0 = (x,i) ) ≤ q_1^(n) (i,j), for all x ∈R.
The n=1 case of (<ref>)
is true (with equality) by (<ref>);
an induction of Chapman–Kolmogorov type establishes the general case.
Indeed, from
the Markov property of Z,
( Y_n+1 = j , η_1 ≥ n+1 | Z_0 = (x,i ) )
= ∑_x' ∈R∑_k ∈ [ ( Y_n+1 = j |_n ) Z_n = (x',k)η_1 ≥ n | Z_0 = (x,i) ]
= ∑_x' ∈R∑_k ∈ q_1 (k, j) ( Z_n = (x',k), η_1 ≥ n | Z_0 = (x,i) ) ,
using (<ref>). Hence
( Y_n+1 = j , η_1 ≥ n+1 | Z_0 = (x,i ) )
≤∑_k ∈ q_1 (k, j) ( Y_n = k, η_1 ≥ n | Z_0 = (x,i) ) ,
and this provides the inductive step to verify (<ref>).
Fix i, j ∈ and take x, y ≥ R. From
the hypothesis hyp:irreducible on p_1, it follows that there exists n = n(i,j) ∈ for which
( Y_n = j, η_1 ≥ n | Z_0 = (x,i) ) ≥ ( Z_n = (y,j), n < τ_Z ( ∪ ) | Z_0 = (x,i) ) > 0 .
Combining this with (<ref>) verifies that for every i, j ∈ there exists n ∈
for which q_1^(n) (i,j) >0. Hence q_1 defines an irreducible Markov chain on ; a similar argument
applies to q_2.
This proves part <ref>.
Finally, part <ref> follows because the existence of an invariant measure, unique up to scaling, is guaranteed by the result of Derman <cit.> that we already used earlier in this section.
§ BOUNDS ON TAILS OF PASSAGE TIMES
§.§ Overview of the proofs
To prove the results in <ref>, we study the process Z on
using Lyapunov functions f : ^2 →,
such that we can apply results for -valued adapted processes satisfying suitable sub/super-martingale conditions. For these results to be most effective, one requires good control over the one-dimensional process f(Z), and, in particular, its expected (conditional) increments. An effective f, therefore, is one that homogenizes
the spatial heterogeneity of the process Z (zero drift in , reflection in ).
One way to achieve this is to consider an harmonic function h : ^2 →
applied to the transformed process = (Z) (see <ref>),
where is the linear transformation given in (<ref>).
Then h∘
will be harmonic for Brownian motion with infinitesimal covariance Σ, and so h∘ (Z) will be
approximately a martingale for Z in the interior .
If we can also arrange for h to be approximately a martingale for in the boundaries and (see <ref>), we will have a process that is almost a martingale everywhere in the state space (where the error implicit in “almost” is reduced with distance from the origin). By considering powers α >1 or α < 1, we get Lyapunov functions f = h^α such that f() is a sub/super-martingale, respectively, outside of a bounded set.
In the case of a simple boundary (i.e., R=1), one can achieve h() being almost a martingale
in
by
choosing h so
that the reflection vectors (i.e., one-step boundary drifts)
are tangent to the level curves of h.
For our complex boundary (R>1), there are, typically, many reflection vectors (μ_k(u) for each u ∈R)
which means that this approach is impossible. However, if, rather than the one-step drifts, we take drifts
over N-steps for large enough N, the mixing of the process in leads, via the stabilization
property described in <ref>, to well-defined (approximate) effective drift vectors _k for which the above strategy of matching level curves can be applied, with appropriate modifications.
Thus we work not with f applied directly to , but to a time-changed version of in which, in , time is compressed: this is described in the next subsection.
§.§ Time compression at the boundary
As described in <ref>,
to exploit stabilization (<ref>) we
consider a time-changed process ^N = ( ^N_n, n ∈) and
will deduce properties of the passage times of from knowledge of the passage times of ^N.
The (stochastic) time-change is given by a compression factor N ∈ and T_N : →
defined as T_N(0): = 0 and for n ∈,
T_N(n) : =
T_N(n-1) + 1 if Z_T_N(n-1)∈,
T_N(n-1) + N if Z_T_N(n-1)∉.
The process ^N is defined by setting
^N_n : = _T_N(n) = ( Z_T_N(n) ), for all n ∈.
From (<ref>), T_N(n) - T_N (n-1) ∈{1,N};
hence T_N is a bounded time-change in the sense
n ≤ T_N (n) ≤ N n, a.s., for all n ∈.
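For concreteness, the bookkeeping behind (<ref>)–(<ref>) can be expressed in a few lines of Python; this is only an illustrative sketch, in which the callable `in_interior` and all names are ours.

```python
def compressed_times(in_interior, N, n_steps):
    """Indices T_N(0), ..., T_N(n_steps) of the time-changed process: the
    compressed clock advances by one original step from interior points and by
    N original steps from boundary points, so n <= T_N(n) <= N n automatically.
    The callable in_interior(t) and all names are ours, purely for illustration."""
    times = [0]
    for _ in range(n_steps):
        t = times[-1]
        times.append(t + (1 if in_interior(t) else N))
    return times

# toy usage: pretend the path sits in the boundary layer whenever t is a multiple of 7
print(compressed_times(lambda t: t % 7 != 0, N=5, n_steps=10))
```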
The next result shows that tails of return times for Z and ^N are comparable, up to constants (depending on Σ and the time-change factor N).
Recall that τ_Z (r) =
inf{n ∈ : Z_n≤ r} from (<ref>); analogously, for the process ^N, define
^N_Z (r) := inf{n ∈ : ^N_n≤ r}.
There exist constants c, C (depending only on Σ) with 0 < c ≤ C <∞ such that
for every z ∈, all a ≥ 0, all N ∈, and all n ∈,
_z ( _Z^N (C(a+NR)) ≥ n ) ≤_z ( τ_Z (a) ≥ n ) ≤_z ( _Z^N ( c a) ≥ n/N) .
Let c: = inf{ vv = 1} and let C: = sup{ vv = 1};
then, by hyp:full_rank, 0 < c ≤ C < ∞.
If _Z^N(c a) < n, then ^N_m≤ c a for some m < n, and hence Z_T_N(m)≤ a
for some m < n. Thus, by (<ref>),
Z_m≤ a for some m < Nn. Thus
if _Z^N( c a) < n then τ_Z(a) < Nn,
which gives the right-hand inequality in (<ref>).
Conversely,
suppose that τ_Z (a) < n, so Z_m ≤ a
for some m < n. By (<ref>),
there exists k such that m-N ≤ T_N (k) ≤ m;
by (<ref>), k ≤ m < n.
By (<ref>)
and the triangle inequality,
since Z_m ≤ a we must have Z_T_N (k)≤ a + 2 N R, say,
and thus ^N_k ≤ C (a + 2 N R) for some k < n. Hence
if τ_Z(a) < n then _Z^N(C(a + 2 NR)) < n;
since C(a + 2NR) ≤ 2C(a+NR), this gives the left-hand inequality in (<ref>) with C replaced by 2C, a constant which still depends only on Σ.
§.§ Multidimensional Taylor's theorem
We collect notation for multivariate calculus and describe the version of Taylor's theorem that we will use to estimate increments of our Lyapunov functions.
Let f : ^2 → have partial derivatives
of up to 3rd order all continuous over ^2 ∖{}. Let D_i denote the derivative operator in (Cartesian) direction e_i. Write ∇ f = (D_1 f, D_2 f)
for the gradient of f, and
H_f for the Hessian matrix of f, i.e., the matrix whose i,j component is D_i D_j f.
For ν∈{1,2}^m,
a string of length |ν| = m ∈, we use multi-index notation D_ν for
the mth order mixed derivative
D_ν_1⋯ D_ν_m.
Taylor's theorem with remainder says
f ( z + y ) - f(z) = y^∇ f(z) + 1/2 y^ H_f (z) y + _3^f (z,y) ,
for second-order approximation,
and, for first-order approximation
f ( z + y ) - f(z) = y^∇ f(z) + _2^f (z,y) ,
both (<ref>) and (<ref>) holding for all z, y with z ∈^2 ∖{}
and z+y ∈^2 ∖{};
for k ∈{2,3}, the kth-order remainder term _k^f (z,y) satisfies
| _k^f (z, y) | ≤ y ^ksup_η∈ [0,1]sup_|ν| = k| D_ν f (z + η y ) | , for all z, z+y ∈^2 ∖{}.
We also note the chain rule for the Hessian, which implies that, for α∈ and for z ∈{ f > 0 } := { z ∈^2 : f(z) > 0 },
H_f^α (z) = α f^α-1 (z) H_f (z) + α (α-1) f^α-2 (z) ( ∇ f (z) ) (∇ f (z) )^ .
§.§ Harmonic functions
As described in <ref>, our proof
strategy is based around
construction of Lyapunov functions for the (transformed and time-changed)
process, using an appropriate harmonic function.
For the wedge or quadrant, there is a well-known parametric family of harmonic functions available for this purpose; these functions were used to study
exit times from wedges by Burkholder <cit.> and reflecting processes in wedges by Varadhan & Williams <cit.>, and have been used subsequently by many authors (e.g. <cit.>).
These functions are most conveniently expressed in polar coordinates, so we first introduce suitable notation for that; our approach is similar to e.g. <cit.>, <cit.>, and <cit.>.
We write z = (r, θ) in polar coordinates, with angles measured relative to the
positive horizontal axis: r := r(z) := z and θ := θ (z) ∈ (-π,π]
is the angle between e_1 and z,
i.e., in the notation at (<ref>), z = r R_θ (e_1).
The Cartesian coordinates are x = r cosθ
and y = r sinθ, and the Cartesian derivatives of the polar coordinate functions are given by
D_1 r = cosθ ; D_2 r = sinθ ; D_1 θ = - sinθ/r; D_2 θ = cosθ/r.
Recall that φ_0, as defined in (<ref>),
represents the angle at the apex of the wedge = (^2). In our notation for polar coordinates,
= { (r cosθ, r sinθ ) : r ∈, 0 ≤θ≤φ_0 } .
It will be useful to consider a slightly bigger wedge; for this we use the notation
^δ := { (r cosθ, r sinθ ) : r ∈, -δ≤θ≤φ_0+δ}.
The function h will be an harmonic function defined in terms of the angle parameters β_1,β_2 ∈ (-π/2,π/2) and φ_0 ∈ (0,π). Given β_1, β_2, φ_0, set
β:= β_1+β_2/φ_0,
and define, in polar coordinates, the function h : ^2 → with parameters β_1, β_2, φ_0 by
h (z) := h ( r, θ ) := r^βcos ( βθ - β_1 ) .
The function h is infinitely differentiable on ^2 ∖{},
and its partial derivatives of all orders are continuous on ^2 ∖{}.
By elementary calculus using the derivatives (<ref>),
D_1 h(z) = β r^β -1cos ( (β-1) θ - β_1) , D_2 h(z) = - β r^β -1sin ( (β-1) θ - β_1),
and
D_1^2 h(z) = β (β-1) r^β -2cos ( (β-2) θ - β_1 ) = - D_2^2 h(z) ,
verifying that D_1^2 h + D_2^2 h ≡ 0, i.e., h is harmonic. We also observe
that (<ref>) implies that ∇ h(z) ^2 = ( D_1 h(z) )^2 + ( D_2 h(z) )^2 = β^2 r^2β-2,
from which we see that
∇ h(z) = β z^β-1, for all z ∈^2 ∖{}.
See Figure <ref> for an illustration of the relationship between the gradient ∇ h and the angles β_1, β_2 that define h through (<ref>)–(<ref>).
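As a quick sanity check on the formulae (<ref>)–(<ref>), the following sketch (an illustrative aside only) evaluates h in Cartesian coordinates and verifies, by central finite differences at one arbitrary test point, both the harmonicity of h and the identity between the norm of ∇ h(z) and |β| times the (β-1)-th power of the norm of z. The tolerances and the test point are arbitrary choices of ours.

```python
import numpy as np

def h(z, beta, beta1):
    """The harmonic function h(r, theta) = r^beta cos(beta*theta - beta1),
    written in Cartesian coordinates (a direct transcription of the display)."""
    r, theta = np.hypot(z[0], z[1]), np.arctan2(z[1], z[0])
    return r**beta * np.cos(beta * theta - beta1)

beta, beta1, eps = 0.7, 0.2, 1e-4
z = np.array([1.3, 0.8])                      # an arbitrary interior test point
lap = (h(z + [eps, 0], beta, beta1) + h(z - [eps, 0], beta, beta1)
       + h(z + [0, eps], beta, beta1) + h(z - [0, eps], beta, beta1)
       - 4 * h(z, beta, beta1)) / eps**2
grad = np.array([h(z + [eps, 0], beta, beta1) - h(z - [eps, 0], beta, beta1),
                 h(z + [0, eps], beta, beta1) - h(z - [0, eps], beta, beta1)]) / (2 * eps)
print(abs(lap) < 1e-4)                        # harmonicity: discrete Laplacian is ~ 0
print(np.isclose(np.linalg.norm(grad),
                 abs(beta) * np.linalg.norm(z)**(beta - 1), rtol=1e-5))
```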
The following simple fact
shows that for z ∈, it holds that h (z) >0 (unless z=) and h(z) grows polynomially in z.
Fix φ_0 ∈ (0,π) and β_1,β_2 ∈ (-π/2,π/2).
With β given at (<ref>) and h given by (<ref>),
there exist δ >0 and _0 >0 for which
_0 z ^β≤ h(z) ≤z^β , for all z ∈^δ,
and for every m ∈ there exists A := A(m,β) < ∞ such that, for every ν∈{1,2}^m,
| D_ν h(z) | ≤ A z^β -m , for all z ∈^δ∖{}.
Let β_1,β_2 ∈ (-π/2, π/2) and φ_0 ∈ (0,π) be fixed,
and define β as at (<ref>).
The upper bound in (<ref>)
follows from (<ref>) and the fact that |cosθ |≤ 1 for all θ∈.
Choose δ >0 with 2 δ | β | < π/2 - max ( |β_1|, |β_2| ). If β≥ 0, then
- π/2 + δ | β | < - β_1-δβ≤βθ - β_1 ≤β_2+δβ < π/2 - δ | β | , for all θ∈ [-δ,φ_0+δ];
if β <0, then, similarly, | βθ - β_1 | < π/2 - δ | β |. Hence
inf_θ∈ [-δ,φ_0+δ]cos (βθ - β_1 ) ≥_0, where _0 := cos( π/2 - δ |β| ) >0 .
This completes the proof of (<ref>). From repeated differentiation and application of (<ref>), one sees that, for | ν | = m, D_ν h(z) = r^β-m A_ν (β, β_1, z)
where sup_β_1sup_z ∈^2 A_ν (β, β_1, z) < ∞, which yields (<ref>).
For certain applications of the optional stopping theorem, it is technically
convenient to introduce a version h_b of the function h, truncated at a fixed level b ∈, defined by
h_b (z) := (2b)^β∧ h (z).
Since h(z) ≤ z ^β,
z ∈^δ, we have that h_b (z) = h(z) when z ∈ with z ≤ 2b.
§.§ Estimates for Lyapunov function increments: interior
The family of harmonic functions h
described in <ref>
provides candidate Lyapunov functions h^α, α >0;
we estimate expectations of the functional increments
h^α ( _n+1^N ) - h^α ( _n^N )
when _n^N ∈ = () (this section) and _n^N ∈ = (∪) (<ref>). (See <ref> for definitions of , , etc.)
As indicated in <ref>,
h ( _n^N ) is “almost”
a martingale, because h is harmonic and the increments of ^N_n are standardized.
The α =1 case of the main result of this section, Proposition <ref>, gives a concrete
estimate to this effect, but does not yield control over the sign of the deviation
from the martingale property, i.e., the o term in (<ref>) below.
However, taking α <1 or α >1 furnishes a super- or submartingale, respectively, each with a quantitative estimate; this is the main content of Proposition <ref> below.
See Lemma 6(a) of <cit.> and Lemma 5.2 of <cit.> for related results.
Suppose that hypotheses hyp:partial_homogeneity, hyp:moments,
hyp:zero_drift, and hyp:full_rank hold.
Let N ∈, α >0, β_1,β_2 ∈(-π/2,π/2), and define β as at (<ref>). Suppose that, for >2 the constant in hyp:moments, 2- < αβ <.
Then, for all z ∈ and all n ∈,
[ h^α(^N_n+1) - h^α(^N_n) ^N_n = z ]
=
α(α-1) β^2/2z^2β-2 h^α-2(z)
+ o(z^αβ -2),
as z →∞.
In particular, the following hold.
* If α >1, there exist constants c >0
and r ∈ such that
[ h^α(^N_n+1) - h^α(^N_n) ^N_n = z ]
≥ c z^αβ -2, for all z ∈∖ B_r.
* If α∈ (0,1), there exist constants
c >0
and r ∈ such that
[ h^α(^N_n+1) - h^α(^N_n) ^N_n = z ]
≤ - c z^αβ -2, for all z ∈∖ B_r.
Let Δ : =_1 - _0.
From (<ref>), we have that
^N_n+1 - ^N_n = _T_N(n)+1 - _T_N(n)
whenever ^N_n ∈.
Hence, for all z ∈ and all n ∈,
[ h^α (^N_n+1) - h^α (^N_n) |^N_n = z] =
[h^α (z + Δ) - h^α(z) |_0 = z].
For δ∈ (0,1) and z ∈,
define the event
E_δ (z) := {Δ≤ (1 + z )^1-δ} .
On the event E_δ(z),
control of the remainder terms of the form (<ref>)
will enable us to
estimate the expected increment in (<ref>) using
the multivariate Taylor expansion
from <ref>.
On the complementary event, E_δ^ (z),
to bound the expected increment in (<ref>)
we will instead rely
on the moments assumption hyp:moments which shows that big jumps are unlikely.
The following elementary technical lemma will be used in the latter case.
Suppose that for some ν >0, B < ∞, and z ∈,
the increment Δ satisfies
[ Δ^ν|_0 = z ] ≤ B.
Then, for every δ∈ (0,1) and every γ∈ [0, ν],
[ Δ^γE_δ^ (z)_0 = z ] ≤ B (1 + z )^-(1-δ)(ν-γ), for all z ∈.
In particular, if ν >2 and δ∈ (0, ν-2/ν-1), then for z ∈,
as z →∞,
[ ΔE^_δ(z)|_0 = z ] = o ( z ^-1 ),
and [ Δ^2 E^_δ(z)|_0 = z ] = o (1 ) .
Write Δ^γ = Δ^ν·Δ^γ - ν,
with γ - ν≤ 0, to see that
Δ^γE_δ^ (z)≤Δ^ν (1 + z )^-(1-δ)(ν-γ) ,
and then take (conditional) expectations. The last part of the lemma is obtained for γ =1 by noting that when δ∈ (0, ν-2/ν-1) it holds that (1-δ)(ν-γ)>1, and for γ =2 it suffices to note that (1-δ)(ν-γ)>0 since δ <1 and ν >2.
For a real 2 × 2 matrix M, we denote by
M _op := sup_z ∈^2 ∖{} M z /z the matrix (operator) norm induced by the Euclidean norm on ^2.
As promised, we partition the expected increment in (<ref>):
[ h^α (^N_n+1) - h^α (^N_n) |^N_n = z]
=
[ ( h^α (z + Δ) - h^α(z) ) E_δ (z)_0 = z ]
+ [ ( h^α (z + Δ) - h^α(z) ) E_δ^ (z)_0 = z ],
where we fix δ∈ (0,-2/-1) and E_δ (z) is defined at (<ref>).
To estimate the first term on the right-hand side of (<ref>), we
take f = h^α in (<ref>)
and use the Hessian chain rule at (<ref>) to obtain, for all z ∈
and all y ∈^2 with y < z,
h^α (z+ y) - h^α (z)
= α h^α -1(z) y^∇ h(z)
+ α/2 h^α -1 (z) y^ H_h (z) y
+α (α-1)/2 h^α-2(z) (y^∇ h(z) )^2
+ _3^h^α (z, y).
By (<ref>) together with the upper
bounds on h and its derivatives in (<ref>) and (<ref>),
the error term _3^h^α in (<ref>) satisfies the bound
| _3^h^α (z,y) | ≤ C_δ z ^αβ -3· y ^3
,
for some constant C_δ < ∞ and all z, z+y ∈.
We will apply (<ref>) with
y = Δ, on the event E_δ (z), and then take (conditional) expectations.
First note that, by (<ref>) and (<ref>),
[| _3^h^α (z,Δ) | E_δ(z)_0 = z ]
≤ C_δ (1+ z )^αβ -2 -δ [ Δ^2 E_δ(z)|_0 = z ]
= o ( z ^αβ -2 ),
for z ∈ with z→∞,
by the fact that hyp:moments holds with > 2.
Next, observe that, by linearity,
[ ( Δ^∇ h (z) ) E_δ(z)_0 = z ]
=( ∇ h(z))^ [ Δ|_0 = z ] - ( ∇ h(z))^ [ ΔE^_δ(z)|_0 = z ].
Hence, by hyp:moments and the γ =1, ν = >2 case (<ref>),
together with (<ref>), the zero drift condition in hyp:zero_drift, and (<ref>), we obtain
| [ ( Δ^∇ h (z) ) E_δ(z)_0 = z ] | = o ( z^β-2),
for z ∈ with z→∞.
Moreover, by linearity
of the trace
and (<ref>), we obtain
[ ( Δ^ H_h (z) Δ) E_δ(z)_0 = z ]
= [ ( ΔΔ^E_δ(z) H_h (z) ) _0 = z ]
= ( [ ΔΔ^E_δ(z)_0 = z ] H_h (z) )
= ( [ ΔΔ^ - ΔΔ^E^c_δ(z)_0 = z ] H_h (z) )
= H_h (z) - ( [ ΔΔ^E^_δ(z)_0 = z ] H_h (z) ).
Here, since h is harmonic (see <ref>), H_h (z) =0, while,
by hyp:moments, the γ = 2, ν = >2 case (<ref>),
and (<ref>),
| ( [ ΔΔ^E^_δ(z)_0 = z ] H_h (z) ) | ≤ H_h (z) _op[ Δ^2 E^_δ(z)_0 = z ] = o ( z ^β -2 ),
as z →∞. Hence we conclude that
[ ( Δ^ H_h (z) Δ) E_δ(z)_0 = z ]
=o ( z ^β -2 ), for z ∈ with z→∞.
Similarly,
[ ( Δ^∇ h(z) )^2 E_δ(z)_0 = z ]
= [ ( Δ^ ( ∇ h(z) ) ( ∇ h(z) )^ΔE_δ(z)) _0 = z ]
= ( [ ΔΔ^ (1-E^_δ(z)) _0 = z ] ( ∇ h(z) ) ( ∇ h(z) )^) ,
while ( ( ∇ h(z) ) ( ∇ h(z) )^) = ∇ h(z) ^2, and,
by Lemma <ref> and (<ref>),
| ( [ ΔΔ^E^_δ(z)_0 = z ] ( ∇ h(z) ) ( ∇ h(z) )^) | ≤∇ h(z) ^2 [ Δ^2 E^_δ(z)_0 = z ]
= o ( z ^2β -2 ), as z→∞.
Hence, using (<ref>) and (<ref>), we conclude that
[ ( Δ^∇ h(z) )^2 E_δ(z)_0 = z ]
= ( β^2 +o(1) ) z ^2β-2 , for z ∈ with z→∞.
Using (<ref>), (<ref>), (<ref>),
and (<ref>) in (<ref>),
we obtain the interior Taylor expansion
[ ( h^α (z + Δ) - h^α(z) ) E_δ (z)_0 = z ]
= α(α-1) β^2/2z^2β-2 h^α-2(z)
+ o(z^αβ -2), for z ∈ with z→∞.
On the other hand, we consider the case when E_δ^ (z) occurs.
Write x^+ := x x >0 and x^-:= - x x < 0.
We
use the triangle inequality, the fact that | h(z)| ≤ z ^β from (<ref>), and (<ref>), to obtain the elementary bound
| h^α (z + Δ) - h^α(z) | E^_δ (z) ≤( z + Δ^(αβ)^+ + z ^(αβ)^+) E^_δ (z)
≤ C_1 z ^(αβ)^+E^_δ (z) + C_2 Δ^(αβ)^+E^_δ (z)
≤ C_3 Δ^(αβ)^+/(1-δ)E^_δ (z), for all z ∈,
where C_1, C_2, C_3 < ∞ depend only on αβ.
If (αβ)^+ < and
δ < 1 -(αβ)^+/, we can apply the ν = >2
case of Lemma <ref>, with γ = (αβ)^+/(1-δ) to deduce from (<ref>) that
[ | h^α (z + Δ) - h^α(z) | E_δ^ (z)_0 = z ]
≤ C [ Δ ^(αβ)^+/(1-δ)E_δ^ (z)_0 = z ]
= O( z ^(αβ)^+-(1-δ) ).
Provided that
δ < - 2 - (αβ)^-/,
the bound in (<ref>)
is o(z^αβ -2), and so
| [ ( h^α (z + Δ) - h^α(z) ) E_δ^ (z)_0 = z ] | = o(z^αβ -2),
for z ∈ with z→∞.
For αβ≥ 0, the additional constraint on δ∈ (0 , -2/-1) is δ < - 2/,
for which there is an admissible δ∈ (0,1) whenever >2,
while if αβ <0, the constraint is δ < - 2 +αβ/,
which requires αβ > 2 -.
Thus whenever 2- < αβ <, we can choose a suitable δ∈ (0,1)
for which both (<ref>) and (<ref>) hold, and from here (<ref>) follows.
Then (<ref>) together with the lower
bound in Lemma <ref>
yields (<ref>) and (<ref>).
The following variant of Proposition <ref>
applies to the truncated Lyapunov function defined using h_b from (<ref>).
Suppose that hypotheses hyp:partial_homogeneity, hyp:moments,
hyp:zero_drift, and hyp:full_rank hold.
Let N ∈, α >0, β_1,β_2 ∈(-π/2,π/2), and define β as at (<ref>). Suppose that, for >2 the constant in hyp:moments, 2 - < αβ <.
Then the following hold.
* If α >1, there exist constants c >0
and r ∈ such that, for all b ≥ r,
[ h_b^α(^N_n+1) - h_b^α(^N_n) ^N_n = z ]
≥ c z^αβ -2, for all z ∈∩ ( B_b ∖ B_r ).
* If α∈ (0,1), there exist constants
c >0
and r ∈ such that, for all b ≥ r,
[ h_b^α(^N_n+1) - h_b^α(^N_n) ^N_n = z ]
≤ - c z^αβ -2, for all z ∈∩ ( B_b ∖ B_r ).
Take δ∈ (0,-2/-1), and recall the definition of E_δ (z) from (<ref>). Then,
[ h_b^α (^N_n+1) - h_b^α (^N_n) |^N_n = z]
=
[ ( h_b^α (z + Δ) - h_b^α(z) ) E_δ (z)_0 = z ]
+ [ ( h_b^α (z + Δ) - h_b^α(z) ) E_δ^ (z)_0 = z ],
similarly to (<ref>).
There exists r_0 (depending only on δ)
such that whenever z ∈∖ B_r_0,
on the event E_δ (z) we have z + Δ≤ 2 z.
Hence, if z ∈∩ ( B_b ∖ B_r ) for b ≥ r ≥ r_0,
[ ( h_b^α (z + Δ) - h_b^α(z) ) E_δ (z)_0 = z ]
= [ ( h^α (z + Δ) - h^α(z) ) E_δ (z)_0 = z ],
and then (<ref>) applies;
the o( · ) term on the right-hand side of (<ref>) means that
we can take r large enough so that the sign of the increment is controlled for all z ∈∩ ( B_b ∖ B_r ), independently of b.
On the other hand, when E_δ^ (z) occurs, similarly to (<ref>),
| h_b^α (z + Δ) - h_b^α(z) | E^_δ (z)≤ C Δ^(αβ)^+/(1-δ)E^_δ (z),
where constant C does not depend on b, and the argument leading to (<ref>)
applies verbatim. The proof is concluded in the same way as the proof
of Proposition <ref>.
§.§ Estimates for Lyapunov function increments: boundary
This section presents Lyapunov function estimates at the boundaries, to accompany those in the interior from the previous section;
here the time compression factor N (see <ref>)
is crucial, at least when R > 1.
In the case of a simple boundary (R=1),
Proposition <ref> is essentially
Lemma 6(b) in <cit.>.
Recall the definitions of the angles φ_0, φ_1, and φ_2 given by (<ref>), (<ref>), and (<ref>), respectively.
Suppose that hypotheses hyp:partial_homogeneity, hyp:moments,
and hyp:full_rank hold.
Let α >0, β_1,β_2 ∈(-π/2,π/2), and define β as at (<ref>). Suppose that, for ν > 1 the constant in hyp:moments, 1 - ν < αβ < ν.
Then, for each k ∈{1,2}
and all N ∈,
there exist λ_k^N : _k → (0,∞) and ^N_k : _k → for which,
[ h^α (^N_n+1) - h^α (^N_n) ^N_n = z ]
= αβz^β-1 h^α-1 (z) λ^N_k (z) [ sin (β_k - φ_k) + ^N_k ( z) ], for all z ∈_k,
where (i) for all >0 there exist N ∈ and r ∈ such that | ^N_k (z) | ≤
for all z ≥ r; and (ii) for each N ∈, inf_z ∈_kλ^N_k (z) >0
and sup_z ∈_kλ^N_k (z) < ∞.
The following consequence of Proposition <ref>
furnishes, once again, quantitative sub- and supermartingale estimates.
Denote x := x>0-x<0.
Suppose that hypotheses hyp:partial_homogeneity, hyp:moments,
and hyp:full_rank hold. Suppose that χ defined by (<ref>) and α > 0
satisfy χ≠ 0 and
1 - ν < αχ < ν.
Choose ∈ (0, | χ |) for which α < ν - αχ
and α < ν - 1 + αχ.
Then there exist β_1, β_2 ∈ (-π/2,π/2), N ∈, r ∈, and c > 0 such that
β given by (<ref>) satisfies β = χ,
| χ| < | β| < | χ | +,
and
[ h^α (^N_n+1) - h^α (^N_n) ^N_n = z ]
≥ c z ^αβ -1, for all z ∈∖ B_r.
Moreover,
there exist (different from above) β_1, β_2 ∈ (-π/2,π/2), N ∈, r ∈, and c > 0 such that β = χ, | χ | - < | β | < | χ |,
and
[ h^α (^N_n+1) - h^α (^N_n) ^N_n = z ]
≤ -c z ^αβ -1, for all z ∈∖ B_r.
For ease of notation, set
Δ_N := ^N_1 - ^N_0 and z' := ^-1 (z),
and note that Δ_N = ( Z_N) - (Z_0) when Z_0 ∈ by (<ref>)–(<ref>).
Then, with the definition of d^N_k (z) from (<ref>) and setting λ_k^N (z) := _z ∑_ℓ=0^N-1 Z_ℓ∈_k,
we can write
[ Δ_N |^N_0 = z ] = [ Z_N - Z_0 |_0 = z ]
= λ_k^N(z') · d^N_k ( z' ) , for all z ∈_k.
By hypothesis hyp:moments,
there exist constants B < ∞ and ν >1 for which [ _n+1 - _n ^ν|_n =z ] ≤ B, for all z ∈ and all n ∈ and hence, by the simple inequality ( ∑_i=1^N |s_i| )^ν≤ N^νmax_1 ≤ i ≤ N |s_i|^ν≤ N^ν∑_i=1^N | s_i|^ν, we have
sup_z ∈ [ Δ_N ^ν|^N_0 = z ] ≤ B_N ,
for some constant B_N < ∞ that depends on N and ν.
Analogously to (<ref>), define
E_N,δ (z) := {Δ_N ≤ (1 + z )^1-δ}.
Fix δ∈ (0,ν-1-(αβ)^-/ν). For z ∈, we then have
[ h^α (^N_n+1) - h^α (^N_n) |^N_n = z]
=
[ ( h^α (z + Δ_N) - h^α(z) ) E_N,δ (z)_0 = z ]
+ [ ( h^α (z + Δ_N) - h^α(z) ) E_N,δ^ (z)_0 = z ].
The first term on the right-hand side of (<ref>)
we will estimate using a Taylor expansion;
the stabilization property of the drift in (<ref>)
will allow us to fix N (large). For the second term on the right-hand side of (<ref>) we will use
Lemma <ref>, which is applicable (for fixed N) because of the bound (<ref>).
Indeed, similarly to (<ref>)–(<ref>),
we obtain that
the second term on the right-hand side of (<ref>)
satisfies
[ | h^α (z + Δ_N) - h^α(z) | E_N,δ^ (z)_0 = z ] =o ( z ^αβ -1 ), as z →∞,
provided that 1 - ν < αβ < ν and δ < ν-1-(αβ)^-/ν.
We apply the first-order Taylor expansion (<ref>)
with f = h^α and y=Δ_N to obtain
( h^α (z + Δ_N) - h^α (z) ) E_N,δ (z)
= α h^α-1 ( z ) Δ_N^∇ h (z ) E_N,δ (z) + ^h^α_2 (z, Δ_N ) E_N,δ (z) ,
where, by (<ref>) and the upper
bounds on h and its derivatives in (<ref>) and (<ref>),
[ | ^h^α_2 (z, Δ_N) | E_N,δ (z) _0 = z ] ≤ C_δ z ^αβ -1 -δ [ Δ_N |_0 = z ],
where [ Δ_N |_0 = z ] is uniformly bounded, since hyp:moments holds for ν >1.
Hence we obtain, for the first term on the right-hand side of (<ref>), for all z ∈_k,
[ ( h^α (z + Δ_N) - h^α(z) ) E_N,δ (z)_0 = z ]
= α h^α-1 ( z ) [ ( Δ_N^∇ h (z ) ) E_N,δ (z)_0 = z ] + o ( z^αβ -1 ) ,
as z →∞.
Moreover, from Lemma <ref> and (<ref>), we have that, as z→∞,
[ | Δ_N^∇ h (z ) | E^_N,δ (z)_0 = z ]
≤∇ h (z) [ Δ_N E^_N,δ (z)_0 = z ]
= o ( z ^β -1 ).
Then
using (<ref>),
together with (<ref>), (<ref>), and (<ref>), we obtain, for all z ∈_k,
[ h^α (z + Δ_N) - h^α(z) _0 = z ]
= αλ_k^N (z') h^α-1 ( z ) ( d^N_k (z') )^∇ h ( z ) + o ( z^αβ -1 ), as z →∞.
We next use the stabilization result of <ref> to estimate the term on the right-hand side of (<ref>) involving
the inner product of the N-step drift
with the gradient ∇ h.
Let v_1 and v_2 be as defined in (<ref>),
and the corresponding angles φ_1 and φ_2 relative to the
appropriate normal vectors be as given by (<ref>)
and (<ref>).
Recalling the computations for ∇ h from (<ref>) and (<ref>),
we obtain
from (<ref>) that
v_1^∇ h(z)/β z ^β -1 = - sin( (β-1) θ -β_1 + φ_1 ) ,
v_2^∇ h(z)/β z ^β -1 = sin( (β-1) θ -β_1 + φ_0 - φ_2 ) ,
for all z ∈^2 ∖{}.
For z ∈, represented in polar coordinates by (r,θ), we have
θ → 0 as z →∞ with z ∈;
θ →φ_0 as z →∞ with z ∈.
From (<ref>) and (<ref>) together with (<ref>) and (<ref>), respectively, it follows that
lim_ z→∞, z ∈_k v_k^∇ h( z)/β z^β - 1
= sin(β_k - φ_k), for k ∈{1,2},
where for the case k=2 we have used the fact that βφ_0 - β_1 = β_2, by (<ref>).
Let >0. By Proposition <ref>, we can choose N ∈ large enough so that
d^N_k (z') - _k < _k for all z' ∈_k with z' > 2RN.
Then, by (<ref>),
| ( d^N_k (z') )^∇ h ( z ) - v_k^∇ h ( z ) | ≤, for all z' ∈_k with z' > 2RN.
Together with (<ref>), this means that we can choose large enough r > 2RN so that
| ( d^N_k (z') )^∇ h (z ) - β z ^β -1sin (β_k - φ_k ) | ≤ z ^β -1, for all z ∈_k with z > r.
Using this in (<ref>) yields (<ref>)
together with the property (i) for _k^N claimed in the proposition.
For fixed N,
we can choose r large enough so that for all z ∈ with z > r, the term λ_1^N (z)
depends only on e_2^ z, which takes values in the finite set R. Thus
we get the properties claimed in (ii) for k=1; the case k=2 is similar.
Suppose that χ≠ 0, α >0 and ν>1 satisfy
1 - ν < αχ < ν.
First consider the case where χ > 0.
For >0 as specified in the corollary,
pick β_1,β_2 ∈ (-π/2,π/2) such that φ_k - /2φ_0 < β_k <φ_k for both k∈{1,2}; hence, by (<ref>) and (<ref>),
it holds that 0 < χ - < β < χ. In particular, β = χ.
Moreover, 0 < αβ < αχ < ν,
so that the hypotheses of Proposition <ref> are satisfied.
By statement (i) in Proposition <ref>, we can choose N ∈ and r ∈
such that, for each k ∈{1,2},
sin (β_k - φ_k ) + _k^N (z) < -
for all z ∈∖ B_r. Since β >0,
we verify (<ref>) from (<ref>).
Similarly, if we pick β_1,β_2 ∈(-π/2,π/2) such that φ_k < β_k <φ_k + /2φ_0 for both k, then
0 < χ < β < χ +.
Again, β = χ, and now
0 < αβ < αχ + α < ν, by choice of .
By
Proposition <ref>, we can choose N ∈ and r ∈
such that, for each k ∈{1,2},
sin (β_k - φ_k ) + _k^N (z) >
for all z ∈∖ B_r. Since β >0,
we verify (<ref>) from (<ref>).
On the other hand, consider χ < 0.
Pick β_1,β_2 ∈ (-π/2,π/2) such that φ_k < β_k <φ_k + /2φ_0. Now χ < β < χ + < 0,
and αβ > αχ > 1 - ν, so, again, the hypotheses of Proposition <ref> hold. By Proposition <ref>, we can choose N ∈ and r ∈
such that, for each k ∈{1,2},
sin (β_k - φ_k ) + _k^N (z) >
for all z ∈∖ B_r. Since β < 0,
we verify (<ref>) from (<ref>).
Similarly, if we pick β_1,β_2 ∈(-π/2,π/2) such that φ_k - /2φ_0 < β_k <φ_k for both k, then χ - < β < χ < 0, while
αβ > αχ - α > 1 - ν, by choice of .
By Proposition <ref>, we can choose N ∈ and r ∈
such that, for k ∈{1,2},
sin (β_k - φ_k ) + _k^N (z) < -
for all z ∈∖ B_r. Since β < 0,
we verify (<ref>) from (<ref>).
In the same way as a minor adjustment of the proof of Proposition <ref>
leads to its analogue for the truncated Lyapunov function defined using h_b from (<ref>),
Proposition <ref>,
a small modification of the proof of Corollary <ref> leads to the following.
Suppose that hyp:partial_homogeneity, hyp:moments,
and hyp:full_rank hold. Suppose that χ defined by (<ref>) and α > 0
satisfy χ≠ 0 and
1 - ν < αχ < ν.
Choose ∈ (0, | χ |) for which α < ν - αχ
and α < ν - 1 + αχ.
Then there exist β_1, β_2 ∈ (-π/2,π/2), N ∈, r ∈, and c > 0 such that
β given by (<ref>) satisfies β = χ,
| χ| < | β| < | χ | +,
and, for all b ≥ r,
[ h_b^α (^N_n+1) - h_b^α (^N_n) ^N_n = z ] ≥ c z ^αβ -1, for all z ∈∩ ( B_b ∖ B_r ).
Moreover,
there exist (different from above) β_1, β_2 ∈ (-π/2,π/2), N ∈, r ∈, and c > 0 such that β = χ, | χ | - < | β | < | χ |,
and, for all b ≥ r,
[ h_b^α (^N_n+1) - h_b^α (^N_n) ^N_n = z ]
≤ -c z ^αβ -1, for all z ∈∩ ( B_b ∖ B_r ).
§.§ Lower bounds on the extent of an excursion
An important ingredient in establishing lower tail bounds on passage times is the following
lower tail bound on the extent of an excursion, i.e., the probability that the trajectory reaches
a large distance from the origin before returning to a bounded set. Such an estimate yields
a corresponding tail bound on the return time, since trajectories of the process are essentially diffusive over large scales. The technical result we will apply is Theorem <ref> below;
see <ref> for further remarks and references.
Suppose that hyp:partial_homogeneity, hyp:moments,
and hyp:full_rank hold, that ν from hyp:moments
satisfies ν > 2, and that χ defined by (<ref>)
satisfies 0 < χ < ν.
Let r > 0. Then there exists r_0 ∈ (r,∞) such that,
for every >0, there is a c_0 >0 (depending on , r, and β) such that
_z ( max_0 ≤ n ≤τ_Z (r) Z_n ≥ s ) ≥ c_0 s^-χ-,
for all z ∈∖ B_r_0 and all s ≥ 1.
Suppose that 0<χ<ν. Let α∈ (1, ν/χ), and then pick ∈ (0, ν/α - χ);
such choices of and α are possible since we assumed that χ < ν.
Corollary <ref> shows that there exist
β_1, β_2 ∈ (-π/2,π/2), N ∈, and r_1 ∈ such that
β given by (<ref>) satisfies χ < β < χ + < ν/α,
and, for all b ≥ r_1,
[ h_b^α (^N_n+1) - h_b^α (^N_n) ^N_n = z ] ≥ 0, for all z ∈∩ ( B_b ∖ B_r_1 ).
Moreover, Proposition <ref>, together with the fact that α > 1
and 0 < αβ < ν≤, shows that there exists r_2 ∈
such that, for all b ≥ r_2,
[ h_b^α (^N_n+1) - h_b^α (^N_n) ^N_n = z ] ≥ 0, for all z ∈∩ ( B_b ∖ B_r_2 ).
Combining (<ref>) and (<ref>), we conclude that for every α∈ (1, ν/χ), there exist N ∈ and r_3 > (R√(2)) ∨ r_1 ∨ r_2 such that, for all b ≥ r ≥ r_3,
[ h_b^α (^N_n+1) - h_b^α (^N_n) ^N_n = z ] ≥ 0, for all z ∈∩ ( B_b ∖ B_r ).
For simplicity of notation, write ξ_n := h_b^α (^N_n).
Define σ_b := inf{ n ∈ : ^N_n ≥ b }
and τ_r := inf{ n ∈ : ^N_n ≤ r }.
From the definition of h_b in (<ref>) together with (<ref>), for every b ≥ r
with r ≥ r_3, it holds that
ξ_{n ∧τ_r ∧σ_b}
is a non-negative submartingale, uniformly bounded by (2 b)^αβ.
Hence an application of the optional stopping theorem (e.g. <cit.>) at the (a.s.-finite)
stopping time τ_r ∧σ_b
shows that, for z ∈∩ ( B_b ∖ B_r ),
h^α (z) = [ ξ_0 |^N_0 = z] ≤ [ ξ_τ_r ∧σ_b|^N_0 = z ]
≤ r^αβ ( τ_r < σ_b |^N_0 = z )
+ (2 b)^αβ ( σ_b < τ_r |^N_0 = z ) ,
using the fact that | h(z) | ≤ z ^β so sup_z ∈ B_r h^α (z) ≤ r^αβ. Thus, from the lower bound in (<ref>) and the fact that α >1, we obtain, for all b ≥ r ≥ r_3,
( σ_b < τ_r |^N_0 = z ) ≥_0 z ^αβ - r^αβ/(2b)^αβ - r^αβ , for all z ∈∖ B_r .
There exists r_0 >0 (depending on r, Σ, β, and _0, but not )
such that z ≥ r_0 implies (i) (z) ≥ r, and (ii) _0 z ^αβ - r^αβ≥ 1, say; hence for every r ≥ r_0 and all b ≥ r,
_z ( max_0 ≤ n ≤τ_r^N_n ≥ b ) ≥_z( σ_b < τ_r ) ≥c/b^αβ , for all z ∈∖ B_r_0 ,
where c >0 depends on α, β (and hence ) but not r or b.
A translation from ^N back to Z (similarly to Lemma <ref>)
yields the statement in the lemma, noting that αβ < αχ + α can be chosen to be arbitrarily close to χ by choice of α >1 and then >0.
§.§ Completing the proofs
Suppose that χ as defined at (<ref>) satisfies χ≠ 0,
and let ν >1 be the constant in hypothesis hyp:moments.
Let α∈ (0,1) with αχ < ν.
Combining Proposition <ref>
and Corollary <ref>,
we deduce that for every >0 sufficiently small, there exist N ∈, r ∈, c >0, and
β_1, β_2 ∈ (-π/2,π/2)
such that β given by (<ref>)
satisfies β = χ and | χ | < | β | < | χ | +, and there holds the supermartingale estimate
[ h^α (^N_n+1) - h^α (^N_n) ^N_n = z ]
≤ -c z ^αβ -2, for all z ∈∖ B_r.
In particular, if χ >0 then β >0, and so h (z) > 0 for all z ∈∖{} and
lim_r →∞inf{ h(z)
z ∈∖ B_r } = ∞,
and then an application of Theorem <ref><ref> implies that ^N_n
is recurrent,
and hence that Z_n is recurrent, by Lemma <ref>. On the other hand, if χ <0 then β <0, and the h
for which (<ref>)
holds satisfies h (z) > 0 for all z ∈∖{} and
lim_r →∞inf{ h(z)
z ∈∖ B_r } = 0,
and Theorem <ref><ref> implies that ^N_n
is transient,
and hence Z_n is transient, by Lemma <ref>.
This completes the proof of part <ref> of the theorem.
For part <ref>, we again use the supermartingale estimate provided by (<ref>).
Suppose χ > 0, so that β >0.
From Lemma <ref>, we get from (<ref>) that
[ h^α (^N_n+1) - h^α (^N_n) ^N_n = z ]
≤ -c ( h^α (z) )^1- 2/αβ, for all z ∈∖ B_r.
An application of Theorem <ref>
with Λ = B_r, f = h^α, and η = 1- 2/αβ < 1 shows that
[ τ_Z(r)^γ ] < ∞
for all γ < αβ/2.
Here α, β were arbitrary subject to the constraints α∈ (0, 1 ∧ν/χ)
and β∈ (0,χ), so we may conclude that [ τ_Z(r)^γ ] < ∞
for all γ < χ∧ν/2. Markov's inequality then completes the proof of
part <ref> of the theorem.
Finally, we prove part <ref>.
Suppose that χ >0, and that hyp:moments holds with ν > 2 ∨χ.
The α = 1/β case of Proposition <ref> (and an argument
similar to that in the proof of Corollary <ref>) shows that we
may choose β_1,β_2 ∈ (-π/2,π/2) such that β > 0 and, for all r >0 large enough,
[ h^1/β (^N_n+1) - h^1/β ( ^N_n) ^N_n = z ]
≥ 0, for all z ∈∖ B_r .
We have also from Proposition <ref> that
for some c_β∈ (one may choose c_β >0 if β < 1),
[ h^1/β (^N_n+1) - h^1/β ( ^N_n) ^N_n = z ]
≥c_β/ z , for all z ∈∖ B_r .
The estimates (<ref>) and (<ref>) together with
Lemma <ref> imply that, for some c >0 (depending on β), and all r > √(2)R large enough,
[ h^1/β (^N_n+1) - h^1/β ( ^N_n) ^N_n = z ]
≥ - c/h^1/β (z), for all z ∈∖ B_r .
Next, recall the definition of E_N,δ from (<ref>). From (<ref>)
together with (<ref>) and the upper
bounds on h and its derivatives in (<ref>) and (<ref>),
we have that
| h^α (z + Δ_N) - h^α (z) | E_N,δ (z) ≤ C ( 1+ z )^αβ - 1 (1+ Δ_N ) ,
for some constant C < ∞ and all z ∈.
On the other hand, similarly to (<ref>),
| h^α (z + Δ_N) - h^α(z) | E^_N,δ (z)≤ C Δ_N ^(αβ)^+/(1-δ).
Choose α = 1/β. Then from the last two bounds, for a sufficiently small δ>0 we obtain
[ ( h^1/β (^N_n+1) - h^1/β ( ^N_n) )^2 ^N_n = z ] ≤ C [ (1+ Δ_N )^2 ^N_0 = z ] ,
which is bounded uniformly for z ∈, provided hyp:moments holds with ν≥ 2.
This fact, together with (<ref>), shows that f = h^1/β satisfies the conditions of Theorem <ref> with Λ = B_r, from which it follows that, for some C ∈ (0,∞),
( _Z(r) ≥ n ) ≥1/2( max_0 ≤ m ≤_Z(r) h^1/β ( ^N_m ) ≥ C n^1/2) , for all n ∈.
From Lemma <ref> and a comparison between hitting times for ^N and Z, as in Lemma <ref>,
( τ_Z(r) ≥ n ) ≥1/2( max_0 ≤ m ≤τ_Z(r') Z_n ≥ C' n^1/2) , for all n ∈,
and then Proposition <ref> (which is applicable since ν > 2 ∨χ)
completes the proof.
§ EXAMPLES AND APPLICATIONS
§.§ The left-continuous case
In this subsection, we consider the case where we impose the additional condition
(x,y) = 0 , unless x ≥ -1, y ≥ -1;
this left-continuity assumption
can facilitate analysis (see e.g. <cit.> for a recent example).
When R>1, left-continuity is stronger than the bound (<ref>) which is implied by partial homogeneity alone. Note that (<ref>) imposes a constraint only in (and not ).
The assumption (<ref>) leads to a significant simplification. Indeed, we will demonstrate that the probability measures π_1, π_2 on R defined in Proposition <ref><ref> can be computed by solving an explicit linear system.
Indeed, in general the π_k are defined through the invariant measure of -valued Markov chains, and can be represented through finite chains via embedding (see Remark <ref>), but these embedded chains are usually described only implicitly.
With q_1, q_2 defined at (<ref>) and (<ref>), define _1, _2 : R×R→ [0,1] by
_k ( i, j) = q_k (i, j) if j ∈R-1,
∑_ℓ∉R-1 q_k (i, ℓ) if j = R-1.
Then _1 is the transition function of an irreducible Markov chain on the finite state space R
which depends only on finitely many values of the basic model data, namely ∑_x ∈ p_1 (i; (x,j))
with i, j ∈R; similarly for _2.
Under the condition (<ref>),
for each k ∈{1,2}, the distribution π_k on R defined in
Proposition <ref><ref> is the unique stationary distribution corresponding to _k.
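To make the lemma concrete, the following sketch (an illustrative aside) builds the lumped kernel _k from a one-step kernel q and solves for its stationary distribution by a linear system; the toy kernel used here has down-steps of size at most one, so the left-continuity assumption (<ref>) holds for it. The truncation of the state space and all names are our own illustrative choices.

```python
import numpy as np

def stationary_of_lumped_kernel(q, R):
    """Build the lumped kernel of the display above from a one-step kernel q
    given as a matrix on a truncated state space {0,...,M-1} (the truncation is
    our own device for illustration): rows keep q(i, j) for j <= R-2 and lump
    all remaining mass into the state R-1.  Return its stationary distribution."""
    q_hat = np.zeros((R, R))
    q_hat[:, :R - 1] = q[:R, :R - 1]
    q_hat[:, R - 1] = 1.0 - q_hat[:, :R - 1].sum(axis=1)    # mass escaping {0,...,R-2}
    A = np.vstack([q_hat.T - np.eye(R), np.ones(R)])        # pi q_hat = pi and sum(pi) = 1
    b = np.zeros(R + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# toy kernel with down-steps of size at most one, so left-continuity holds for it
M = 6
q = np.zeros((M, M))
for i in range(M):
    q[i, max(i - 1, 0)] += 0.55
    q[i, min(i + 1, M - 1)] += 0.45
print(np.round(stationary_of_lumped_kernel(q, R=3), 3))     # approximates pi_k on {0,1,2}
```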
We give an algebraic proof of Lemma <ref>
at the end of this subsection;
a probabilistic explanation goes via the embedded chain described in Remark <ref>.
Under the additional assumption (<ref>),
as well as a couple of further inessential assumptions, including aperiodicity,
Theorem <ref> was obtained by Menshikov & Petritis (MP) in <cit.>.
The model of <cit.> is formulated a little differently, however, and some translation is needed, as we explain. MP consider a random walk W = (W_n, n ∈) on the quadrant ^2
in which the walker also carries an internal state
with values in the finite set A := {0,…,L}, L ∈;
the internal state is 0 unless the walker is at the boundary,
in which case its internal state can be “excited”.
The model can be represented as a Markov chain on state space
:= ∪∪∪, where
:= ^2 ×{ 0 }, := ×{ 0 }× A, := { 0}×× A, := {}× A .
Thus (w_1, w_2, α ) ∈
is either of the form (w_1, w_2, 0)
for w_1 w_2 ≥ 1 (in the interior) or
(w_1, w_2, α)
for α∈ A and w_1 w_2 = 0 (at the boundaries).
Partial homogeneity is assumed in the sense that, given W_n = w,
the law of W_n+1 - W_n depends only on which region , , ,
contains w.
Furthermore, it is assumed in <cit.> that the increments of each of the two spatial coordinates take values in {-1}∪;
MP call this assumption “lower boundedness”, and we will see that it
translates to the left-continuity assumption (<ref>).
We show how, with a slight change of variables, the model of <cit.>
fits into our framework. We define the map Υ : →^2
by setting
Υ (w_1,w_2,α) :=
(L+w_1,L+w_2) if w_1 w_2 ≥ 1;
(L - α,L+w_2) if w_1 =0, w_2 ≥ 1;
(L+w_1 ,L - α) if w_1 ≥ 1, w_2 =0;
(L-α, L-α) if w_1 = w_2 = 0.
Then Z = (Z_n, n ∈) defined by Z_n := Υ (W_n) is a Markov chain on := ^2,
and it satisfies the partial homogeneity condition hyp:partial_homogeneity
with R = L and the identification of regions
= Υ ( ), = Υ (),
= Υ (), and Υ () ⊆ = L×L (only the “diagonal” states in the corner are accessible). If W is irreducible, then Υ (W) is irreducible only on Υ () ⊆, rather than all of (due to the restriction on corner states), but nevertheless hypothesis hyp:irreducible is satisfied.
Consider first j ∈R-1. From (<ref>) it follows that
q_k (i,j) = 0 if i ∈R and i - j > 1, and so
∑_i ∈Rπ_k (i) _k ( i,j) = ∑_i ∈π_k (i) q_k (i, j) = π_k (j) ,
by the invariance of π_k
as at (<ref>).
Similarly, for j = R -1 ∈R,
∑_i ∈Rπ_k (i) _k ( i, R-1) =
∑_i ∈Rπ_k (i) ∑_ℓ∉R-1 q_k (i, ℓ)
= ∑_i ∈Rπ_k (i)[ 1 - ∑_ℓ∈R-1 q_k (i,ℓ) ]
= 1 - ∑_ℓ∈R-1∑_i ∈π_k (i) q_k (i,ℓ)
= 1 - ∑_ℓ∈R-1π_k (ℓ) = π_k (R-1) ,
as required.
§.§ Multidimensional Lindley processes and related walks
In this section, we derive Theorem <ref> as
a consequence of our main result, Theorem <ref>. Indeed, the special nature of the reflections for the two processes M and L means that the result takes a particularly simple form, in which π plays no apparent role; this is due to the following
fact.
For both the Lindley random walk
and the mirror-reflected walk,
the boundary drifts
_k have _k ≠ 0, k ∈{1,2}, and satisfy _1 = _1 e_2 and _2 = _2 e_1.
First consider the mirror-reflected walk.
Take z = (x,y) ∈. We consider
μ_1 (y) = | z + ζ | - z .
Since x ≥ R, x + e_1^ζ >0, a.s., so e_1^μ_1 (y) = e_1^ζ = 0. On the other hand, suppose r ∈{1,2,…, R-1} is such that ( e_2^ζ = - r ) > 0 (some such r must exist, since e_2^ζ =0 and Σ is positive definite). Then r-1 ∈R and
e_2^μ_1 ( r -1) > 0.
Thus _1 is a weighted sum of vectors all of which have 0 component in the e_1 direction, non-negative components in the e_2 direction, and at least one strictly positive component in the e_2 direction. Hence _1 = _1 e_2. An analogous argument holds for _2.
For the Lindley walk, for z = (x,y) ∈ one instead considers
μ_1 (y) = [ ( z + ζ )^+ ] - z ,
to which a very similar argument applies.
Suppose that hyp:lindley-increments holds, and that Z is either the Lindley walk L or the mirror-reflected walk M. For the constant R from
the condition ( e_k^ζ≥ -R) =1 in hyp:lindley-increments, define the regions , , as in <ref>.
For z ∈, we then have ( z + ζ∈ ) = 1, meaning that ( Z_1 - Z_0 = y | Z_0 = z) = ( ζ = y) for all z ∈, y ∈^2.
In particular, it follows from the assumptions that ζ = and positive-definiteness of Σ from hyp:lindley-increments that hyp:zero_drift and hyp:full_rank are satisfied.
Moreover, since both | z+ ζ| - z ≤ζ and (z+ ζ)^+ - z≤ζ for all z, ζ∈^2, we have, for any ν >0,
[ Z_1 - Z_0 ^ν| Z_0 = z] ≤ [ ζ^ν ], for all z ∈.
The condition hyp:irreducible is a consequence of the
assumed irreducibility
together with the fact that ζ = and positive-definiteness of Σ, which imply uniform ellipticity in (cf. Remark <ref><ref>).
We show that the partial homogeneity hypothesis hyp:partial_homogeneity is satisfied. To do so, consider the corresponding transition laws (in particular, started from ): see Figure <ref> for an illustration of the reflection mechanism in each case. One verifies that the mirror-reflected walk satisfies hyp:partial_homogeneity, with R the constant inherited from hyp:lindley-increments,
and with , p_1, p_2 defined for z=(z_1,z_2) ∈ by
(z) = ( ζ = z ) ;
p_1 (y ; z) = ( ζ = z ) y + z_2 ≥ 0 + ( ζ = ( z_1 , - 2y - z_2) ) y + z_2 > 0 ;
p_2 (x ; z) = ( ζ = z ) x + z_1 ≥ 0 + ( ζ = ( -2x - z_1 , z_2) ) x + z_1 > 0 ;
see the left-hand panel in Figure <ref>.
This shows that Z = M satisfies hyp:partial_homogeneity, and then (<ref>) together with the assumption [ ζ^ν ] < ∞ from hyp:lindley-increments shows that hyp:moments holds.
For the Lindley walk, the considerations are similar, but now we verify that hyp:partial_homogeneity holds with the identification of p_1, p_2 through
p_1 (y ; z) = ( ζ = z ) y + z_2 ≥ 0 + ∑_w ∈ y + w < 0 ( ζ = ( z_1 , w) ) y + z_2 = 0 ;
p_2 (x ; z) = ( ζ = z ) x + z_1 ≥ 0 + ∑_w ∈ x + w < 0 ( ζ = (w , z_2) ) x + z_1 = 0 ;
see the right-hand panel in Figure <ref>.
We have thus verified all the hypotheses of Theorem <ref>.
Moreover, Lemma <ref> shows that, in both cases,
the reflection angles are orthogonal, i.e.,
_1 is in direction e_2 and _2 is in direction e_1. Thus
(see Example <ref>) it holds that φ_1 = φ_2 = φ_0 - π/2,
and χ = 2 - π/arccos (-ρ). Hence χ >0 is equivalent to ρ >0, while χ <0 is equivalent to ρ <0: thus Theorem <ref><ref> establishes the recurrence/transience classification given in Theorem <ref>. Finally, if ρ >0 (hence χ >0) then Theorem <ref><ref>–<ref>
yields (<ref>), following the formulation in Remark <ref><ref>.
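The dichotomy of Theorem <ref> is easy to observe in simulation. The sketch below (an illustrative aside of ours) runs the Lindley recursion z ↦ (z+ζ)^+ and the mirror recursion z ↦ |z+ζ| with centred ±1 increments of correlation ρ, an arbitrary increment law that is bounded below and has all moments, and counts visits to a small ball: for ρ > 0 the walks keep returning, while for ρ < 0 returns dry up. The starting point, radius and run length are arbitrary, and the contrast is only qualitative at this sample size since the return times have heavy tails.

```python
import numpy as np

rng = np.random.default_rng(7)

def increment(rho):
    """Centred +-1 increments with correlation rho: bounded below and with all
    moments; this particular law is an arbitrary choice of ours."""
    x = 1.0 if rng.random() < 0.5 else -1.0
    y = x if rng.random() < (1 + rho) / 2 else -x
    return np.array([x, y])

def count_returns(kind, rho, n_steps=200_000, start=(10.0, 10.0), radius=3.0):
    """Visits of the Lindley walk z -> (z + zeta)^+ or the mirror walk
    z -> |z + zeta| to a small ball around the origin."""
    z = np.array(start)
    visits = 0
    for _ in range(n_steps):
        zeta = increment(rho)
        z = np.abs(z + zeta) if kind == "mirror" else np.maximum(z + zeta, 0.0)
        visits += np.linalg.norm(z) <= radius
    return int(visits)

for rho in (0.6, -0.6):
    print(rho, count_returns("lindley", rho), count_returns("mirror", rho))
# rho > 0: recurrence, so returns keep accumulating; rho < 0: transience, so they dry up.
```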
§ FOSTER–LYAPUNOV CONDITIONS
In this section, Z = (Z_n, n ∈) will denote a Markov chain on a state space X,
and passage times to sets A ⊆X will be denoted by τ_Z(A) as at (<ref>).
The following Foster–Lyapunov conditions for recurrence/transience
of countable Markov chains are standard, and can be found in e.g. Theorems 2.5.2 and 2.5.8 of <cit.>.
Let (Z_n,n ∈) be an irreducible Markov chain on a countable state space X, and suppose that there exist a function
f: X→ and a
set Λ⊂X such that
[ f(Z_n+1)- f(Z_n) | Z_n = z ] ≤ 0, for all z ∈X∖Λ .
Then the following apply.
*
If Λ is finite and, for every r ∈, {z ∈X : f(z) ≤ r} is finite, then Z is recurrent.
*
If Λ≠∅ and there exists z ∈X∖Λ with f(z) < inf_w ∈Λ f(w), then Z is transient.
Theorem <ref> gives upper bounds
on tails of passage times, and can be seen as an extension of Foster's criterion for integrability
of passage times; it is contained in a result of <cit.>, whose
origins go back to Lamperti <cit.> and Aspandiiarov et al. <cit.>.
Let (Z_n,n ∈) be a Markov chain on X.
Suppose that there exist
f: X→, constants η < 1, c >0, and a
set Λ⊂X such that
[ f(Z_n+1)- f(Z_n) | Z_n = z ] ≤ - c f (z)^η, for all z
∈X∖Λ .
Then for any γ∈ [0,1/(1-η)), [ τ_Z(Λ)^γ ] < ∞.
Apply Corollary 2.7.3 of <cit.> to the process X_n = f(Z_n).
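In practice, before attempting a proof one can probe a candidate Lyapunov function by estimating the conditional drift in (<ref>) by Monte Carlo. The sketch below (an illustrative aside) does this for a reflected simple random walk with f(z) = √(1+z), a toy example of ours; the estimated drifts are negative and of the order -(1/8)(1+z)^{-3/2}, so the theorem applies with η = -3.

```python
import numpy as np

def estimate_drift(step, f, states, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of E[ f(Z_1) - f(Z_0) | Z_0 = z ] at a few states,
    as one might do to probe a Foster-Lyapunov inequality empirically before
    attempting a proof.  All names and the sample sizes are our own choices."""
    rng = np.random.default_rng(seed)
    return {z: np.mean([f(step(z, rng)) - f(z) for _ in range(n_samples)])
            for z in states}

def srw_step(z, rng):
    """Reflected simple random walk on {0,1,2,...} (a toy chain of ours)."""
    return max(z + (1 if rng.random() < 0.5 else -1), 0)

print(estimate_drift(srw_step, lambda z: np.sqrt(1.0 + z), states=[3, 10, 30]))
# The drift is about -(1/8)(1+z)^(-3/2) = -c f(z)^(-3), i.e. eta = -3 in the theorem,
# giving finite moments of tau of every order gamma < 1/4 (valid, though not sharp, here).
```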
Lower tail bounds are more delicate.
A general approach to lower bounds of passage times is developed in <cit.>, and its origins can be traced in one form or another back to Doob and, primarily, Lamperti <cit.>.
Theorem <ref> below requires a Lyapunov function f for which conditions (<ref>)–(<ref>) hold, which can
be understood as requiring that f(Z) can return from a distant level to near the origin
no faster than diffusively.
If this is so, then a lower bound on the probability that an excursion of f(Z) reaches level n^1/2 (say)
should provide a lower bound on the probability that the duration of the excursion is of order n.
Theorem <ref> makes this precise.
In applications of Theorem <ref>, one also needs
a lower bound on the probability of reaching a distant level during an excursion,
and this is often provided by an optional stopping argument: for example, if f^α (Z)
is close to a martingale, then one would obtain a tail bound of the form ( τ_Z(Λ) ≥ n ) ≳ n^-α/2, roughly speaking (in our setting, <ref> gives such a bound).
Let (Z_n,n ∈) be a Markov chain on X.
Suppose that there exist
f: X→ (0,∞), constants B ∈, c ∈, and a
set Λ⊂X such that
[ ( f(Z_n+1)- f(Z_n) )^2 | Z_n = z ] ≤ B, for all z
∈X∖Λ ;
[ f(Z_n+1)- f(Z_n) | Z_n = z ] ≥ - c/f(z), for all z
∈X∖Λ .
Then there exists C ∈ (0,∞) such that
( τ_Z(Λ) ≥ n ) ≥1/2( max_0 ≤ m ≤τ_Z(Λ) f (Z_m) ≥ C n^1/2) .
Apply Lemma 2.7.7 of <cit.> to the process X_n = f(Z_n).
§ GEOMETRICAL VISION
In this section we augment the outline of our proof strategy described in <ref>
with some additional
geometrical intuition behind the parameters β and β_1 in the specification via (<ref>)–(<ref>) of our Lyapunov functions.
The geometrical point of view of the Foster–Lyapunov approach
for studying stability of multidimensional Markov processes was already exhibited by Kingman <cit.>.
Here we elaborate the use of the family of harmonic functions from <ref>.
Roughly speaking, to classify Z on ^2 as recurrent or transient via Theorem <ref>, one must exhibit a Lyapunov function f : ^2 → such that, on average, f(Z) is non-increasing with respect to the evolution of the process away from a bounded set. The corresponding level sets Γ_f(a) := { z ∈^2 f(z) ≤ a}, as a decreases, then indicate the long-time behaviour of the process;
if level sets are bounded, we get recurrence.
In our context, as described in <ref>,
we consider Lyapunov functions based on f(z) = h(T_Σ(z)) where h is the harmonic function given by (<ref>), and is a linear transformation of the quadrant .
For the purposes of intuition, we prefer to visualize not f acting on Z in the quadrant,
but h acting on = (Z) in the wedge . Then, as explained in <ref>,
h() will be almost a martingale in the interior of the wedge, and so h^α (), α≈ 1,
provides a good candidate for a super/submartingale in the interior.
Moreover, as discussed in <ref>, to obtain tail bounds on passage times
it is important to have not only an approximate martingale, but also to identify a scale on which the process is approximately diffusive, and it turns out that here one takes h^1/β, and hence β enters into the estimates for tails of passage times.
So far, β_1 , β_2 are constrained by the requirement that h is positive on , but there is still considerable freedom.
The (crucial) constraints on the choice of β are provided by ensuring the almost-martingale
property extends to the boundaries, and here is where the reflection angles enter.
The freedom in β_1, β_2 allows us to arrange the level curves of h
to cross the boundaries of the wedge at specified angles
(cutting the wedge into two components, one bounded and one unbounded), and these angles we aim to match with the reflection angles.
Recall that φ_0 is the angular width of the wedge ,
and that φ_0 is acute if ρ < 0 and obtuse if ρ >0 (cf. Figure <ref>).
Figures <ref> (for acute φ_0) and <ref> (obtuse φ_0) depict level curves for h defined in (<ref>).
The angles at which the level curves of h intersect the boundaries of the wedge are determined
by the gradient ∇ h; see (<ref>) and Figure <ref>.
Once β_1, β_2 are chosen to match the reflection angles at the boundaries, we have that h()
is almost a martingale everywhere, and hence a small modification f = h^α, α <1, provides a supermartingale outside a bounded set.
Recurrence/transience is then determined by whether the level sets Γ_f(a) are bounded or unbounded sets;
neglecting critical parameter cases, it is enough to look at Γ_h(a). If β_1 + β_2 >0 the level sets Γ_h(a) are bounded (yielding recurrence), and if β_1 + β_2 <0 the level sets Γ_h(a) are unbounded (yielding transience).
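The boundedness dichotomy for the level sets can also be seen numerically by evaluating h along a ray of the wedge: h grows when β_1 + β_2 > 0 and decays to 0 when β_1 + β_2 < 0. The following sketch is purely illustrative, and the wedge angle and parameter values in it are arbitrary choices of ours.

```python
import numpy as np

def h_polar(r, theta, beta, beta1):
    return r**beta * np.cos(beta * theta - beta1)

phi0 = 2.0                                     # wedge angle, arbitrary for this illustration
for beta1, beta2 in [(0.4, 0.3), (-0.4, -0.3)]:
    beta = (beta1 + beta2) / phi0
    vals = [h_polar(r, phi0 / 2, beta, beta1) for r in (1e1, 1e3, 1e5)]
    kind = "bounded level sets (recurrence)" if beta > 0 else "unbounded level sets (transience)"
    print(round(beta, 3), kind, np.round(vals, 4))
# h grows along every ray of the wedge when beta_1 + beta_2 > 0, and decays to 0 when it is < 0.
```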
§ ACKNOWLEDGEMENTS
This work was supported by EPSRC grant EP/W00657X/1.
|
http://arxiv.org/abs/2307.05243v1 | 20230709184055 | Inconsistency with De Sitter Spacetime of "Gravitational Pair Production and Black Hole Evaporation" | [
"Mark P. Hertzberg",
"Abraham Loeb"
] | gr-qc | [
"gr-qc",
"astro-ph.HE",
"hep-ph",
"hep-th"
] | |
http://arxiv.org/abs/2307.06206v1 | 20230712145221 | SepVAE: a contrastive VAE to separate pathological patterns from healthy ones | [
"Robin Louiset",
"Edouard Duchesnay",
"Antoine Grigis",
"Benoit Dufumier",
"Pietro Gori"
] | cs.CV | [
"cs.CV",
"stat.ML"
] |
[
SepVAE: a contrastive VAE to separate pathological patterns from healthy ones
equal*
Robin Louisetcea,tel
Edouard Duchesnaycea
Antoine Grigiscea
Benoit Dufumiercea,tel
Pietro Goritel
Robin [email protected]
ceaNeurospin, Joliot Institute, CEA, University Paris-Saclay, Gif-Sur-Yvette, 91190, France
telLTCI, Télécom Paris, Institut Polytechnique de Paris, Palaiseau, 91120, France
Machine Learning, Variational Auto-Encoder, Neuro-psychiatric, Biomedical imaging
0.3in
]
Contrastive Analysis VAEs (CA-VAEs) are a family of variational auto-encoders (VAEs) that aim at separating the factors of variation which are common to a background dataset (BG) (i.e., healthy subjects) and a target dataset (TG) (i.e., patients) from the ones that only exist in the target dataset.
To do so, these methods separate the latent space into a set of salient features (i.e., proper to the target dataset) and a set of common features (i.e., exist in both datasets). Currently, all models fail to prevent the sharing of information between latent spaces effectively and to capture all salient factors of variation.
To this end, we introduce two crucial regularization losses: a disentangling term between common and salient representations and a classification term between background and target samples in the salient space. We show a better performance than previous CA-VAEs methods on three medical applications and a natural images dataset (CelebA). Code and datasets are available on GitHub <https://github.com/neurospin-projects/2023_rlouiset_sepvae>.
§ INTRODUCTION
One of the goals of unsupervised learning is to learn a compact, latent representation of a dataset, capturing the underlying factors of variation. Furthermore, the estimated latent dimensions should describe distinct, noticeable, and semantically meaningful variations. One way to achieve that is to use a generative model, like Variational Auto-Encoders (VAEs) <cit.>, <cit.> and disentangling methods <cit.>, <cit.>, <cit.>,<cit.>, <cit.>, <cit.>, <cit.>. Differently from these methods, which use a single dataset, in Contrastive Analysis (CA), researchers attempt to distinguish the latent factors that generate a target (TG) and a background (BG) dataset. Usually, it is assumed that target samples comprise additional (or modified) patterns with respect to background data. The goal is thus to estimate the common generative factors and the ones that are target-specific (or salient).
This means that background data are fully encoded by generative factors that are also shared with the target data. On the other hand, target samples are assumed to be partly generated from factors of variability that are strictly their own, which we call target-specific or salient factors of variability. This formulation is particularly useful in medical applications where clinicians are interested in separating common (i.e., healthy) patterns from the salient (i.e., pathological) ones in an interpretable way.
For instance, consider two sets of data: 1) healthy neuro-anatomical MRIs (BG=background dataset) and 2) Alzheimer-affected patients' MRIs (TG=target dataset).
As in <cit.>, <cit.>, given these two datasets, neuroscientists would be interested in distinguishing common factors of variation (e.g., effects of aging, education or gender) from Alzheimer's-specific markers (e.g., temporal lobe atrophy, an increase of beta-amyloid plaques). Until recently, separating the various latent mechanisms that drive neuro-anatomical variability in neuro-degenerative disorders was considered hardly feasible. This can be attributed to the intertwining between the variability due to natural aging and the variability due to neurodegenerative disease development. The combined effects of both processes make potential discoveries of novel bio-markers hard to interpret.
The objective of developing such a Contrastive Analysis method would be to help separate these processes, and thus to identify correlations between neuro-biological markers and pathological symptoms. In the common feature space, aging patterns should correlate with normal cognitive decline, while salient features (i.e., Alzheimer-specific patterns) should correlate with pathological cognitive decline.
Besides medical imaging, Contrastive Analysis (CA) methods cover various kinds of applications, such as pharmacology (placebo versus medicated populations), biology (pre-intervention vs. post-intervention cohorts) <cit.>, and genetics (healthy vs. diseased populations <cit.>, <cit.>).
§ RELATED WORKS
Variational Auto-Encoders (VAEs) <cit.> have advanced the field of unsupervised learning by generating new samples and capturing the underlying structure of the data onto a lower-dimensional data manifold. Compared to linear methods (e.g., PCA, ICA), VAEs make use of deep non-linear encoders to capture non-linear relationships in the data, leading to better performance on a variety of tasks.
Disentangling methods <cit.> enable learning the underlying factors of variation in the data. While disentangling <cit.> is a desirable property for improving the control of the image generation process and the interpretation of the latent space
<cit.>, these methods are usually based on a single dataset, and they do not explicitly use labels or multiple datasets to effectively estimate and separate the common and salient factors of variation.
Semi- and weakly-supervised VAEs <cit.> integrate class labels into their training. However, these methods only allow conditional generation and better semantic expressivity, rather than addressing the separation of the factors of variation between distinct datasets.
Contrastive Analysis (CA) works are explicitly designed to identify patterns that are unique to a target dataset compared to a background dataset. First attempts <cit.> employed linear methods in order to identify a projection that captures the variance of the target dataset while minimizing the background information expressivity. However, due to their linearity, these methods had reduced learning expressivity and were also unable to produce satisfactory generation.
Contrastive VAEs <cit.> employ deep encoders in order to capture higher-level semantics. They usually rely on a latent space split into two parts, a common part and a salient part, produced by two different encoders. Early methods, such as <cit.>, employed two decoders (common and salient) and directly summed the common and salient reconstructions in the input space. This is a very strong assumption, probably wrong when working with high-dimensional and complex images.
For this reason, subsequent works used a single decoder, which takes as input the concatenation of both latent spaces. Importantly, when seeking to reconstruct background inputs, the decoder is fed with the concatenation of the common part and an informationless reference vector s'. This is usually chosen to be a null vector in order to reconstruct a null (i.e., empty) image by setting the decoder's biases to 0. Furthermore, to fully enforce the constraints and assumptions of the underlying CA generative model, previous methods have proposed different regularizations. Here, we analyze the most important ones with their advantages and shortcomings:
Minimizing background's variance in the salient space
Pioneering works <cit.> have shown an inconsistency between the encoding and the decoding task. While background samples are reconstructed from s', the salient encoder does not encourage the background salient latents to be equal to s'. To fix this inconsistency, later works <cit.> have shown that explicitly nullifying the background variance in the salient space is beneficial. This regularization is necessary to prevent salient features from explaining the background variability, but it is not sufficient to prevent information leakage between common and salient spaces, as shown in <cit.>.
Independence between common and salient spaces
Only one work <cit.> proposed to prevent information leakage between the common and salient space by minimizing the total correlation (TC) between q_ϕ_c,ϕ_s(c, s | x) and q_ϕ_c(c|x) × q_ϕ_s(s|x), in the same fashion as in FactorVAE <cit.>. This requires independently training a discriminator D_λ(.) that aims at approximating the ratio between the joint distribution q(x) = q_ϕ_c,ϕ_s(c, s| x) and the product of the marginal posteriors q̅(x) = q_ϕ_c(c| x) × q_ϕ_s(s| x) via the density-ratio trick <cit.>.
In practice, <cit.>'s code does not use an independent optimizer for λ, which undermines the original contribution. Moreover, when incorrectly estimated, the TC can become negative, and its minimization can be harmful to the model's training.
Matching background and target common patterns
Another work <cit.> has proposed to encourage the distribution in the common space to be the same across target samples and background samples. Mathematically, it is equivalent to minimizing the KL between q_ϕ_c(c | y=0) and q_ϕ_c(c | y=1) (or between q_ϕ_c(c) and q_ϕ_c(c | y)).
In practice, we argue that it may encourage undesirable biases to be captured by salient factors rather than common factors. For example, let's suppose that we have healthy subjects (background dataset) and patients (target dataset) and that patients are composed of both young and old individuals, whereas healthy subjects are only old. We would expect the CA method to capture the normal aging patterns (i.e., the bias) in the common space. However, forcing both q_ϕ_c(c | x, y=0) and q_ϕ_c(c | x, y=1) to follow the same distribution in the common space would probably lead to a biased distribution and thus to leakage of information between salient and common factors (i.e., aging could be considered as a salient factor of the patient dataset).
This behavior is not desirable, and we believe that the statistical independence between common and salient space is a more robust property.
Contributions
Our contributions are three-fold:
∙ We develop a new Contrastive Analysis method: SepVAE, which is supported by a sound and versatile Evidence Lower BOund maximization framework.
∙ We identify and implement two properties: the salient space discriminability and the salient/common independence, that have not been successfully addressed by previous Contrastive VAE methods.
∙ We provide a fair comparison with other SOTA CA-VAE methods on 3 medical applications and a natural image experiment.
§ CONTRASTIVE VARIATIONAL AUTOENCODERS
Let (X,Y)={(x_i,y_i) }_i=1^N be a data-set of images x_i associated with labels y_i ∈{0, 1}, 0 for background and 1 for target.
Both background and target samples are assumed to be i.i.d. from two different and unknown distributions that depend on two latent variables: c_i ∈𝐑^D_c and s_i ∈𝐑^D_s.
Our objective is to have a generative model x_i ∼ p_θ (x |y_i, c_i, s_i) so that: 1- the common latent vectors C = {c_i}_i=1^N should capture the common generative factors of variation between the background and target distributions and fully encode the background samples and 2- the salient latent vectors S = {s_i}_i=1^N should capture the distinct generative factors of variation of the target set (i.e., patterns that are only present in the target dataset and not in the background dataset).
Similar to previous works <cit.>, we assume the generative process: p_θ(x, y, c, s) = p_θ(x | c, s, y) p_θ(c) p_θ(s | y) p(y).
Since p_θ(c, s| x, y) is hard to compute in practice, we approximate it using an auxiliary parametric distribution q_ϕ(c, s | x, y) and directly derive the Evidence Lower Bound of log p(x, y).
Based on this generative latent variable model, one can derive the ELBO of the marginal log-likelihood log p(x, y),
- log p_θ(x, y) ≤ 𝐄_c, s ∼ q_ϕ_c, ϕ_s(c, s|x, y)[ log( q_ϕ_c, ϕ_s(c, s|x, y) / p_θ(x, y, c, s) ) ]
where we have introduced an auxiliary parametric distribution q_ϕ(c, s | x, y) to approximate p_θ(c, s| x, y).
From there, we can develop the lower bound into three terms, a conditional reconstruction term, a common space prior regularization, and a salient space prior regularization:
- log p_θ(x, y) ≤ - 𝐄_c, s ∼ q_ϕ_c, ϕ_s(c, s|x, y)[ log p_θ(x | y, c, s) ]  [a) Conditional Reconstruction]
+ KL(q_ϕ_c(c|x) || p_θ(c))  [b) Common prior] + KL(q_ϕ_s(s|x, y) || p_θ(s|y))  [c) Salient prior]
Here, we assume the independence of the auxiliary distributions (i.e.: q_ϕ_c, ϕ_s(c, s|x,y) = q_ϕ_c(c|x) q_ϕ_s(s|x,y)) and prior distributions (i.e.: p_θ(c, s)=p_θ(c)p_θ(s)). Both p_θ (x |y_i, c_i, s_i) (i.e., single decoder) and q_ϕ_c(c|x) q_ϕ_s(s|x,y) (i.e., two encoders) are assumed to follow a Gaussian distribution parametrized by a neural network. To reinforce the independence assumption between c and s, we introduce a Mutual Information regularization term KL(q(c, s)||q(c) q(s)). Theoretically, this term is similar to the one in <cit.>. This property is desirable in order to ensure that the information is well separated between the latent spaces. However, in <cit.>, the Mutual Information estimation and minimization are done simultaneously [In
<cit.>, Algorithm 1 suggests that the Mutual Information estimation and minimization depend on two distinct parameters update. However, in practice, in their code, a single optimizer is used. This is also confirmed in Sec. 3, where authors write: "discriminator is trained simultaneously with the encoder and decoder neural networks".]. In this paper, we argue that the estimation of the Mutual Information requires the introduction of an independent optimizer, see Sec. <ref>.
To further reduce the overlap of target and common distributions on the salient space, we also introduce a salient classification loss defined as 𝐄_s ∼ q_ϕ_s(s|x, y)log p(y | s).
By combining all these losses together, we obtain the final loss ℒ:
ℒ = - 𝐄_c, s ∼ q_ϕ_c, ϕ_s(c, s|x,y)[ log p_θ(x | c, s, y) ]  [a) Conditional Reconstruction]
+ KL(q(c, s) || q(c) q(s))  [e) Mutual Information] - 𝐄_s ∼ q_ϕ_s(s|x,y)[ log p_θ(y | s) ]  [d) Salient Classification]
+ KL(q_ϕ_c(c|x) || p_θ(c))  [b) Common Prior] + KL(q_ϕ_s(s|x, y) || p_θ(s|y))  [c) Salient Prior]
§.§ Conditional reconstruction
The reconstruction loss term is given by - 𝐄_c,s ∼ q_ϕ_c, ϕ_s(c, s | x, y)log p_θ(x | c, s, y). Given an image x (and a label y), a common and a salient latent vector can be drawn from q_ϕ_c, ϕ_s with the help of the reparameterization trick.
We assume that p(x | c, s, y) ∼ 𝒩(d_θ([c, ys+(1-y)s']), I), i.e., p_θ(x | c, s, y) follows a Gaussian distribution parameterized by θ, centered on μ_x̂ = d_θ([c, ys+(1-y)s']) with identity covariance matrix, where d_θ is the decoder and [.,.] denotes concatenation.
Therefore, by developing the reconstruction loss term, we obtain the mean squared error between the input and the reconstruction: ℒ_rec = ∑_i=1^N || x_i - d_θ([c_i, y_i s_i+(1-y_i)s'])||^2_2. Importantly, for background samples, we set the salient latent vectors to s'=0. This choice isolates the background factors of variability in the common space only.
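The following minimal PyTorch-style sketch (an illustration under our own naming, not the released implementation) shows this conditional decoding step: background samples (y=0) are decoded from [c, s'] with s'=0, target samples (y=1) from [c, s], and the Gaussian likelihood with identity covariance reduces to a squared error.

import torch

def conditional_reconstruction(decoder, x, y, c, s):
    s_prime = torch.zeros_like(s)                     # informationless reference s' = 0
    y_col = y.float().unsqueeze(1)                    # shape (N, 1) for broadcasting
    salient_in = y_col * s + (1.0 - y_col) * s_prime  # y*s + (1-y)*s'
    x_hat = decoder(torch.cat([c, salient_in], dim=1))
    return ((x - x_hat) ** 2).sum()                   # squared reconstruction error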
§.§ Common prior
By assuming p(c) ∼ 𝒩(0,I) and q_ϕ_c(c | x) ∼ 𝒩(μ_ϕ(x), σ_ϕ(x)), the KL loss has a closed-form solution, as in standard VAEs. Here, both μ_ϕ(x) and σ_ϕ(x) are outputs of the encoder e_ϕ_c.
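As a small illustration (standard VAE material, assuming a log-variance parameterization of the encoder output), the closed-form term can be written as:

import torch

def kl_common_prior(mu_c: torch.Tensor, logvar_c: torch.Tensor) -> torch.Tensor:
    # KL( N(mu_c, sigma_c^2) || N(0, I) ), summed over latent dimensions and batch
    return -0.5 * torch.sum(1.0 + logvar_c - mu_c.pow(2) - logvar_c.exp())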
§.§ Salient prior
To compute this regularization, we first need to develop p_θ(s) = ∑_y p(y) p_θ(s|y), where we assume that p(y) follows a Bernoulli distribution with probability equal to 0.5. Thus, the salient prior reduces to a formula that only depends on p_θ(s|y), which is conditioned by the knowledge of the label (0: background, 1: target). This allows us to
distinguish between the salient priors of background samples (p(s | y=0)) and target samples (p(s | y=1)).
Similar to other CA-VAE methods, we assume that p(s | y=1) ∼ 𝒩(0,I) and, as in <cit.>, that p(s | y=0) ∼ 𝒩(s', √(σ_p) I), with s'=0 and √(σ_p)<1, namely a Gaussian distribution centered on an informationless reference s' with a small constant variance σ_p.
We preferred it to a Delta function δ(s=s') (as in <cit.>) because it eases the computation of the KL divergence (i.e., closed form) and
it also means that we tolerate a small salient variation in the background (healthy) samples. In real applications, in particular medical ones, diagnosis labels can be noisy, and mild pathological patterns may exist in some healthy control subjects. Using such a prior, we tolerate these possible (erroneous) sources of variation.
Furthermore, one could also extend the proposed method to a continuous y, for instance, between 0 and 1, describing the severity of the disease. Indeed, practitioners could define a function σ_p(y) that would map the severity score y to a salient prior standard deviation (e.g., σ_p(y) = y). In this way, we could extend our framework to the case where pathological variations would follow a continuum from no (or mild) to severe patterns.
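A hedged sketch of this label-dependent salient prior term is given below; it reads √(σ_p) as the prior standard deviation, so σ_p enters as a variance, and the function and argument names are our own:

import torch

def kl_salient_prior(mu_s, logvar_s, y, sigma_p=0.01):
    # Prior variance: 1 for target samples (y=1), sigma_p for background samples (y=0)
    y_col = y.float().unsqueeze(1)                       # (N, 1)
    prior_var = y_col + (1.0 - y_col) * sigma_p
    # Closed-form KL between diagonal Gaussians N(mu_s, sigma_s^2) and N(0, prior_var)
    kl = 0.5 * (torch.log(prior_var) - logvar_s
                + (logvar_s.exp() + mu_s.pow(2)) / prior_var - 1.0)
    return kl.sum()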
§.§ Salient classification
In the salient prior regularization, as in previous works, we encourage background and target salient factors to match two different Gaussian distributions, both centered at 0 (we assume s'=0) but with different covariances.
However, we argue that target salient factors should be further encouraged to differ from the background ones in order to reduce the overlap of target and common distributions on the salient space and enhance the expressivity of the salient space.
To encourage target and background salient factors to be generated from different distributions,
we propose to minimize a Binary Cross Entropy loss to distinguish the target from background samples in the salient space.
Assuming that p(y | s) follows a Bernoulli distribution parameterized by f_ξ(s), a 2-layers classification Neural Network, we obtain a Binary Cross Entropy (BCE) loss between true labels y and predicted labels ŷ = f_ξ(s).
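A minimal sketch of this salient classifier and its BCE loss follows; the hidden width is an illustrative assumption:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SalientClassifier(nn.Module):
    def __init__(self, salient_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(salient_dim, hidden_dim),
                                 nn.ReLU(),
                                 nn.Linear(hidden_dim, 1))

    def forward(self, s):
        return self.net(s).squeeze(1)  # logit of p(y=1 | s)

def salient_classification_loss(classifier, s, y):
    return F.binary_cross_entropy_with_logits(classifier(s), y.float())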
§.§ Mutual Information
To promote independence between c and s, we minimize their mutual information, defined as the KL divergence between the joint distribution q(c, s) and the product of their marginals q(c) q(s).
However, computing this quantity is not trivial, and it requires a few tricks in order to correctly estimate and minimize it. As in <cit.>, it is possible to take inspiration from FactorVAE <cit.>, which proposes to estimate the density ratio between a joint distribution and the product of the marginals. In our case, we seek to enforce the independence between two sets of latent variables rather than between each latent variable of a set. The density-ratio trick <cit.> allows us to estimate the quantity inside the log in Eq. <ref>. First, we sample from q(c, s) by randomly choosing a batch of images (x_i, y_i) and drawing their latent factors [c_i, s_i] from the encoders e_ϕ_c and e_ϕ_s. Then, we sample from q(c) q(s) by using the same batch of images and shuffling the latent codes among images (e.g., [c_1, s_2], [c_2, s_3], etc.). Once we have obtained samples from both distributions, we train an independent classifier D_λ([c, s]) to discriminate the samples drawn from the two distributions by minimizing a BCE loss. The classifier is then used to approximate the ratio in the KL divergence, and we can train the encoders e_ϕ_c and e_ϕ_s to minimize the resulting loss:
ℒ_ℳℐ = 𝔼_q(c, s)[ log( q(c, s) / (q(c) q(s)) ) ]
≈ ∑_i ReLU( log( D_λ([c_i, s_i]) / (1 - D_λ([c_i, s_i])) ) )
In <cit.>, while Alg.1 of the original paper describes two distinct gradient updates, it is written that "This discriminator is trained simultaneously with the encoder and decoder neural networks". In practice, a single optimizer is used in their training code. In our work, we use an independent optimizer for D_λ, in order to ensure that the density ratio is well estimated. Furthermore, we freeze D_λ's parameters when minimizing the Mutual Information estimate. The pseudo-code is available in Alg. <ref>, and a visual explanation is shown in Fig.<ref>.
§ EXPERIMENTS
§.§ Evaluation details
Here, we evaluate the ability of SepVAE to separate common from target-specific patterns on three medical and one natural (CelebA) imaging datasets. We compare it with the only SOTA CA-VAE methods whose code is available: MM-cVAE <cit.> and ConVAE [ConVAE implemented with correct Mutual Information minimization, i.e.: with independently trained discriminator.] <cit.>.
For quantitative evaluation, we use the fact that the information about attributes, clinical variables, or subtypes (e.g. glasses/hats in CelebA) should be present either in the common or in the salient space. Once the encoders/decoder are trained, we evaluate the quality of the representations in two steps. First, we train a Logistic (resp. Linear) Regression on the estimated salient and common factors of the training set to predict the attribute presence (resp. attribute value). Then, we evaluate the classification/regression model on the salient and common factors estimated from a test set. By evaluating the performance of the model, we can understand whether the information about the attributes/variables/subtype has been put in the common or salient latent space by the method. Furthermore, we report the background (BG) vs target (TG) classification accuracy. To do so, a 2-layer MLP is independently trained, except for SepVAE, where salient space predictions are directly estimated by the classifier.
In all Tables, for categorical variables, we compute (Balanced) Accuracy scores (=(B-)ACC), or Area-under-Curve scores (=AUC) if the target is binary. For continuous variables, we use Mean Absolute Error (=MAE). Best results are highlighted in bold, second best results are underlined. For CelebA and Pneumonia experiments, means and standard deviations are computed on the results of 5 different runs in order to account for model initializations. For neuro-psychiatric experiments, means and standard deviations are computed using a 5-fold cross-validation evaluation scheme.
Qualitatively, the model can be evaluated by looking at the full image reconstruction (common+salient factors) and by fixing the salient factors to s' for target images. Comparing full reconstructions with common-only reconstructions allows the user to interpret the patterns encoded in the salient factors s (see Fig.<ref> and Fig.<ref>).
§.§ CelebA - glasses vs hat identification
To compare with <cit.>, we evaluated our performance on the CelebA with attributes dataset. It contains two sets, target and background, from a subset of CelebA <cit.>, one with images of celebrities wearing glasses or hats (target) and the other with images of celebrities not wearing these accessories (background). The discriminative information allowing the classification of glasses vs. hats should only be present in the salient latent space. We demonstrate that we successfully encode these attributes in the salient space with quantitative results in Tab. <ref>, and with reconstruction results in Fig. <ref>. Furthermore, in Fig. <ref>, we show that we effectively minimize the background dataset variance in the salient space compared to MM-cVAE[Our evaluation process is different from <cit.> as their TEST set has been used during the model training. Indeed, the TRAIN / TEST split used for training the Logistic Regression is performed after the model has been fitted on the TRAIN+TEST set. Besides, we were not able to reproduce their results.].
§.§ Identify pneumonia subgroups
We gathered 1342 healthy X-Ray radiographies (background dataset) and 2684 pneumonia radiographies (target dataset) from <cit.>. Two different sub-types of pneumonia constitute this set, viral (1342 samples) and bacterial (1342 samples), see Fig.<ref>. Radiographies were selected from a cohort of pediatric patients aged between one and five years old from Guangzhou Women and Children’s Medical Center, Guangzhou. TRAIN set images were graded by 2 expert radiologists and the independent TEST set was graded by a third expert to account for label uncertainty. In Tab. <ref>, we demonstrate that our method is able to produce a salient space that captures the pathological variability as it allows distinguishing the two subtypes: viral and bacterial pneumonia.
Ablation study In the lower part of Tab. <ref>, we propose to disable different components of the model to show that the full model SepVAE is always better on average. no MI means that we disabled the Mutual Information minimization loss (no Mutual Information Minimization). no CLSF means that we disabled the classification loss on the salient space (no Salient Classification). no REG means that we disabled the regularization loss that forces the background samples to align with an informationless vector s' = 0 (no Salient Prior).
§.§ Parsing neuro-anatomical variability in psychiatric diseases
The task of identifying consistent correlations between neuro-anatomical biomarkers and observed symptoms in psychiatric diseases is important for developing more precise treatment options. Separating the different latent mechanisms that drive neuro-anatomical variability in psychiatric disorders is a challenging task. Contrastive Analysis (CA) methods such as ours have the potential to identify and separate healthy from pathological neuro-anatomical patterns in structural MRIs. This ability could be a key component to push forward the understanding of the mechanisms that underlie the development of psychiatric diseases.
Given a background population of Healthy Controls (HC) and a target population suffering from a Mental Disorder (MD), the objective is to capture the pathological factors of variability in the salient space, such as psychiatric and cognitive clinical scores, while confining the patterns related to demographic variables, such as age and sex, or acquisition sites, to the common space. For each experiment, we gather T1w anatomical VBM <cit.> pre-processed images resized to 128x128x128 of HC and MD subjects. We divide them into 5 TRAIN, VAL splits (0.75, 0.25) and evaluate in a cross-validation scheme the performance of SepVAE and the other SOTA CA-VAE methods. Please note that this is a challenging problem, especially due to the high dimensionality of the input and the scarcity of the data. Notably, the measures of psychiatric and cognitive clinical scores are only available for some patients, making them scarce and precious information.
§.§.§ Schizophrenia:
We merged images of schizophrenic patients (TG) and healthy controls (BG) from the datasets SCHIZCONNECT-VIP <cit.> and BSNIP <cit.>. Results in Tab. <ref> show that the salient factors estimated using our method better predict schizophrenia-specific variables of interest: SAPS (Scale of Positive Symptoms), SANS (Scale of Negative Symptoms), and diagnosis. On the other hand, salient features are shown to be poorly predictive of demographic variables: age, sex, and acquisition site. It paves the way toward a better understanding of schizophrenia disorder by capturing neuro-anatomical patterns that are predictive of the psychiatric scales while not being biased by confound variables.
§.§.§ Autism:
Second, we
combine patients with autism from ABIDE1 and ABIDE2 <cit.> (TG) with healthy controls (BG). In Tab. <ref>, SepVAE's salient latents better predict the diagnosis and the clinical variables, such as ADOS (Autism Diagnosis Observation Schedule) and ADI Social (Autism Diagnosis Interview Social) which quantifies the social interaction abilities. On the other hand, salient latents poorly infer irrelevant demographic variables (age, sex, and acquisition site), which is a desirable feature for the development of unbiased diagnosis tools.
§ CONCLUSIONS AND PERSPECTIVES
In this paper, we developed a novel CA-VAE method entitled SepVAE.
Building on Contrastive Analysis methods, we first criticize previously proposed regularizations concerning (1) the matching of target and background distributions in the common space and (2) the overlapping of target and background priors in the salient space.
We integrate these losses along with the maximization of the ELBO of the joint log-likelihood. We demonstrate superior performances on a radiological application and two neuro-psychiatric applications, where we successfully separate the pathological information of interest (diagnosis, pathological scores) from the “nuisance" common variations (e.g., age, site). The development of methods like ours seems very promising and offers a large spectrum of perspectives. For example, it could be further extended to multiple target datasets (e.g., healthy population vs. several pathologies, to obtain a continuum healthy - mild - severe pathology) and to other models, such as GANs, for improved generation quality. Finally, to be entirely trustworthy, the model must be identifiable, namely, we need to know the conditions that allow us to learn the correct joint distribution over observed and latent variables. We plan to follow <cit.> to obtain theoretic guarantees of identifiability of our model.
Supplementary
§ CONTEXT ON VARIATIONAL AUTO-ENCODERS
Variational Autoencoders (VAEs) are a type of generative model that can be used to learn a compact, continuous latent representation of a dataset. They are based on the idea of using an encoder network to map input data points x (e.g: an image) to a latent space z, and a decoder network to map points in the latent space back to the original data space.
Mathematically, given a dataset X = {x_i}_i=1^N and a VAE model with encoder q_ϕ(z|x) and decoder p_θ(x|z), the VAE seeks ϕ, θ to maximize a lower bound of the input distribution likelihood:
log p_θ(x) ≥ 𝐄_z ∼ q_ϕ(z|x)[ log p_θ(x|z) ] - KL(q_ϕ(z|x) || p_θ(z))
where p_θ(x|z) is the likelihood of the input space, and KL(q_ϕ(z|x) || p(z)) is the Kullback-Leibler divergence between q_ϕ(z|x), the approximation of the posterior distribution, and p(z) the prior over the latent space (often chosen to be a standard normal distribution).
The first term in the objective function, 𝐄_z ∼ q_ϕ(z|x)log p_θ(x|z), is the negative reconstruction error, which measures how well the decoder can reconstruct the input data from the latent representation. The second term, KL(q_ϕ(z|x) || p(z)), encourages the encoder distribution to be similar to the prior distribution, which helps to prevent overfitting and encourage the learned latent representation to be continuous and smooth.
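For illustration only (textbook VAE material, not tied to SepVAE), the two terms of the bound can be computed with the reparameterization trick as follows:

import torch
import torch.nn.functional as F

def vae_loss(decoder, x, mu_z, logvar_z):
    eps = torch.randn_like(mu_z)
    z = mu_z + torch.exp(0.5 * logvar_z) * eps              # z ~ q_phi(z | x) via reparameterization
    rec_term = F.mse_loss(decoder(z), x, reduction="sum")   # -E[log p_theta(x | z)] up to constants
    kl_term = -0.5 * torch.sum(1.0 + logvar_z - mu_z.pow(2) - logvar_z.exp())
    return rec_term + kl_term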
§ SALIENT POSTERIOR SAMPLING FOR BACKGROUND SAMPLES
In Sec. 3.3, we motivated the choice of a peaked Gaussian prior for the salient background distribution with a user-defined σ_p. This way, the derivation of the Kullback-Leibler divergence is directly analytically tractable, as in standard VAEs.
To simplify the optimization scheme, we could also set and freeze the standard deviations σ_q^y=0 of the salient space of the background samples. This reduces the Kullback-Leibler divergence between q_ϕ(s | x, y=0) and p_θ(s | x, y=0) to a 1/σ_p-weighted Mean Squared Error between μ_s(x|y=0) and s': ||μ_s(x_i|y=0) - s'||_2^2/σ_p. In our code, we make this choice as it simplifies the training scheme (σ_q^y=0 does not need to be estimated). In the case where there exists a continuum between healthy and diseased populations, σ_q^y=0 should be estimated.
Also, the choice of a frozen σ_q^y=0 allows controlling the radius of the classification boundary between background and target samples in the salient space. Indeed, the classifier is fed with samples from the target distributions (q_ϕ_s(s|x,y=1) ∼ N(μ_s(x), σ_s(x))) and the background distributions (q_ϕ_s(s|x,y=0) ∼ N(μ_s(x|y=0), σ_q)). This implicitly avoids the overlap of both distributions with a margin proportional to σ_q. See Fig. <ref> for a visual explanation.
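A small sketch of the resulting background-only penalty (with s'=0 and σ_p treated as a variance, as assumed above) is:

import torch

def background_salient_penalty(mu_s, y, sigma_p=0.01):
    bg = (y == 0)
    return (mu_s[bg] ** 2).sum() / sigma_p   # || mu_s - s' ||_2^2 / sigma_p with s' = 0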
§ IMPLEMENTATION DETAILS
§.§ CelebA glasses and hat versus no accessories
We used a train set of 20000 images (10000 no accessories, 5000 glasses, 5000 hats) and an independent test set of 4000 images (2000 no accessories, 1000 glasses, 1000 hats), and ran the experiment 5 times to account for initialization uncertainty. Images are of size 64 × 64; pixels were normalized between 0 and 1.
For this experiment, we use a standard encoder architecture composed of 5 convolutions (channels 3, 32, 32, 64, 128, 256), kernel size 4, stride 2, and padding (1, 1, 1, 1, 1). Then, for each predicted mean and standard deviation (common and salient), we used two linear layers going from 256 to hidden size 32 to (common and salient) latent space size 16. The decoder was set in a symmetrical manner. We used the same architecture across all the concurrent works we evaluated. We used a common and salient latent space dimension of 16 each. The learning rate was set to 0.001 with an Adam optimizer. Oddly, we found that re-instantiating it at each epoch led to better results (for concurrent works also); we think that this is because it forgets momentum internal states between epochs. The models were trained during 250 epochs. To note, MM-cVAE used latent spaces of size 16 (salient space) and 6 (common space) and a different architecture, but we noticed that it led to artifacts in the reconstruction (see original contribution). Also, we did not succeed in reproducing their performances with their code, their model, and their latent spaces, even with the same experimental setup. We therefore used our model setting, which led to better performances across each method, with batch size equal to 512. We used β_c=0.5 and β_s=0.5, κ=2, γ=1e-10, σ_p=0.025. For MM-cVAE we used the same learning rate, β_c=0.5 and β_s=0.5, a background salient regularization weight of 100, and a common regularization weight of 1000.
§.§ Pneumonia
Train set images were graded by 2 expert radiologists and the independent test set was graded by a third expert; the experiment was run 5 times to account for initialization uncertainty. Images are of size 64 × 64; pixels were normalized between 0 and 1.
For this experiment, we use a standard encoder architecture composed of 4 convolutions (channels 3, 32, 32, 32, 256), kernel size 4, and padding (1, 1, 1, 0). Then, for each mean and standard deviations predicted (common and salient) we used two linear layers going from 256 to hidden size 256 to (common and salient) latent space size 128. The decoder was set in a symmetrical manner. We used the same architecture across all the concurrent works we evaluated. We used a common and latent space dimension of 128 each. The learning rate was set to 0.001 with an Adam optimizer. Oddly we found that re-instantiating it at each epoch led to better results (for concurrent works also), we think that it is because it forgets momentum internal states between the epochs. The models were trained during 100 epochs with batch size equal to 512. We used β_c=0.5 and β_s=0.1, κ=2, γ=5e-10, σ_p=0.05. For MM-cVAE, we used the same learning rate, β_c=0.5 and β_s=0.1, the background salient regularization weight 100, common regularization weight of 1000.
§.§ Neuro-psychiatric experiments
Images are of size 128 × 128 × 128 with voxels normalized on a Gaussian distribution per image. Experiments were run 3 times with a different train/val/test split to account for initialization and data uncertainty.
For this experiment, we use a standard encoder architecture composed of 3 3D-convolutions (channels 1, 32, 64, 128), kernel size 3, stride 2, and padding 1 followed by batch normalization layers. Then, for each mean and standard deviations predicted (common and salient), we used two linear layers going from 65536 to hidden size 256 to (common and salient) latent space size 128. The decoder was set symmetrically, except that it has four transposed convolutions (channels 128, 64, 32, 16, 1), kernel size 3, stride 2, and padding 1 followed by batch normalization layers. We used the same architecture across all the concurrent works we evaluated. We used a common and latent space dimension of 128 each. The models were trained during 51 epochs with a batch size equal to 32 with an Adam optimizer.
For the Schizophrenia experiment, for Sep VAE, we used a learning rate of 0.00005, β_c=1 and β_s=0.1, κ=10, γ=1e-8, α=1/0.01. For MM-cVAE we used the same learning rate, β_c=1 and β_s=0.1, the background salient regularization weight 100, common regularization weight of 1000.
For the Autism disorder experiment, we used a learning rate of 0.00002, β_c=1 and β_s=0.1, κ=10, γ=1e-8, σ_p=0.01. For MM-cVAE we used the same learning rate, β_c=1 and β_s=0.1, the background salient regularization weight 100, common regularization weight of 1000.
|
http://arxiv.org/abs/2307.04563v1 | 20230710135209 | Automatically detecting activities of daily living from in-home sensors as indicators of routine behaviour in an older population | [
"Claire M. Timon",
"Pamela Hussey",
"Hyowon Lee",
"Catriona Murphy",
"Harsh Vardan Rai",
"and Alan F. Smeaton"
] | cs.HC | [
"cs.HC",
"cs.LG"
] |
Timon et al.
Digital Health
1Centre for eIntegrated Care (CeIC), School of Nursing, Psychotherapy and Community Health, Dublin City University, Dublin 9, Ireland
2Insight Centre for Data Analytics, Dublin City University, Dublin 9, Ireland
Alan F. Smeaton,
Insight Centre for Data Analytics, Dublin City University, Glasnevin, Dublin 9, Ireland.
[email protected]
Objective
The NEX project has developed an integrated Internet of Things (IoT) system coupled with data analytics to offer unobtrusive health and wellness monitoring supporting older adults living independently at home. Monitoring currently involves visualising a set of automatically detected activities of daily living (ADLs) for each participant. The detection of ADLs is achieved in a way that allows additional participants to be incorporated and their ADLs detected without re-training the system.
Methods
Following an extensive User Needs and Requirements study involving 426 participants, a pilot trial and a friendly trial of the deployment, an Action Research Cycle (ARC) trial was completed. This involved 23 participants over a 10-week period each with c.20 IoT sensors in their homes. During the ARC trial, participants each took part in two data-informed briefings which presented visualisations of their own in-home activities. The briefings also gathered training data on the accuracy of detected activities. Association rule mining was then used on the combination of data from sensors and participant feedback to improve the automatic detection of ADLs.
Results
Association rule mining was used to detect a range of ADLs for each participant independently of others and was then used to detect ADLs across participants using a single set of rules for each ADL. This allows additional participants to be added without the necessity of them providing training data.
Conclusions
Additional participants can be added to the NEX system without the necessity to re-train the system for automatic detection of the set of their activities of daily living.
Automatically detecting activities of daily living from in-home sensors as indicators of routine behaviour in an older population
Claire M. Timon1, Pamela Hussey1, Hyowon Lee2, Catriona Murphy1, Harsh Vardan Rai2 and Alan F. Smeaton2
August 12, 2023
§ INTRODUCTION
The use of ambient IoT sensors to detect activities in the homes of older or more vulnerable people has grown in recent years <cit.>.
In its basic form, the use case for this has been to record and visualise the raw data from actual sensor triggers and activations and to present aggregated views of this data spanning days, weeks or even months. This allows a clinician, a caregiver or a family member to observe whether certain sensors have been triggered or not.
In turn, this allows an observer to use the sensor activations to deduce whether or not higher-level activities to do with eating, cleaning or social interaction with others have taken place. For example, if IoT sensors on the kettle and on the doors to the cupboards where cups, tea and sugar are stored are all activated within a short time frame during the morning, then the observer could infer that a mid-morning tea or coffee was made.
Visualising raw data from sensors can allow patterns of in-home behaviour to be observed but this is far more challenging because typically there are a large number of sensor activations that are not connected with the higher level activities which we may wish to observe as well as the general visual “noise” from visualising so much data. For example, just because a sensor on the entrance door to a home has been activated does not mean the occupant has left or arrived, the activation could have been caused by a caller to the home, or by a delivery. It is only by looking at combinations of sensor activations in occasions of ADL activities that the actual behaviour can be accurately determined. So if presence sensors in more than one part of the home are simultaneously activated after the entrance door sensor has been activated that implies there is a caller to the home.
While the approaches to gathering and visualising raw sensor activations are useful, their limitation is that they place the burden on the clinician or observer of grouping combinations of sensor activations and interpreting them as higher-level activities which correspond to the things that people do in their everyday lives.
This can be seen in Figure <ref> from our deployed system showing one week of raw sensor data from a participant's home. A total of 16 sensors are deployed including motion sensors, 6-in-1 environmental sensors, smartplugs and contact sensors on doors and presses. While scanning this visualisation can reveal daily daytime and evening patterns of activities, particularly in the kitchen and other rooms, it is difficult to get an overall view and especially to extend an overall view of activities into multiple weeks.
Activities of Daily Living (ADLs) are a set of known, pre-defined and agreed daily physical or movement activities which most people will carry out and which correspond to the skills required to manage our basic physical needs <cit.>. Proposals for what makes up a definitive set of ADLs have been around for many years <cit.> and some have been revised since those first proposals and specialised for areas
including activities for people with dementia and activities for stroke patients <cit.>.
Even with such subject specialisms, the set of ADLs commonly used today are fairly stable <cit.>.
ADLs are typically used to provide a summative assessment of whether a person is able to reach a certain level of movement and to competently complete basic tasks to self-manage their lives, and this would typically be used in assessments of older citizens <cit.>.
ADLs are essential and routine tasks that most healthy individuals can perform without assistance <cit.>. The inability to accomplish essential activities of daily living may lead to unsafe conditions, poor quality of life and may be indicative of a physical or cognitive disability in older adults. Eligibility for home care is frequently associated with deficits in ADL ability <cit.>. Assessment of ADLs through self-reported, observed or objective data provides evidence to individuals and caregivers on existing baselines and potential deficits in self-care ability and supports potential interventions which may be required for continued independence.
The state of the art in the field of recognition of activities of daily living is already well developed as shown by systematic reviews published within the last decade including <cit.>. These works describe a field which has received much attention because it is an important topic and it has a very practical and useful nature.
In this paper we present a technique to automatically detect a subset of common ADLs from raw sensor data in the homes of older citizens living alone and to “tag” their routine behaviour. The sensors used in our study of ADL generation are not wearable sensors but are in-situ sensors in the home, though participants did wear a smartwatch which was not used in this study. The set of ADLs is chosen as indicators of routine everyday behaviour. The ability to infer and visualise higher level activities as well as viewing the raw sensor data means that caregivers and family, as well as participants themselves, can assess behaviour and behaviour changes over time in a more natural and intuitive way.
The technique we use for inferring ADLs uses association rule mining and relies on an initial set of manual annotations from participants but once this is in place we can incorporate additional participants without the necessity for further manual annotation.
While our approach to ADL detection is data-driven, other approaches to ADL detection have been taken including a knowledge-driven approach in <cit.> which uses domain knowledge, structured ontologies and semantic reasoning to disambiguate potential conflicts. The focus of the work in <cit.> is on real-time detection of ADLs as they happen, in an incremental way hence the use of semantic reasoning and ontologies to disambiguate. In the work in this paper the detection of ADLs happens retrospectively, at the end of each day because our use case does not require real-time detection.
The work which is possibly closest to what we report here is a series of works by researchers from INRIA in France <cit.>. Their work involved several older healthy participants, living normally in their homes and targeting a range of daily activities to detect while using sensor data to assist in the detection. That work
culminated in a method to detect 6 generic activity types including meal preparation, leaving the home, and dressing/waking up which overlap with the ADLs we use in this paper, and was tested on 5 adults over a short period of 5 days <cit.>.
In that work the target was activity verification where the participants' declarations of their own daily activities were refined with sensor logs and visualised for them for confirmation. The work we report in this paper targets detection of similar activities of daily living but we take a more data-driven approach, are less reliant on participants' self-verification of their activities and our experiments are larger with more participants and over a longer period of data logging.
§ METHODS
The overall aim of the Action Research Cycle (ARC) trial was to investigate the technical performance and participant evaluation of a refined version of the NEX system. Ethical approval to conduct the ARC trial was obtained from the Dublin City University Research Ethics Committee (DCUREC202221) on 25/1/2022. The NEX ARC Trial was advertised through various networks including the Age Friendly University, Dublin City University and NEX study social media platforms. Eligibility criteria to participate in the trial included: demonstrated capacity to provide written consent as determined by a cognitive assessment <cit.>, willingness to provide written informed consent to participate, being aged 60 years or over, with or without one or more stable chronic condition/s, being fully vaccinated against COVID-19, and having an active Wi-Fi connection at home. Older adult participants were enrolled to the study for a 10-week period if they met the eligibility criteria.
Between January 2022 and July 2022, twenty-six healthy older adults (aged 60 years and over) who were living independently at home in the community participated in the trial. The gender profile was predominantly female (81% n=21) with a total population mean age of 73.2 years. All participants resided in Dublin, Ireland (100% n=26) and the majority lived in urban locations (96% n=25). This was a well-educated sample as 65% (n=17) received third level education. The majority of participants within this sample present as independent and high functioning as only 8% (n=2) reported difficulties in completing activities of daily living (ADLS) such as dressing etc. and only 4% (n=1) reported difficulties in completing more complex tasks defined as instrumental activities of daily living (IADLS) such as shopping for groceries etc. Three participants dropped out and one participant was no longer able to stay involved with the trial as her Wi-Fi connection was deemed too weak to support the NEX system on inspection by the technical engineer during a site visit, resulting in a final sample of n=22.
The research team devised a study design which greatly minimised face-to-face contact with participants in an effort to minimise the risk of COVID-19 spread. This meant that the majority of study visits were completed over Zoom. After enrolment to the trial, participants met with a researcher on Zoom to complete a demographics questionnaire, a questionnaire about technology use, and a compilation of health and well-being assessments. Additionally during these research calls, the researcher completed a home configuration assessment in collaboration with participants. The purpose of this home configuration was to inform the research team about the participant’s home layout and their routine so that decisions about the appropriate placement of IoT sensors and smart plugs could be made. The assessment consisted of a number of questions e.g. the type of home where the participant lived, number of rooms, number of external doors, doors used most often, the layout of the participants' kitchen, which cabinets were used to store food, what appliances were used most frequently, etc.
During a second visit, a researcher and technical engineer visited the participant in their home environment to facilitate the installation of the NEX system technology. The researcher, technician and participant complied with a very strict COVID-19 study protocol which was developed by the research team and consisted of antigen testing prior to, and mask wearing during, home visits. The researcher and technician used home configuration assessment with the participant in Visit 1 to determine the most appropriate placement of preconfigured technology. The NEX system design consisted of a range of IoT technologies, including a smartwatch (for measurement of sleep and step count), voice activated assistant (entertainment and reminder functionality), contact sensors (detecting activity around the home and opening and closing of doors and cupboards), smart plugs (measuring energy use of appliances), motion sensors (detecting movement, temperature, humidity, and light in the home), hub (a central connection point for sensor devices), tablet (display NEX system data to participants), and a cloud hosted secure device management platform.
The technologies were deployed in combination to facilitate the detection of some of the key ADLs from participants’ in-home sensor and smart plug use data over the trial period. Face-to-face training on the technology was provided to participants at the time of installation, and a training manual and a series of training videos were also provided. Throughout the remainder of the ARC trial the researchers met with 19 of the 23 participants individually on two subsequent occasions over Zoom and met with the other 4 once, to present them with a snapshot of raw data that was collected via the NEX technologies in the previous 24 hours. In preparation for these, sensor data for each participant was pre-processed to generate candidate occurrences of ADLs. These were presented to participants for validation and the briefings also included gathering recollections of in-home activities in the day or days immediately preceding the briefings, e.g., confirming what time they had breakfast, etc. These provided training data for subsequent ADL detection.
At the end of the ARC trial the technical engineer visited the participant in their home and removed all of the technology. During the final research visit, the researcher interviewed participants about their experience of the trial and the NEX technology and completed an assessment of the system acceptability and usability (adapted version of the Technology Acceptance Model <cit.> and System Usability Scale <cit.>. The researcher also repeated the health and wellbeing measures administered at the start of the trial to investigate whether having NEX installed in participants’ homes for the duration of the trial affected their wellbeing and other aspects of life.
While there are many individual ADLs we could focus on, we balanced the value of different ADLs given the characteristics and demographics of the ARC trial participants against the feasibility of detecting ADLs given the sensors which were deployed in their homes. After much deliberation, and taking the requirements of the clinical partners into consideration, we focused on 4 ADLs and grouped each with a set of in-home sensors which could be used to detect it automatically. These ADLs are presented in Table <ref>. Increasing the number of ADLs would not affect the validity of our approach since each additional ADL would be grouped with a set of sensors needed to detect it and each additional ADL would have its own set of rules for detection. Table <ref> shows that the sets of in-home sensors used for each ADL in this work do not overlap but even if they did, that would not affect the performance of ADL detection.
To turn the training data for ADL occurrence into automatic detection of ADLs we examined different machine learning techniques that could be used to build classifiers to recognise ADLs.
Within the field of machine learning, deep learning approaches are regarded as best in terms of accuracy but their downside is that they need a large amount of training data in order to be reliable <cit.>. In addition, once the models have been created they cannot offer any explanation for recommendations or outputs that they generate <cit.>. Our application has limited amounts of training data because there are only so many times we can ask participants to indicate when they had eating, sleeping, bathing or other ADL activities before user fatigue sets in and the quality of the annotations deteriorates. Our participants and our clinical partners are also wary of black box machine learning precisely because such models have no explanation capabilities.
Association rule mining (ARM) is a machine learning technique which automatically develops conditional rules based on input data such as sensor data readings and annotated training data <cit.>. It is a technique which has been around for many years and used successfully in a wide range of applications <cit.>.
As the name implies, association rules are a series of if/then statements that aid the discovery of relationships between seemingly unrelated data collections. ARM seeks to identify recurring patterns, correlations, or relationships in datasets. A rule generated by the ARM process has two parts, an antecedent and a consequent. An item found in a data collection is called an antecedent, and an item found in combination with an antecedent is called a consequent. For instance, consider the following:
“A participant is 90% more likely to watch television when he/she is having breakfast."
In this case, breakfast is the antecedent and watching TV is the consequent in the association rule above.
The process of developing sets of association rules involves carefully reviewing data and searching for recurring if/then patterns. The most significant associations are then determined according to the following two parameters:
* Support which describes how frequently the data collection contains instances of the if/then relationship;
* Confidence which is the number of times these associations have been verified to be accurate in the data collection.
When processing large datasets using association rule mining, where every conceivable combination of data items must be considered, the Apriori algorithm <cit.> is attractive to use as it scans the data collection only once as it derives a set of association rules.
In practice the requirement for having to have training data for ADL occurrence is not scalable to larger sets of participants so our aim in generating ADLs in this work is to use the annotations from briefings with ARC participants and apply them unseen to new participants. This consideration also influenced our choice of using association rule mining for ADL detection in the ARC trial.
Processing with the Apriori algorithm for association rule mining required setting minimum values for the support and confidence variables. This indicates that we are only interested in discovering rules for item combinations that co-occur sufficiently often and whose associations hold sufficiently often. In this work these values have been set as min_support=0.15 and min_confidence=0.5.
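For illustration, the sketch below mines such rules with the stated thresholds using the mlxtend implementation of Apriori; the library choice, sensor names and toy data are our own assumptions, not necessarily the toolchain used in the trial. Each row represents one time window, each column flags whether a given sensor fired in that window, and one column holds the ADL annotation from the briefings.

import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy one-hot table: one row per time window, one column per sensor plus the annotated ADL
windows = pd.DataFrame({
    "kettle_plug":          [1, 1, 0, 1, 0, 1],
    "cup_cupboard":         [1, 1, 0, 0, 0, 1],
    "kitchen_motion":       [1, 1, 1, 1, 0, 1],
    "eating_drinking_adl":  [1, 1, 0, 0, 0, 1],
}).astype(bool)

frequent_itemsets = apriori(windows, min_support=0.15, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.5)

# Keep only rules that predict the annotated ADL from sensor activations
adl_rules = rules[rules["consequents"].apply(lambda c: c == frozenset({"eating_drinking_adl"}))]
print(adl_rules[["antecedents", "consequents", "support", "confidence"]])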
Detecting relatively short-duration activities of daily living, which require a small number of activations of a dependent set of sensors but not in any particular sequence, presented challenges in the temporal domain.
To address this we used sliding windows to aggregate sensor activations over a set time period of various durations, effectively grouping the sensor data into an order-independent set and thus smoothing out variations in the ordering.
It was crucial to choose a window size that was both small enough to detect individual activities and large enough to reduce noise associated with smaller window sizes.
Analysis of the data-informed briefings with participants provided insights to establish a baseline for the size of the sliding windows for various ADLs and combined with experiments reported later, the window sizes chosen were as shown below:
* For ‘Dressing’ and ‘Leaving House’ the window sizes were 30 minutes;
* For ‘Eating/Drinking’ and ‘Bathing’, the window sizes were 60 minutes.
The shift or stride for ADL detection was set to 5 minutes. That means that the association rules test for the presence of an ADL in a given time window (30 or 60 minutes) and if not present then the window would shift forward by 5 minutes and would re-test.
Our choice of 30 or 60 minutes for window sizes is in line with the work in <cit.> where those authors use window sizes varying from 30 to 120 minutes for the same ADLs as we detect here, though their windows begin and end at fixed times and theirs do not slide and overlap as ours do. Our method accommodates ADLs taking place close to each other in time because each ADL detection runs independently of others. Thus our approach will detect a participant dressing directly after taking a bath, for example. This can be seen later in Figure <ref> where ADL co-occurrences are shown to overlap for a participant.
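To illustrate how a mined rule is applied with these windows, the sketch below slides a fixed-size window forward in 5 minute strides and tests an order-independent rule against the set of sensors activated within the window; the example rule and the sensor names are simplified stand-ins for the mined rule conditions, not the exact rules used in the trial.

```python
# Sketch of sliding-window ADL detection (window and stride values follow the
# text; the example rule and sensor names are simplified illustrations).
import pandas as pd

def detect_adl(events, rule, window_minutes, stride_minutes=5):
    """events: DataFrame with columns 'time' (datetime64) and 'sensor' (str),
    one row per sensor activation; rule: function mapping the set of sensors
    active within a window to True/False."""
    window = pd.Timedelta(minutes=window_minutes)
    stride = pd.Timedelta(minutes=stride_minutes)
    detections = []
    t = events["time"].min()
    end = events["time"].max()
    while t <= end:
        in_window = events[(events["time"] >= t) & (events["time"] < t + window)]
        active = set(in_window["sensor"])        # order-independent aggregation
        if rule(active):
            detections.append((t, t + window))
        t += stride
    return detections

# e.g. a simplified bathing-style rule: bathroom presence plus a humidity rise
bathing_rule = lambda active: {"bathroom_presence", "humidity_rise"} <= active
# bathing_windows = detect_adl(sensor_events, bathing_rule, window_minutes=60)
```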
With the window sizes for ADLs determined and using the training data from participant briefings, association rules for ADL detection from sensor data were generated, initially for each ADL for each participant. To illustrate, the conditions for some ADL detection rules for participant 11 are shown in Figure <ref>. These show that for an Eating/Drinking event the use of any of the kitchen appliances or opening of the doors to the food staples, combined with presence detection, is the trigger. For the Bathing ADL, detecting presence and an increase in humidity within the time window is the trigger while for the Dressing ADL, opening the wardrobe and the underwear drawer is the trigger, for this participant.
If these sensor activation conditions are satisfied within a 60-minute or within a 30-minute time window depending on the ADL, the activity will be labelled as that ADL.
For creating groundtruth training data, the clinical partners met with each of the participants on at least 1 occasion after the sensors had been installed in their homes, for a briefing or a data-informed recall on how their deployment was going. During these meetings, held over Zoom because of the pandemic, the clinicians gathered data on occurrences of the 5 ADLs that had happened in the previous days, noting the ADL and the timestamp. This recall was prompted by the clinician sharing a visualisation of the raw sensor data with the participant on the SafeTRX platform. So seeing sensors for, say, the kitchen being activated in mid-afternoon would prompt the participant to remember that s/he had made tea and had a biscuit during that afternoon, which would be recorded as an eating or drinking ADL.
The timings of the clinical partners’ data-informed briefings, and their place in the overall data logging for the ARC participants, are shown in Figure <ref>, which shows sensor data logging for 159 days from 23 participants. Here we see that all participants except participants 5, 8, 13, and 23 had two briefings, that the briefings were rarely on consecutive days, and that most were at least one, and closer to two, weeks apart.
§ RESULTS
We developed a number of versions of using association rule mining to build sets of rules to detect ADLs. This was so we could (1) incrementally determine the best time window sizes to use for different ADLs and (2) include more of participants' validations of candidate ADLs and their suggestions of additional ones from their data-informed briefings. Different versions of the rule mining generated different sets of ADLs for the same participants. It was necessary for the rule generation and the subsequent ADL detection to take place immediately prior to participants having one of their briefings so that some of the candidate ADLs could be presented to them during their interviews.
We started our use of association rule mining using participants' feedback from their first briefing with no candidate ADLs offered to them as we had no training data, and treating each participant independently of others. For their second briefing we offered candidate ADLs generated using training data from their first briefing and these were validated and further training data was gathered during the second briefing. As mentioned earlier, we experimented with varying the sizes of time windows for different ADLs choosing 30 and 60 minutes depending on the ADL and
generating ADLs for each participant based on their own set of rules, independent of others. Finally we used association rule mining to generate a single set of rules for ADL detection which we applied across all participants.
Note that not all participants were used for ADL detection at all stages of the investigation, depending on the timing of their briefings and the availability of their own sensor data, as shown in Figure <ref>.
As mentioned above, different deployments of association rule mining generated different ADLs, raising the question of whether a new set of activities is better than a previous one. Evaluating the effectiveness of a set of rules can only be done by validating the ADLs it generates against manually annotated training data, to which we have no further access, and we cannot go back to participants to get this. This is a consequence of our decision to gather only a small amount of annotation data from participants during their data-informed briefings, which we could use as ground truth for training ADL detection and/or for evaluation of different ADL detection rule sets. Thus our evaluation is done in terms of how the distribution of ADLs generated by a version appears overall.
An early version of our rule mining is where training data has been generated from 2 data-informed briefings and where each participant’s ADL generation is completed independently of others, but with no adjustment of window sizes for resolution of clashing ADLs for the same participant. Figure <ref> shows the raw number of ADLs of each type, where eating ADLs dominate, and ADLs have not been generated for all participants at that point because not all ARC installations had been completed. The different numbers of (absolute) ADLs for different participants reflect the fact that participant data logging had been running for different durations for different participants. Figure <ref> shows the proportion of ADL types for participants for this ARM version, and that is a more useful indicator. From this we can see that the leaving house and dressing ADLs were not detected for some participants, and the bathing ADL was not detected for any because the humidity sensor in the 6-in-1 bathroom sensor was sampling once every 10 minutes, which was insufficient but subsequently corrected.
After several iterations of association rule mining development, our final implementation generates ADL rules from the training data from all participants, uses the optimal window settings for different ADLs and resolves clashes and overlaps between ADLs.
Figures <ref> and <ref> show ADLs generated for all ARC participants. In Figure <ref> the numbers of ADLs per participant are normalised by the total number of logging days of ARC01 (171 days), taken as the longest data capture duration of all participants in this study.
The normalised view helps us draw comparisons across participants of their relative amounts of ADLs, as if their ADL counts covered the same logging period. In Figure <ref> we show the relative proportions of ADLs per participant. The figures show some outliers and errors, such as no “eating or drinking" ADL and a disproportionately high number of “leaving house" ADLs for ARC18, and some participants with no dressing ADL.
When results from all participants were completed we analysed each participant's ADLs individually with the clinician who carried out their data-informed briefing.
For each of the outliers and errors in ADL detection we were able to determine an explanation, such as having no training data to work with for a given ADL from the online participant briefings (for example, a participant not having left their home recently), or having no appropriate sensors deployed for a given ADL for a given participant.
The numbers of detected ADLs across participants in Figure <ref> do show a lot of variety. ARC24 shows the largest number because of its large number of eating events, similar to ARC26, and this is explained as follows.
Figure <ref> shows the ADLs generated for participant ARC24 over the same time period as the raw sensor data shown earlier in Figure <ref>. This shows a regular bathing and dressing activity and a leaving of the house on 5 of the 7 days. March 20 shows the participant not leaving the house though the front door was opened, and March 15, 16, 17 and 18 show a lot of front door activity not identified as leaving the house, so the participant must have had callers or deliveries. The eating activity is well represented throughout each day because, as shown in Figure <ref>, this participant does seem to spend large parts of the day in and out of the kitchen, opening and closing the fridge, food presses and drawers. Some of the events recognised as the eating/drinking ADL may actually be food preparation or returning from grocery shopping rather than food or drink consumption.
Other observations from Figures <ref> and <ref> show a high number of leaving the house ADLs for some participants, especially ARC18. This can be traced back to the fact that ARC18 had more callers to the front door than others.
As part of the analysis of each participant's ADLs with the clinician who carried out their data-informed briefing as mentioned above, we analysed which of the in-home sensors appeared most often in the rules and which were used most in the triggering of those rules. From this analysis we identified 11 core in-home sensors which should be included for any new participants for whom automatic detection of these ADLs is desired. This set of 11 is driven by their common use across all our ARC participants, and their use in the rule mining for ADL recognition and assumes that the same ADLs are the target for detection. The 11 core sensors are listed in Table <ref>.
§ CONCLUSIONS
This paper describes the data processing carried out on in-home sensor data gathered from 23 participants over periods varying from 6 weeks to 6 months. The sensor data was processed into a set of activities of daily living (ADLs) which were chosen as typical indicators of regular, routine behaviour by the participants. A characteristic of our use case of turning sensor data into ADLs is that there is a limited amount of training data available. Our training data was gathered directly from participants during two online data-informed briefings and corresponds to participants indicating, or validating, an instance of an ADL occurrence as being true and valid. We then used this as input to association rule mining to determine a set of rules for ADL detection.
Our initial sets of ADLs were based on a different set of association rules for each participant and then we fused the training data to generate a set of rules for detecting ADLs across all participants. This means that we can now add additional participants without requiring additional training data by re-using the training data from the pool of 23 ARC trial participants. In this way our ADL detection is scalable and can be made available to others.
One of the unresolved questions about the work in this paper is the end-goal and what to do with detected ADLs. In a clinical setting even the visualisation of ADLs over time has limited capacity to support observations of subtle behaviour changes and degradations. In our future work we will apply the automatic detection of periodicity intensity, namely how strongly or weakly the activities of a participant fit into the regular 24-hour circadian rhythm or the weekly cycle of behaviours, to detected ADLs. It is known that strong rhythmicity in our lives is an indicator of wellness and that degradations in our regular behaviour can be detected automatically as a weakening of the strengths of our circadian and other regular rhythms. We have already done this work using raw sensor data as input to periodicity detection <cit.> but believe that using higher level ADLs will give even better detection of behaviour changes.
There are also limitations, including the limited number of ADLs (n=4), especially since there is work elsewhere reporting the detection and use of larger numbers of ADLs. However our aim was to demonstrate that our technique for ADL detection with limited training data and a limited number of sensors per participant works and can be applied to new participants without the need for additional training data and with acceptable accuracy. This has been demonstrated for 4 ADLs whose detections work independently, and in future work we will examine the accuracy of ADL detection when using only the 11 core sensors we identified for future deployments. We also acknowledge that the approach could be improved with further inputs from the caregivers or directly from the participants in their homes, as a form of human-in-the-loop active (machine) learning <cit.> where the rules would evolve and improve as more annotations were provided.
In summary, the work reported here has been successful in applying analytics techniques to raw sensor data from participant homes to inform clinical partners about the long-term behaviour and behaviour changes in the routine daily in-home lifestyle and activities of participants. Insights gained from visualising activities at an ADL level rather than at the level of raw sensor data, is more insightful and ultimately beneficial for the participant and the clinician.
The Authors declare that there is no conflict of interest.
This work was supported by the Disruptive Technologies Innovation Fund administered by Enterprise Ireland, project grant number DT-2018-0258 and by Science Foundation Ireland under grant number SFI/12/RC/2289_P2, co-funded by the European Regional Development Fund.
Ethical approval:
The research ethics committee of Dublin City University approved this study (REC number: DCUREC202221) on 25/1/2022.
Guarantor: AFS
Author Contributions:
AFS and CMT researched the literature and AFS, CMT, PH, HL and CM conceived the study. CMT, PH and CM obtained ethical approval and performed subject recruitment and HL and HRV managed the data capture and analysis including visualisations.
HRV implemented the ARM coding.
AFS and CMT wrote the first draft of the manuscript. All authors reviewed and edited the manuscript and approved the final version of the manuscript.
Acknowledgements Not applicable.
The raw sensor data from 23 homes used in this work is publicly available on the Figshare repository at <cit.>.
10
urlstyle
10.1007/s10916-019-1365-7
Baig MM, Afifi S, Gholamhosseini H et al.
A systematic review of wearable sensors and IoT-based monitoring
applications for older adults — a focus on ageing population and
independent living.
J Med Syst 2019; 43(8): 1–11.
10.1007/s10916-019-1365-7.
ADLs
Edemekong P, Bomgaars D, Sukumaran S et al.
Activities of daily living, 2022.
[Updated 2022 Jul 3]. In: StatPearls [Internet]. Treasure Island
(FL): StatPearls Publishing.
lawton1969assessment
Lawton MP and Brody EM.
Assessment of older people: self-maintaining and instrumental
activities of daily living.
The Gerontologist 1969; 9(3_Part_1): 179–186.
nouri1987extended
Nouri F and Lincoln N.
An extended activities of daily living scale for stroke patients.
Clinical Rehabilitation 1987; 1(4): 301–305.
galasko1997inventory
Galasko D, Bennett D, Sano M et al.
An inventory to assess activities of daily living for clinical trials
in Alzheimer's disease.
Alzheimer Disease and Associated Disorders 1997; 11, Suppl 2:
S33–9.
hindmarch1998bayer
Hindmarch I, Lehfeld H, de Jongh P et al.
The Bayer activities of daily living scale (B-ADL).
Dementia and Geriatric Cognitive Disorders 1998; 9 Suppl 2:
20–26.
doi:10.1177/20552076221084468
Bosco A, McGarrigle L, Skelton DA et al.
Make Movement Your Mission: Evaluation of an online digital health
initiative to increase physical activity in older people during the COVID-19
pandemic.
Digital Health 2022; 8.
10.1177/20552076221084468.
PMID: 35295764.
kemper2008meeting
Kemper P, Weaver F, Short PF et al.
Meeting the need for personal care among the elderly: does medicaid
home care spending matter?
Health Services Research 2008; 43(1p2): 344–362.
rothgang2003dependency
Rothgang H and Comas-Herrera A.
Dependency rates and health expectancy.
In European study of long-term care expenditure, Report to the
European Commission, Employment and Social Affairs DG, chapter 14. London:
London School of Economics, 2003.
rashidi2012survey
Rashidi P and Mihailidis A.
A survey on ambient-assisted living tools for older
adults.
IEEE journal of biomedical and health informatics 2012; 17(3):
579–590.
reeder2013framing
Reeder B, Meyer E, Lazar A et al.
Framing the evidence for health smart homes and
home-based consumer health technologies as a public health intervention for
independent aging: A systematic review.
International journal of medical informatics 2013; 82(7):
565–579.
queiros2015usability
Queirós A, Silva A, Alvarelhão J et al.
Usability, accessibility and ambient-assisted living:
a systematic literature review.
Universal Access in the Information Society 2015; 14: 57–66.
blackman2016ambient
Blackman S, Matlo C, Bobrovitskiy C et al.
Ambient assisted living technologies for aging well:
a scoping review.
Journal of Intelligent Systems 2016; 25(1): 55–69.
cicirelli2021ambient
Cicirelli G, Marani R, Petitti A et al.
Ambient assisted living: a review of technologies,
methodologies and future perspectives for healthy aging of population.
Sensors 2021; 21(10): 3549.
chen2011knowledge
Chen L, Nugent CD and Wang H.
A knowledge-driven approach to activity recognition
in smart homes.
IEEE Transactions on Knowledge and Data Engineering 2011;
24(6): 961–974.
caroux2014verification
Caroux L, Consel C, Dupuy L et al.
Verification of daily activities of older adults: a
simple, non-intrusive, low-cost approach.
In Proceedings of the 16th international ACM SIGACCESS
conference on Computers & accessibility. pp. 43–50.
caroux2018towards
Caroux L, Consel C, Dupuy L et al.
Towards context-aware assistive applications for
aging in place via real-life-proof activity detection.
Journal of ambient intelligence and smart environments 2018;
10(6): 445–459.
belloum2021tooled
Belloum R, Riche A, Volanschi N et al.
A tooled method for developing knowledge-based
activity recognizers.
In 2021 IEEE SmartWorld, Ubiquitous Intelligence & Computing,
Advanced & Trusted Computing, Scalable Computing & Communications, Internet
of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/IOP/SCI).
IEEE, pp. 17–24.
borson2000mini
Borson S, Scanlan J, Brush M et al.
The mini-cog: a cognitive ‘vital signs’ measure for dementia
screening in multi-lingual elderly.
International Journal of Geriatric Psychiatry 2000; 15(11):
1021–1027.
davis1989user
Davis FD, Bagozzi RP and Warshaw PR.
User acceptance of computer technology: A comparison of two
theoretical models.
Management Science 1989; 35(8): 982–1003.
brooke1996sus
Brooke J et al.
SUS-a quick and dirty usability scale.
Usability evaluation in industry 1996; 189(194): 4–7.
zhang2021understanding
Zhang C, Bengio S, Hardt M et al.
Understanding deep learning (still) requires rethinking
generalization.
Communications of the ACM 2021; 64(3): 107–115.
gilpin2018explaining
Gilpin LH, Bau D, Yuan BZ et al.
Explaining explanations: An overview of interpretability of machine
learning.
In 2018 IEEE 5th International Conference on Data Science and
Advanced Analytics (DSAA). IEEE, pp. 80–89.
solanki2015survey
Solanki SK and Patel JT.
A survey on association rule mining.
In 2015 Fifth International Conference on Advanced Computing &
Communication Technologies. IEEE, pp. 212–216.
10.5555/3000292.3000305
Liu B, Hsu W and Ma Y.
Integrating classification and association rule mining.
In Proceedings of the Fourth International Conference on
Knowledge Discovery and Data Mining. KDD'98, AAAI Press, p. 80–86.
al2014improved
Al-Maolegi M and Arkok B.
An improved apriori algorithm for association rules.
arXiv preprint arXiv:14033948 2014; .
orla
Keogh O, Lee H, Timon CM et al.
Detecting activities of daily living using ambient sensors and
association rule mining.
PLos Digital Health 2023; Under Review.
smeaton2023
Smeaton AF and Hu F.
Periodicity Intensity Reveals Insights into Time
Series Data: Three Use Cases.
Algorithms 2023; 16(2).
10.3390/a16020119.
<https://www.mdpi.com/1999-4893/16/2/119>.
monarch2021human
Monarch RM.
Human-in-the-Loop Machine Learning: Active learning and
annotation for human-centered AI.
"New York": Manning Publications, 2021.
Timon2022
Timon CM, Hussey P, Lee H et al.
Raw sensor data from in-home sensors in 23 homes of older citizens,
2022.
10.6084/m9.figshare.21415836.v1.
|
http://arxiv.org/abs/2307.04091v1 | 20230709042412 | CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge Distillation for LIDAR Semantic Segmentation | [
"Jun Cen",
"Shiwei Zhang",
"Yixuan Pei",
"Kun Li",
"Hang Zheng",
"Maochun Luo",
"Yingya Zhang",
"Qifeng Chen"
] | cs.CV | [
"cs.CV"
] |
CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge Distillation for LIDAR Semantic Segmentation
^1Authors are with Cheng Kar-Shun Robotics Institute, The Hong Kong University of Science and Technology, Hong Kong SAR, China. {jcenaa}@connect.ust.hk. {cqf}@ust.hk.
^2Authors are with Alibaba Group, China. {zhangjin.zsw, zh334251, luomaochun.lmc, yingya.zyy}@alibaba-inc.com. {lk158400}@cainiao.com.
^3Authors are with the SMILES LAB at the School of Information and Communication Engineering, Xi'an Jiaotong University, Xi'an, China. {peiyixuan}@stu.xjtu.edu.
^*Work done as an intern at Alibaba DAMO Academy.
Jun Cen^1,2*, Shiwei Zhang^2, Yixuan Pei^3, Kun Li^2, Hang Zheng^2, Maochun Luo^2, Yingya Zhang^2, Qifeng Chen^1
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
2D RGB images and 3D LIDAR point clouds provide complementary knowledge for the perception system of autonomous vehicles. Several 2D and 3D fusion methods have been explored for the LIDAR semantic segmentation task, but they suffer from different problems. 2D-to-3D fusion methods require strictly paired data during inference, which may not be available in real-world scenarios, while 3D-to-2D fusion methods cannot explicitly make full use of the 2D information. Therefore, we propose a Bidirectional Fusion Network with Cross-Modality Knowledge Distillation (CMDFusion) in this work. Our method has two contributions. First, our bidirectional fusion scheme explicitly and implicitly enhances the 3D feature via 2D-to-3D fusion and 3D-to-2D fusion, respectively, which surpasses either one of the single fusion schemes. Second, we distillate the 2D knowledge from a 2D network (Camera branch) to a 3D network (2D knowledge branch) so that the 3D network can generate 2D information even for those points not in the FOV (field of view) of the camera. In this way, RGB images are not required during inference anymore since the 2D knowledge branch provides 2D information according to the 3D LIDAR input. We show that our CMDFusion achieves the best performance among all fusion-based methods on SemanticKITTI and nuScenes datasets. The code will be released at https://github.com/Jun-CEN/CMDFusion.
§ INTRODUCTION
3D LIDAR is significant for the perception system of autonomous vehicles, and one of the applicable tasks with LIDAR is semantic segmentation. Great efforts have been made for better LIDAR semantic segmentation performance using a single LIDAR modality <cit.>. Recently, several multi-modality methods have been developed <cit.> to fuse the features of LIDAR and colorful cameras since they provide complementary information. LIDAR provides reliable depth information and is robust to light conditions such as dark nights, while the camera offers a dense colorful appearance and fine-grained textures. In this work, we also aim to study how to effectively leverage these two data modalities for better LIDAR semantic segmentation.
Existing fusion-based methods can be divided into 2D-to-3D fusion methods (PMF <cit.>) and 3D-to-2D fusion methods (2DPASS <cit.>), as shown in Fig. <ref> (a) and (b). PMF injects 2D knowledge into the LIDAR features, so it needs strictly paired data during training and inference. However, the FOVs of the LIDAR and the camera may not totally overlap with each other, so those points out of the FOV of the camera cannot be tested. For example, SemanticKITTI <cit.> only provides two front-view images, and points at the side and back cannot be involved in the PMF framework. 2DPASS noticed this problem and proposed injecting 3D features into 2D features during training to implicitly enhance the 3D features. In this way, 2DPASS does not require images during inference. However, 3D features do not explicitly contain 2D information in such a 3D-to-2D scheme.
To solve the mentioned problems of 2D-to-3D and 3D-to-2D fusion methods, we propose a Bidirectional Fusion Network with
Cross-Modality Knowledge Distillation (CMDFusion), as shown in Fig. <ref> (c). Specifically, on the one hand, we propose a Bidirectional Fusion Block (BFB) to explicitly and implicitly enhance the 3D features through 2D-to-3D and 3D-to-2D injection, which owns the benefits of both single fusion schemes. On the other hand, we propose a Cross-Modality Distillation (CMD) module to let a 3D network (2D knowledge branch) memorize the information of the 2D network (camera branch) during training. During inference, the 2D knowledge branch provides the 2D image information based on the 3D LIDAR point cloud inputs so that we can obtain the 2D knowledge for the whole point cloud, including those points not in the FOV of the camera.
We evaluate our method on two challenging datasets, including SemanticKITTI <cit.> and NuScenes <cit.>. Experiments show that our method achieves the best performance among all fusion-based methods. In summary, our contributions include the following:
* We develop a bidirectional fusion method CMDFusion for the LIDAR semantic segmentation task, which surpasses the single directional 2D-to-3D fusion and 3D-to-2D fusion methods.
* We develop a cross-modality distillation module to generate 2D information for those points that are out of the FOV of the camera.
* We experimentally show that our method achieves the best performance among fusion-based methods on SemanticKITTI and Nuscenes datasets.
§ RELATED WORK
3D LIDAR semantic segmentation has grown very fast based on well-annotated public datasets, such as SemanticKITTI <cit.> and NuScenes <cit.>. Most methods in this area are single-modality, i.e., only use LIDAR point cloud to extract information. Specifically, single-modality methods can be categorized into point-based, projection-based, voxel-based, and multi-view fusion methods.
1) Point-based methods <cit.> adapt PointNet <cit.> and PointNet++ <cit.> to the LIDAR domain. These point-based methods do not generalize very well in the LIDAR point cloud scenarios since their sampling and searching algorithms cannot perfectly handle the sparse outdoor point clouds.
2) Voxel-based methods divide the whole point cloud into voxels <cit.> and apply efficient 3D convolution for semantic segmentation like SparseConv <cit.>. Cylinder3D <cit.> proposed a cylindrical partition and asymmetrical 3D convolutional network which follows the geometry structure of the LIDAR point cloud.
3) Projection-based methods first project 3D LIDAR point cloud into 2D range-view images <cit.> or bird’s-eye-view (BEV) images <cit.> and then apply 2D convolution network for semantic segmentation. However, such a projection inevitably loses some of the 3D geometry information.
4) Multi-view fusion methods combine different views of the LIDAR point cloud as inputs. FusionNet <cit.> and SPVCNN <cit.> fuse voxel and point level information, while RPVNet <cit.> fuses the information of voxel, point, and range views.
Recently, multi-modality fusion has become popular in the autonomous driving area. In the 3D object detection task, BEV fusion <cit.> unifies the LIDAR and image features in the BEV space and achieves the state-of-the-art performance. However, the height information is much more critical in the semantic segmentation task than the object detection task, so the BEV-based method <cit.> has limited performance on the semantic segmentation task. Instead, PMF <cit.> projects the LIDAR point cloud into the image space and then conducts 2D-to-3D fusion for better 3D feature representation. 2DPASS <cit.> finds that the 2D-to-3D fusion method like PMF can only be applied on the points in the overlapping FOVs of the LIDAR and camera, so 2DPASS conducts 3D-to-2D fusion to strengthen the 3D features by supervising the 3D features from the 2D branch.
Compared to PMF and 2DPASS, our bidirectional fusion network enjoys the benefits of both 2D-to-3D and 3D-to-2D fusion schemes. Besides, we propose a cross-modality distillation module so that our network can be applied to the whole LIDAR point cloud, including the points that are out of the FOV of the camera.
§ METHODOLOGY
§.§ Framework Overview
The simplified and specific overall structure of our proposed CMDFusion is shown in Fig. <ref> (c) and Fig. <ref> (a), respectively. Our CMDFusion is composed of three branches, including a camera branch (2D network), a 2D knowledge branch (3D network), and a 3D LIDAR branch (3D network).
§.§.§ Training
During training, the 2D knowledge branch (a 3D network) learns the 2D image information from the camera branch (a 2D network) via Cross-Modality Distillation (CMD). Although the CMD is conducted on those points in the overlapping FOVs of the LIDAR and camera, the 2D knowledge branch can be generalized to the points that are out of the FOV of the camera. In this way, we can obtain the 2D information of the whole point cloud, which is not approachable in PMF <cit.> or 2DPASS <cit.>. Then we fuse the features of the 2D knowledge branch and 3D LIDAR branch through Bidirectional Fusion Block (BFB). On the one hand, 2D-to-3D directional fusion explicitly enhances the 3D feature via 2D information injection. On the other hand, 3D-to-2D directional fusion implicitly improves the robustness of the 3D feature since it is required to have the potential to be well adapted to the 2D space. Therefore, our BFB enjoys the benefits of both PMF and 2DPASS.
§.§.§ Testing
During inference, the camera branch is not needed anymore since its knowledge is already distilled to the 2D knowledge branch. Besides, only 2D-to-3D directional fusion is involved as the final prediction results come from the 3D LIDAR branch. The right-hand side of Fig. <ref> (c) shows the parts that are needed during inference.
§.§ Point-to-pixel Correspondence
Point-to-pixel correspondence is the pre-request of Cross-Modality Distillation (CMD). Given a LIDAR point cloud P = {p_i}_i=1^N ∈ℝ^N× 3, where p_i = (x_i, y_i, z_i) ∈ℝ^3 refers to the XYZ coordinates of a point and N is the number of points in the point cloud, the projected 2D coordinates of the point p_i is calculated as:
[u_i, v_i, 1]^T = 1/z_i× K× T × [x_i, y_i, z_i, 1]^T,
where K ∈ℝ^3× 4 and T ∈ℝ^4× 4 denote the intrinsic and extrinsic matrices of the camera, respectively. Then we have p̂_i =(⌊ v_i⌋ , ⌊ u_i ⌋ ) ∈ℝ^2 as the integer projected 2D coordinates, where ⌊·⌋ is the floor operation. For the SemanticKITTI dataset, K and T are already given. For the NuScenes dataset, the extrinsic matrix T is calculated as:
T = T_C←ego_t_c× T_ego_t_c←G× T_G←ego_t_l× T_ego_t_l←L,
where L, C, and G refer to the LIDAR, camera, and global.
Note that CMD is only applied on the points that are in the overlapping FOVs of LIDAR and camera, as shown in the colorized region in the input of the 2D knowledge branch in Fig. <ref> (a). Formally, suppose the points set in the overlapping FOVs of LIDAR and camera is P^O = {p_i}_i=1^N^O∈ℝ^N^O× 3, where N^O denotes the number of points in the overlapping FOVs of the LIDAR and camera, then for each point p_i in P^O, its corresponding projected coordinates p̂_i =(⌊ v_i⌋ , ⌊ u_i ⌋ ) should meet:
0 ≤⌊ v_i⌋≤ H,    0 ≤⌊ u_i⌋≤ W,
where H and W refer to the height and width of corresponding images. Note that for feature maps under different scales, we first upsample the feature maps to the original scale and then use the corresponding point-to-pixel corresponding.
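For illustration, a NumPy sketch of this projection and FOV test is given below; it normalises by the third component of the projected homogeneous vector and uses strict upper bounds for valid pixel indices, and the variable names are ours rather than those of the actual implementation.

```python
# NumPy sketch of the point-to-pixel projection and FOV mask described above
# (variable names are illustrative; bounds are written as strict upper bounds
# so that the floored coordinates index valid pixels).
import numpy as np

def project_points(points_xyz, K, T, H, W):
    """points_xyz: (N, 3) LIDAR coordinates; K: (3, 4) intrinsics;
    T: (4, 4) extrinsics; H, W: image height and width."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])      # (N, 4) homogeneous coords
    proj = (K @ T @ homo.T).T                            # (N, 3)
    u = np.floor(proj[:, 0] / proj[:, 2]).astype(int)
    v = np.floor(proj[:, 1] / proj[:, 2]).astype(int)
    # mask selecting the overlap set P^O of points inside the camera FOV
    in_fov = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (proj[:, 2] > 0)
    return u, v, in_fov
```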
§.§ Cross-Modality Distillation
Cross-Modality Distillation (CMD) is to distillate the 2D knowledge from the camera branch (a 2D network) to the 2D knowledge branch (a 3D network), so we can generate the 2D information for those points out of the FOV of the camera and do not need the images during inference.
§.§.§ Camera Branch
Unlike PMF <cit.> and 2DPASS <cit.> that train the camera branch with the ground truth projected from the LIDAR point cloud, we use a ResNet101 <cit.> which is pre-trained on the Cityscapes dataset <cit.>. Cityscapes is a popular dataset for 2D image semantic segmentation in the autonomous driving scenario. We adopt this strategy for two reasons. First, if we use the ground truth which is projected from the LIDAR point cloud, the camera branch may learn the overlapping knowledge with the 3D LIDAR branch since they share the same ground truth source. In contrast, the pre-trained camera branch using another dataset could provide additional information on top of the LIDAR point cloud. Second, we could freeze the camera branch during training since it is well-trained, so less back-propagation is needed for the whole structure. In this way, the training process consumes less GPU memory and time.
§.§.§ 2D Knowledge Branch
Following 2DPASS <cit.>, we use SPVCNN <cit.> as the 3D network used in this paper, including the 2D knowledge branch and 3D LIDAR branch. Now let us formulate the process of CMD.
For points in the overlapping FOVs of LIDAR and camera p_i ∈ P^O, we feed them into the 2D knowledge branch f_2D to obtain the features z_2D^s:
z_2D^s={ f_2D^s(p_i) }_i=1^N^O∈ℝ^N^O× d,
where s={1,2,3,4 } and d refer to the feature map scale and the dimension of the features, respectively. Then we obtain the corresponding features z_C^s of P^O from the camera branch through the point-to-pixel projection described in Sec. <ref>. The CMD is realized through this loss ℒ_CMD:
ℒ_CMD = 1/N^O∑‖ z_2D^s - z_C^s ‖_2,
where ‖·‖_2 denotes the L2 norm. In this way, the 2D knowledge branch can mimic the function of the camera branch to provide the 2D information based on the 3D LIDAR point cloud. Although ℒ_CMD is only available for P^O during training, the trained 2D knowledge branch can be generalized to the whole point cloud P during inference.
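A PyTorch sketch of this distillation loss is given below, assuming the camera-branch features have already been gathered at the projected pixel locations for each point; tensor names are illustrative.

```python
# PyTorch sketch of the cross-modality distillation loss over the overlap set
# P^O (camera-branch features are assumed to be pre-gathered per point via the
# point-to-pixel correspondence; names are illustrative).
import torch

def cmd_loss(z_2d, z_cam, in_fov):
    """z_2d: (N, d) features from the 2D knowledge branch (a 3D network);
    z_cam: (N, d) camera-branch features gathered at projected pixels;
    in_fov: (N,) boolean mask selecting the overlap set P^O."""
    diff = z_2d[in_fov] - z_cam[in_fov]
    return diff.norm(dim=1).mean()   # mean L2 distance over the N^O points
```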
§.§ Bidirectional Fusion
Our bidirectional fusion block (BFB) is composed of a 3D-to-2D fusion block and a 2D-to-3D fusion block, as shown in Fig. <ref> (b). 2D-to-3D directional fusion explicitly enhances the 3D features via 2D feature injection, while 3D-to-2D implicitly enhances the 3D features via 2D knowledge branch supervision. Note that the 3D-to-2D fusion block and 2D-to-3D fusion block share the same single directional fusion structure, as shown in Fig. <ref> (c), and the only difference is the input position. Fig. <ref> (c) is the example of the 3D-to-2D single directional fusion block, and we can obtain the 2D-to-3D single directional fusion block by simply changing the positions of two inputs in Fig. <ref> (c). Unlike CMD which can only be applied on the P^O, BFB is applied on the whole point cloud. So z_2D^s ∈ℝ^N× d and z_3D^s∈ℝ^N× d in this section.
§.§.§ 3D-to-2D Fusion
3D-to-2D fusion is illustrated in Fig. <ref> (c). Formally, we first have:
z_3D2D^s = MLP_2(Concat(MLP_1(z_3D^s), z_2D^s)),
where MLP is a multilayer perceptron, and Concat refers to feature concatenation. MLP_1 is used to transfer the 3D feature z_3D^s into the 2D feature space. MLP_2 is responsible for transferring the concatenated feature into the residual space of z_2D^s. Then we have:
z̃_2D^s = z_2D^s ⊕σ(MLP_3(Concat(GAP(z_3D2D^s), z_3D2D^s))) ⊙ z_3D2D^s,
where ⊕ and ⊙ denote the element-wise plus and element-wise multiply, respectively. GAP means global average pooling, and σ means the Sigmoid activation function. GAP is used to integrate the global information, and MLP_3 is used to transfer the feature into the attention value. z̃_2D^s represents the enhanced 2D features of scale s. Then we concatenate z̃_2D^s and the enhanced features of previous scales z_2DF^s-1 to obtain z_2DF^s:
z_2DF^s = Concat(z_2DF^s-1, z̃_2D^s),
where z_2DF^s contains all enhanced 2D features from scale 1 to s. Finally, z_2DF^4 contains the enhanced 2D features of all 4 scales, and we use a linear classifier g_2D to output the logits. The loss of 2D knowledge branch ℒ_2D is formulated as:
ℒ_2D = -1/N∑ ylog(g_2D(z_2DF^4)_y),
where y refers to the ground truth, and g_2D(z_2DF^4)_y denotes the y^th logit of g_2D(z_2DF^4). Note that single directional fusion does not share MLPs across different scales.
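A compact PyTorch sketch of one single-directional fusion block following the equations above is given below (3D-to-2D direction; the 2D-to-3D block is obtained by swapping the two inputs). The layer widths and module name are illustrative assumptions, and per-scale blocks would use separate, unshared instances as noted above.

```python
# PyTorch sketch of a single-directional fusion block (3D-to-2D direction);
# layer widths and the class name are illustrative assumptions.
import torch
import torch.nn as nn

class SingleDirectionalFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mlp1 = nn.Linear(dim, dim)        # transfer input features to the other feature space
        self.mlp2 = nn.Linear(2 * dim, dim)    # fuse the concatenated features (residual term)
        self.mlp3 = nn.Linear(2 * dim, dim)    # produce per-channel attention values

    def forward(self, z_3d, z_2d):
        # z_3d, z_2d: (N, dim) point-wise features at one scale
        fused = self.mlp2(torch.cat([self.mlp1(z_3d), z_2d], dim=-1))     # z_3D2D
        gap = fused.mean(dim=0, keepdim=True).expand_as(fused)            # global average pooling
        attn = torch.sigmoid(self.mlp3(torch.cat([gap, fused], dim=-1)))
        return z_2d + attn * fused                                        # enhanced 2D feature
```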
§.§.§ 2D-to-3D Fusion
2D-to-3D fusion shares the symmetric structure with 2D-to-3D fusion. Formally, we have the following:
z_2D3D^s = MLP_2(Concat(MLP_1(z_2D^s), z_3D^s)),
z̃_3D^s = z_3D^s ⊕σ(MLP_3(Concat(GAP(z_2D3D^s), z_2D3D^s))) ⊙ z_2D3D^s,
z_3DF^s = Concat(z_3DF^s-1, z̃_3D^s).
Similarly, z_3DF^4 is the final enhanced 3D feature, and a linear classifier g_3D is used to output the logits. The loss of 3D knowledge branch ℒ_3D is formulated as:
ℒ_3D = -1/N∑ ylog(g_3D(z_3DF^4)_y).
Note that 2D-to-3D fusion blocks do not share MLPs and classifiers with 3D-to-2D fusion blocks.
§.§ Overall Training and Testing Process
§.§.§ Training
The overall loss ℒ_all for training the model is calculated as:
ℒ_all = ℒ_CMD + ℒ_2D + ℒ_3D.
§.§.§ Testing
We use the output of the classifier in the 3D LIDAR branch as the final prediction results. Specifically, the prediction result ŷ is:
ŷ = argmax_i=1,2,...,C g_3D(z_3DF^4)_i,
where C denotes the total number of classes in the dataset.
§ EXPERIMENTS
§.§ Experiment Settings
§.§.§ Datasets
We conduct experiments on three large-scale outdoor datasets, including SemanticKITTI <cit.>, SemanticKITTI-O <cit.> and Nuscenes <cit.>. SemanticKITTI provides the dense segmentation labels for 00-10 sequences, in which sequence 08 is used for validation and others are used for training. The ground truth of sequences 11-21 is not available to the public and is used for testing. Two front-view colorful images are equipped with each LIDAR scan in SemanticKITTI. We use the image captured by the left camera in our experiments. NuScenes contains 8130 samples for training, 6019 samples for validation, and 6008 samples for testing. Six images are equipped for every LIDAR scan in Nuscenes, and we randomly pick one image for training. SemanticKITTI-O is a subset of SemanticKITTI, which contains the points in the overlapping FOVs of the camera and LIDAR. The reason that PMF <cit.> proposed SemanticKITTI-O is that PMF cannot be applied on the points that are out of the FOV of the camera because of its 2D-to-3D fusion scheme.
§.§.§ Evaluation Metrics
We adopt the commonly used mean intersection-over-union (mIoU) of all classes as the evaluation metric. Specifically, mIoU is formulated as:
mIoU = 1/C∑_c TP_c/(TP_c + FP_c + FN_c),
where TP_c, FP_c, and FN_c denote the numbers of true positive, false positive, and false negative points for class c.
In addition, we also report the frequency-weighted IOU (fwIoU) provided by the NuScenes leaderboard. FwIoU is a weighted version of mIoU by the point-level frequency of different classes.
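For reference, a minimal NumPy computation of the per-class IoU and its mean over classes is sketched below; classes absent from both prediction and ground truth are simply skipped here, which may differ from the official benchmark scripts.

```python
# Minimal sketch of the mIoU computation from predicted and true point labels.
import numpy as np

def mean_iou(pred, true, num_classes):
    ious = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (true == c))
        fp = np.sum((pred == c) & (true != c))
        fn = np.sum((pred != c) & (true == c))
        if tp + fp + fn > 0:                  # skip classes absent from both
            ious.append(tp / (tp + fp + fn))
    return float(np.mean(ious))
```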
§.§.§ Network Settings
The camera branch is a ResNet101 <cit.> network pre-trained using the Cityscapes <cit.> dataset. Following 2DPASS <cit.>, the 2D knowledge branch and 3D LIDAR branch are two modified SPVCNN <cit.> networks with the same structure. The feature maps from the three branches are first reduced to dimensions of 128 and 256 for the SemanticKITTI and NuScenes datasets, and then they are upsampled through bilinear interpolation to the original scale and used for CMD and BFB. As shown in Fig. <ref> (a), we use feature maps from 4 scales for better performance.
§.§.§ Training and Inference Details
Our model is trained in an end-to-end manner with the SGD optimizer. The initial learning rate is set to be 0.24, following 2DPASS <cit.> and SPVCNN <cit.>. We train the model for 128 epochs for SemanticKITTI and 80 epochs for NuScenes dataset. We use the commonly used augmentation strategy in the LIDAR semantic segmentation, including global scaling with a random scaling factor sampled from [0.95, 1.05], and global rotation around the Z axis with a random angle. Image augmentation includes horizontal flipping and color jitter. The cropped image size is 1200 × 360 (W × H) for SemanticKITTI and 400 × 240 for NuScenes. The voxel size in the 2D knowledge branch and 3D LIDAR branch is set to 0.1. We train our model with batch size 8 on 2 Nvidia Tesla A100 GPUs with 80G memory.
§.§ Results on Benchmarks
§.§.§ Results on SemanticKITTI-O
PMF <cit.> provides a comprehensive benchmark on the SemanticKITTI-O validation set, as shown in Table <ref>. The traditional 2D-to-3D fusion methods like PointPainting <cit.>, RGBAL <cit.>, and PMF conduct both training and inference based on the LIDAR and camera modality data, while our CMDFusion is trained on LIDAR and camera pairs but does not require the camera data during inference. We can see that our method significantly surpasses the PMF method by 6.2 mIoU. Note that our CMDFusion can be trained on the whole SemanticKITTI dataset thanks to our 2D knowledge branch and CMD, while PointPainting, RGBAL, and PMF can only be trained on the training set of SemanticKITTI-O due to their 2D-to-3D fusion scheme.
§.§.§ Results on SemanticKITTI
Similar to 2DPASS <cit.>, our CMDFusion is trained on the LIDAR and camera modalities, while only the LIDAR modality is required during inference, so 2DPASS and our CMDFusion can be tested on the whole LIDAR point cloud. However, our CMDFusion includes both 2D-to-3D and 3D-to-2D fusion while 2DPASS only includes 3D-to-2D fusion, so our method surpasses 2DPASS according to Table <ref>. Note that 2DPASS only released the codebase and a checkpoint trained without the validation set included in the training set and without instance-level augmentation, so we retrain their model following the same setting and evaluate on the test set. We also try their released checkpoint on the test set and find that both of them achieve a similar mIoU (67.7). We follow the same setting for a fair comparison and our method achieves the better performance (68.6 mIoU). We also try the instance-level augmentation from Polarmix <cit.> on 2DPASS and our method, and our method still surpasses 2DPASS by 0.6 mIoU. Note that since 2DPASS does not release the code to reproduce the performance reported in their paper, we only compare with them under the same training settings, where our method achieves the better performance. To avoid the mis-correspondence between images and the LIDAR point cloud brought by the instance-level augmentation, we do not involve the camera branch during finetuning, and use the frozen 2D knowledge branch to provide 2D information and only finetune the 3D LIDAR branch. In general, our method achieves the best performance among all public methods.
§.§.§ Results on NuScenes
Table <ref> shows that our method achieves better performance (2.0 mIoU) than 2DPASS. Similar to the SemanticKITTI, the performance of 2DPASS comes from the higher one between our retrained model and their released checkpoint. Unlike the SemanticKITTI dataset, the NuScenes dataset provides 6 images to cover the FOV of the LIDAR, so the 2D-to-3D fusion methods like PMF <cit.> and 2D3DNet <cit.> can also be evaluated on the whole LIDAR point cloud. Among all fusion-based methods, our CMDFusion achieves the best performance.
§.§.§ Visualization
We provide two samples from SemanticKITTI and NuScenes datasets in Fig. <ref>. The top sample shows that 2DPASS and our method have less error on the building compared to the SPVCNN, which illustrates the effectiveness of multi-modality fusion. Besides, our method has better results on the car and truck than 2DPASS, because 2D-to-3D fusion is involved in our method but not in the 2DPASS. In addition, we visualize the feature representation of 2DPASS and our method on the NuScenes dataset. As shown in Fig. <ref>, our method has more discriminative features, e.g., the pedestrian class is more separable in our method than 2DPASS.
§.§ Runtime Analysis
Table <ref> provides the runtime analysis on the NuScenes dataset. PointPainting, RGBAL, and PMF use 2D networks for semantic segmentation since the input is range-view or perspective-view, so they can be accelerated using TensorRT by a large margin (125.0 to 22.3 ms for the PMF method). In contrast, the 3D network in Cylinder3D, 2DPASS, and our method cannot be accelerated by TensorRT. Compared to PMF without TensorRT, our method has a smaller number of FLOPs and parameters during inference, while sharing the same runtime. Compared to 2DPASS, our method achieves better performance since two 3D networks are used during inference (2D Knowledge branch and 3D LIDAR branch), which inevitably consumes more runtime.
§.§ Ablation Study
We conduct a careful ablation study to show the effectiveness of different modules in our method. The comprehensive ablation results are based on the Semantic-O dataset since the classical 2D-to-3D fusion without CMD can only be applied on the points in the overlapping FOVs of LIDAR and camera. The results are in Table <ref>. The baseline refers to a single SPVCNN 3D network. We can see that both 3D-to-2D fusion and 2D-to-3D fusion are helpful, but 2D-to-3D fusion brings more performance gain since the camera information is explicitly injected into the LIDAR branch. After we replace the camera branch (CB) with a frozen CB pre-trained on Cityscapes, the performance is further improved. The reason may be that the pre-trained camera branch could provide additional information for the current LIDAR point cloud dataset. Then we introduce cross-modality distillation (CMD) to let a 3D network output the 2D information so that the model could be trained on the whole dataset rather than the overlapping FOVs of the camera and LIDAR. As a result, the performance is greatly boosted by the CMD. Similar to 2DPASS, we also apply the voting test-time augmentation (TTA), i.e., rotating the input point cloud with 12 angles around the Z axis and averaging the prediction scores as the final outputs. TTA brings better performance by 2.46 mIoU.
§ CONCLUSION
In this paper, we propose a Bidirectional Fusion Network with Cross-Modality Knowledge Distillation (CMDFusion) to fuse the information of the camera and LIDAR for better LIDAR semantic segmentation. Compared to the 2D-to-3D fusion-based method PMF <cit.>, our proposed Cross-Modality Distillation (CMD) module solves the problem that the camera branch cannot output the 2D information for those points out of the FOV of the camera. Compared to the 3D-to-2D fusion-based method 2DPASS <cit.>, our proposed Bidirectional Fusion Block (BFB) contains additional 2D-to-3D fusion, which explicitly strengthens the 3D information through 2D information injection for better LIDAR semantic segmentation. We show the effectiveness of our proposed method through comprehensive experiments on the SemanticKITTI and NuScenes datasets. Overall, we provide an alternative approach to fully utilize the multi-modality information for 3D semantic segmentation, and introduce a new and feasible way to solve the problem that multiple sensors' FOVs do not fully overlap. We hope this paper can provide inspiration for future work on autonomous vehicles and robots.
§ ACKNOWLEDGMENT
This work is supported by Alibaba Group through Alibaba Research Intern Program.
IEEEtran
|
http://arxiv.org/abs/2307.04682v1 | 20230710163546 | Properties underlying the variation of the magnetic field spectral index in the inner solar wind | [
"J. R. McIntyre",
"C. H. K. Chen",
"A. Larosa"
] | astro-ph.SR | [
"astro-ph.SR",
"physics.plasm-ph",
"physics.space-ph"
] |
0000-0001-9763-9414]Jack R. McIntyre
Department of Physics and Astronomy, Queen Mary University of London, London, E1 4NS, UK
0000-0003-4529-3620]Christopher H. K. Chen
Department of Physics and Astronomy, Queen Mary University of London, London, E1 4NS, UK
0000-0002-7653-9147]A. Larosa
Department of Physics and Astronomy, Queen Mary University of London, London, E1 4NS, UK
Using data from orbits one to eleven of the Parker Solar Probe (PSP) mission, the magnetic field spectral index was measured across a range of heliocentric distances.
The previously observed transition between a value of -5/3 far from the Sun and a value of -3/2 close to the Sun was recovered, with the transition occurring at around 50 R_⊙ and the index saturating at -3/2 as the Sun is approached. A statistical analysis was performed to separate the variation of the index on distance from its dependence on other parameters of the solar wind that are plausibly responsible for the transition; including the cross helicity, residual energy, turbulence age and the magnitude of magnetic fluctuations. Of all parameters considered the cross helicity was found to be by far the strongest candidate for the underlying variable responsible. The velocity spectral index was also measured and found to be consistent with -3/2 over the range of values of cross helicity measured. Possible explanations for the behaviour of the indices are discussed, including the theorised different behaviour of imbalanced, compared to balanced, turbulence.
§ INTRODUCTION
The solar wind is known to contain a turbulent cascade and it is proposed that this cascade plays a role in its heating and acceleration <cit.>. An important diagnostic of the turbulence is the spectral index, α, defined by E(k) ∝ k^α, where E(k) is the trace power spectrum and k is wavenumber. The Parker Solar Probe (PSP) mission <cit.> allows us to measure this index across an unprecedented range of heliocentric distances and environments, having already reached a heliocentric distance of less than 14 R_⊙. Using data from its first two orbits, <cit.> found the magnetic field spectral index, α_B, to vary with heliocentric distance, appearing consistent with -5/3 at 0.6 au but consistent with -3/2 at 0.17 au. Whether the spectrum would continue to shallow for measurements closer to the Sun was unclear. Using later encounters <cit.> and <cit.> found the same transition in α_B across a wider range of distances.
The two extremes of the transition, -5/3 and -3/2, are common predictions for α_B from theoretical models of MHD turbulence. As these predictions are arrived at by making different assumptions about the turbulence, the value of the index can give us insight into its physics. To construct such models it is useful to consider the turbulence in terms of the Elsasser variables, defined as δz^± = δv±δb, where δv is the perturbation to the velocity field and δb = δB / √(μ_0 ρ), where ρ is the density, is the perturbation to the magnetic field in velocity units <cit.>. From the ideal MHD equations, the evolution of these Elsasser variables is given by
∂_t δz^±∓ (𝐕_A·∇) δz^± + (δz^∓·∇) δz^± = -∇p̃,
where 𝐕_A is the Alfvén velocity and p̃ is the total pressure, the sum of the plasma pressure and the magnetic pressure. From this, perturbations to the Elsasser fields can be viewed as wave packets travelling along the background field at the Alfvén speed, with δz^+ and δz^- corresponding to travel in opposite directions. Since the nonlinear term in Equation (<ref>) requires the presence of both variables to be non-zero, MHD turbulence can be viewed in terms of the interaction of these counter propagating wave packets <cit.>.
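As a simple illustration, the Elsasser fields can be constructed from measured fluctuations as in the sketch below (SI units and a proton-dominated plasma are assumed; this is an illustration, not the analysis code used in this paper).

```python
# Sketch of constructing the Elsasser variables from velocity and magnetic
# field fluctuations (SI units; an illustration, not this paper's analysis code).
import numpy as np
from scipy.constants import mu_0, m_p

def elsasser(dv, dB, n_p):
    """dv: (N, 3) velocity fluctuations [m/s]; dB: (N, 3) magnetic field
    fluctuations [T]; n_p: proton number density [m^-3] (scalar)."""
    rho = m_p * n_p                       # mass density, protons assumed dominant
    db = dB / np.sqrt(mu_0 * rho)         # magnetic fluctuation in velocity units
    return dv + db, dv - db               # delta z^+ and delta z^-
```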
In the Iroshnikov-Kraichnan model <cit.> the turbulence is taken to be isotropic and, in the picture described above, a wave packet must interact with many others to be significantly deformed. In other words the characteristic propagation time of the wave packets is shorter than the nonlinear time of their interactions, this is known as weak turbulence. The resulting spectral index is -3/2. However, solar wind turbulence is anisotropic <cit.>. This is accounted for in the <cit.> model, where the wave packets are elongated along the background magnetic field such that only one interaction of wave packets is necessary to significantly deform those wave packets. Here the turbulence is critically balanced, where the propagation and nonlinear times are taken to be equal — this condition has been observed to hold in the solar wind <cit.>. The result is a -5/3 scaling with respect to k_⊥, the wavenumber perpendicular to the background field, and a -2 scaling with respect to k_∥, the wavenumber parallel to the background field. Consistent with this, <cit.> reported a k_∥^-2 scaling in the solar wind. To this picture the <cit.> model adds that the angular alignment of the velocity and magnetic field fluctuations is dependent on scale, giving a k_⊥^-3/2 scaling and resulting in the wave packets having a three-dimensional anisotropic structure. Observations of the solar wind <cit.> and simulations <cit.> provide mixed evidence for such alignment of fluctuations or 3D anisotropic structure in MHD turbulence. All these models assume homogeneous background conditions, the picture becomes more complicated when gradients in these conditions are considered <cit.>. Further, these models assume the energy in the two Elsasser fields to be of comparable magnitude, this is known as balanced turbulence.
It is known that the level of imbalance in energy between the two Elsasser fields varies with heliocentric distance <cit.> and α_B has been found to depend on the cross helicity, a measure of the imbalance, and residual energy at 1 au <cit.>. <cit.> found a dependence of α_B with cross helicity across the distance range provided by PSP. Further, some theoretical models <cit.> and simulations <cit.> suggest imbalanced turbulence behaves differently from balanced turbulence, though this has been disputed <cit.>. The level of imbalance therefore appears as a clear, plausible parameter behind the observed transition in α_B with distance, however, no previous theoretical work predicts this particular effect.
In contrast, the velocity spectral index, α_v, has been found to be consistent with -3/2 as cross helicity is varied at 1 au <cit.> and to not vary with distance across the distance range provided by PSP <cit.>. However, <cit.> reported α_v to evolve with heliocentric distance from -3/2 at 1 au to -5/3 at distances of several au, with some evidence of shallower spectra being associated with regions of high cross helicity.
An alternative explanation for the transition in α_B could lie in the fact that plasma at greater radial distances has had a greater number of nonlinear times pass during its journey from the Sun. It, therefore, might be argued that the transition is reflective of the turbulence evolving during its journey from an earlier transient state. <cit.> suggested that the turbulence age, a parameter characterising this effect, could be behind the transition after finding variation of the index with both solar wind speed and radial distance. However, <cit.> found that, for distances as close as 35.7 R_⊙, the travel time from the Sun is much greater than the outer scale nonlinear time so the turbulence should already be well evolved.
As the parameters discussed above themselves vary with distance it is possible that α_B's apparent dependence on those parameters is merely a reflection of the parameters' and α_B's shared dependence on distance. In this paper a statistical analysis is presented, which, for the first time, rigorously separates the dependence of α_B on distance from α_B's dependence on other properties of the solar wind, in order to clearly identify which is controlling its behaviour and therefore the nature of the MHD inertial range in the solar wind.
§ DATA
PSP data from orbits 1 to 11 were used, covering the date range 1st October 2018 to 31st March 2022. The magnetic field data were provided by the fluxgate magnetometer (MAG) of the FIELDS instrument suite <cit.>, with the 4 samples per cycle data product being used throughout this paper. The ion velocity data were provided by the SPAN-I instrument of the SWEAP suite <cit.>, with bi-Maxwellian fits <cit.> used during encounters 2 to 7, where available, and moments being used otherwise. Fits data were only used where at least 3 ϕ bins were fitted to, in order to ensure that the proton core was sufficiently captured. Density data were obtained from the quasi-thermal noise (QTN) measurements made by the Radio Frequency Spectrometer Low Frequency Receiver <cit.>. Density data from SPAN-I were also used but only as a check on the quality of the velocity data, as described in Section 3.2.
§ RESULTS
§.§ Dependence of magnetic spectral index with distance
The magnetic field data were divided into intervals of six hour duration in order to study the dependence of the spectral index, α_B, on heliocentric distance, r. Only intervals where PSP was at a heliocentric distance of less than 150 R_⊙ were considered and any intervals with more than 1% of data points missing were excluded from the analysis. This left 1873 intervals. For each interval a fast Fourier transform was performed to produce a trace power spectral density. Invoking the Taylor hypothesis <cit.> allows such frequency spectra to be interpreted as wavenumber spectra. <cit.> found the Taylor hypothesis to be appropriate for analysis with PSP data, even when working with data from its closest approaches to the Sun. α_B was calculated for each interval in the spacecraft-frame frequency range 10^-2 Hz < f_sc < 10^-1 Hz; it was verified that this corresponds to the MHD inertial range for each interval used. All the analysis in this paper involving the magnetic spectral index was repeated with α_B calculated over a range of a fixed number of ion gyroradii (assuming the Taylor hypothesis); this was found to have no significant impact on the results.
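A minimal sketch of this estimate, assuming the three magnetic field components of one interval are held in uniformly sampled NumPy arrays (the variable names and sampling rate fs are illustrative, not taken from the actual pipeline):

import numpy as np

def trace_psd(bx, by, bz, fs):
    # Trace power spectral density from the FFT of each field component.
    n = len(bx)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = sum(np.abs(np.fft.rfft(c))**2 for c in (bx, by, bz))
    psd = psd * 2.0 / (fs * n)  # one-sided normalisation
    return freqs, psd

def spectral_index(freqs, psd, f_lo=1e-2, f_hi=1e-1):
    # Least-squares power-law slope alpha_B over the chosen spacecraft-frame band.
    band = (freqs >= f_lo) & (freqs <= f_hi)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return slope

The same two steps, applied interval by interval, give the α_B values analysed below.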
The index, α_B, calculated for each interval is shown in Figure <ref> as a function of heliocentric distance, r. At large distances the results are consistent with a -5/3 scaling but are close to a -3/2 scaling at the closest distances to the Sun. The transition between the two values occurs at about 50 R_⊙. This result is in agreement with <cit.>, with the additional finding that the index appears to saturate near -3/2 as the Sun is approached.
The transition is further illustrated in Figure <ref>. A selection of trace power spectra from the intervals are shown, with the colour of the spectra indicating the heliocentric distance at which they were measured. The spectra have been smoothed by averaging over a sliding window of a factor of two. Consistent with the above discussion, the spectra measured closest to the Sun are clearly shallower than those at the greatest distances and are consistent, in their inertial range, with a -3/2 scaling indicated by the upper solid black line. The spectra measured at the furthest distances are consistent with a -5/3 scaling, indicated with the lower solid black line.
§.§ Dependence on cross helicity and residual energy
To investigate the mechanism behind the transition in the value of α_B, other parameters of the solar wind, plausibly underlying the transition, were also measured. In order to determine which, if any, of these parameters may be responsible for the transition, the variation of α_B with distance, r, was separated from its variation with these parameters.
Those considered include the normalised cross helicity, defined as
σ_c = ⟨δz^+2 - δz^-2⟩/⟨δz^+2 + δz^-2⟩,
and the normalised residual energy, defined as
σ_r = 2 ⟨δz^+·δz^-⟩/⟨δz^+2 + δz^-2⟩,
where the angular brackets represent averages taken over the interval.
The imbalance and alignment of the Elsasser fields, characterised by σ_c and σ_r respectively, are a factor in determining the magnitude of the non-linear term in Equation (<ref>), the governing equation of ideal MHD turbulence. This, and their known radial dependence <cit.>, make σ_c and σ_r clear candidates for a potential parameter underlying the α_B transition.
The data were divided into one hour intervals. Only intervals where PSP was at heliocentric distance of less than 80 R_⊙ were considered. Any interval with at least 1% of the magnetic field data, 10% of the ion velocity data or 80% of the density data missing was discarded. Intervals where the average SPAN-I measured density was less than 10% of the density measured from quasi-thermal noise were also discarded. This final condition, along with the condition on the heliocentric distances of the intervals, is to ensure the SPAN-I measurements are sufficiently capturing the velocity distribution of the solar wind, which is not always fully in the instrument's field of view <cit.>. After application of these conditions, 1894 intervals remained, of which 558 obtained their velocity data from bi-Maxwellian fits, the remainder from moments.
Both σ_c and σ_r were calculated in the inertial range. This was achieved by determining the perturbations to the Elsasser variables in Equations <ref> and <ref> using
δz^±(t) = z^±(t+τ) - z^±(t), with τ≈100 s, a duration that corresponds to the inertial range. z^±(t) were calculated using only the magnetic field and ion velocity components perpendicular to B_0, the mean magnetic field of each interval. Figure <ref>(a) and (b) show |σ_c| and |σ_r| as functions of r. The radial dependence of both quantities is immediately apparent with intervals of high imbalance and low residual energy being more frequent closer to the Sun. Note that when calculated with moments |σ_c| tended to be slightly lower than when calculated with the bi-Maxwellian fits. The apparent decrease in |σ_c| as the Sun is approached at the smallest r displayed, where only moments are available, is therefore possibly artificial.
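For reference, σ_c and σ_r for one interval can be computed along the following lines, assuming the perpendicular velocity and magnetic field fluctuations are stored as N x 2 arrays with the magnetic field already converted to Alfvén (velocity) units, and with the ~100 s lag expressed in samples (names are illustrative):

import numpy as np

def elsasser_increments(v_perp, b_perp, lag):
    # Increments of z+- = v +- b over a lag corresponding to roughly 100 s.
    z_plus, z_minus = v_perp + b_perp, v_perp - b_perp
    return z_plus[lag:] - z_plus[:-lag], z_minus[lag:] - z_minus[:-lag]

def sigma_c_sigma_r(dz_plus, dz_minus):
    ep = np.mean(np.sum(dz_plus**2, axis=1))             # <|dz+|^2>
    em = np.mean(np.sum(dz_minus**2, axis=1))            # <|dz-|^2>
    cross = np.mean(np.sum(dz_plus * dz_minus, axis=1))  # <dz+ . dz->
    sigma_c = (ep - em) / (ep + em)
    sigma_r = 2.0 * cross / (ep + em)
    return sigma_c, sigma_r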
For each interval α_B was calculated, as described in the previous section. The intervals were binned by absolute cross helicity or absolute residual energy and the mean index for each bin determined, with associated standard error. The results are shown in <ref>(c) and (d) for |σ_c| and |σ_r|, respectively. The clear trends of increasing index for increasing imbalance and decreasing index for increasing absolute residual energy are consistent with the trends of α_B, |σ_c| and |σ_r| with r. These strong trends make both |σ_c| and |σ_r| good candidates for the analysis of this paper.
To examine whether σ_c, say, may be behind the transition in α_B, the variation of α_B with r was separated from the variation of α_B with |σ_c|. In order to do this, one of r or |σ_c| was held approximately constant and the response of α_B to varying the other under this constraint was observed. Consider first isolating the variation of α_B with |σ_c| from its variation with r. The intervals were binned according to r and, within each bin, a linear fit of α_B against |σ_c| was performed. For each bin the gradient of the linear fit, γ, and the associated 95% confidence interval from that fit are displayed in Figure <ref>(a), against the arithmetic centre of the heliocentric distance range of that bin. 12 of the confidence intervals do not contain zero and so have an associated γ statistically different from zero. This is strong evidence that, even when r is kept approximately constant, α_B continues to vary with σ_c.
To isolate the variation with r from the variation with σ_c, a similar procedure was followed. In this case the intervals were binned by |σ_c| and, within each bin, a linear fit of α_B against r was performed. Again a gradient, γ, and associated 95% confidence interval were obtained; the results are shown in Figure <ref>(b). In this case only 4 of the bins have an associated γ which is statistically different from zero, and the sign of γ is inconsistent across bins. There is therefore little evidence of a trend with r remaining when |σ_c| is held approximately constant. Figures <ref>(a) and (b) therefore suggest cross helicity is a strong candidate for a parameter underlying the observed transition in α_B.
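The separation procedure can be sketched as follows, with per-interval values held in NumPy arrays; here the 95% confidence interval on the fitted gradient is taken from the covariance of a least-squares fit, which is one reasonable reading of the method rather than the exact implementation used:

import numpy as np

def binned_gradients(x_bin, x_fit, alpha, n_bins=15, min_count=10):
    # Bin intervals by x_bin; within each bin fit alpha against x_fit and
    # return the gradient gamma with an approximate 95% confidence interval.
    edges = np.linspace(x_bin.min(), x_bin.max(), n_bins + 1)
    results = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (x_bin >= lo) & (x_bin < hi)
        if sel.sum() < min_count:
            continue
        coeffs, cov = np.polyfit(x_fit[sel], alpha[sel], 1, cov=True)
        gamma, gamma_err = coeffs[0], 1.96 * np.sqrt(cov[0, 0])
        results.append((0.5 * (lo + hi), gamma, gamma_err))
    return results

For example, binned_gradients(r, abs_sigma_c, alpha_B) isolates the trend with |σ_c| at approximately fixed r, while swapping the first two arguments isolates the trend with r at approximately fixed |σ_c|.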
The above process was repeated with |σ_r| and r, the results are shown in Figures <ref>(c) and (d). The data points for bins containing fewer than 10 intervals are not shown and are excluded from analysis, as is the case throughout this paper. Figure <ref>(c) is analogous to <ref>(a) with the intervals binned by r to isolate the variation of α_B with |σ_r|. 6 of the bins have confidence intervals that do not contain zero. There is, therefore, weaker evidence that the trend of α_B with |σ_r| remains when r is held approximately constant compared to the |σ_c| case. Figure <ref>(d) is analogous to <ref>(b); the intervals are binned by |σ_r| to isolate the effect of varying r on the index. Only 5 of the bins have associated γ with confidence intervals that do not include zero and therefore there is some evidence that holding |σ_r| constant has removed the apparent trend with r. Overall, Figures <ref>(c) and (d) suggest some evidence in favour of residual energy as a candidate for a parameter underlying the observed transition but this evidence is weaker than that for the cross helicity.
It should be noted that σ_c and σ_r do not vary independently and so it is possible that an apparent trend with one is due to the trend with the other. The values of σ_r and σ_c for the intervals used are plotted against each other in Figure <ref>. From Equations (<ref>) and (<ref>) it is apparent that σ_c^2 + σ_r^2 ≤ 1. There is a tendency, observed in previous studies <cit.>, for the points to preferentially lie towards the edge of the circle this condition defines and cluster in the negative residual energy, positive cross helicity quadrant. Given this, it is important to separate the dependence of α_B on σ_c and on σ_r. The above analysis technique was therefore used with |σ_c| and |σ_r|, r no longer being considered. The results are shown in Figure <ref>. Figure <ref>(a) shows γ for a fit of α_B against |σ_c| for intervals binned by |σ_r|. For 14 of the bins the confidence interval does not include zero. This indicates that, even with |σ_r| held constant, there is still good evidence of statistically significant variation of α_B with |σ_c|. Figure <ref>(b) shows the reverse, with γ corresponding to a fit of α_B against |σ_r| for intervals binned by |σ_c|. In this case only two of the bins have a corresponding γ with a confidence interval that does not include zero. When |σ_c| is held constant, it appears the trend with |σ_r| vanishes. This suggests that the apparent trend with residual energy is simply a manifestation of the underlying trend with cross helicity.
§.§ Dependence on turbulence age
A parameter also considered was the turbulence age <cit.>, the approximate number of outer scale nonlinear times that have passed for a parcel of plasma during its journey from the Sun, as it is possible that the trend observed in α_B may be due to the turbulence evolving in time as it becomes fully developed.
Equation (<ref>) suggests a form for the nonlinear time of τ_nl∼λ/δ b, where λ is the scale of the fluctuation and δ b is in velocity units. Note that this form of τ_nl does not account for effects arising from alignment or imbalance of the Elsasser fields. For this paper λ was taken to be the correlation scale, measured as the time scale over which the correlation function,
C(τ)= ⟨δB(t+τ) ·δB(t) ⟩,
where δB(t) = B(t) - ⟨B⟩, decreases by a factor of e; this time scale was then converted to a length scale using the Taylor hypothesis <cit.>. δ b was taken to be the square root of the value of the magnetic field second-order structure function,
S_2(τ) = ⟨ |B(t+τ) - B(t)|^2 ⟩,
at large scales, where it reaches a steady value, in velocity units <cit.>.
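A sketch of how these two quantities can be extracted from a single interval (assuming a uniformly sampled (N, 3) magnetic field array; the conversion of the correlation time to a length via the Taylor hypothesis and of δb to velocity units is left to the caller):

import numpy as np

def correlation_time(b, dt):
    # Lag at which the autocorrelation of the fluctuations falls by a factor of e.
    db = b - b.mean(axis=0)
    c0 = np.mean(np.sum(db * db, axis=1))
    for lag in range(1, len(db)):
        c = np.mean(np.sum(db[lag:] * db[:-lag], axis=1))
        if c <= c0 / np.e:
            return lag * dt
    return len(db) * dt

def large_scale_amplitude(b, lag):
    # Square root of the second-order structure function at a large (outer-scale) lag.
    diff = b[lag:] - b[:-lag]
    return np.sqrt(np.mean(np.sum(diff**2, axis=1)))

With λ = correlation_time x V_sw (Taylor hypothesis) and δb from large_scale_amplitude in velocity units, τ_nl ~ λ/δb follows directly.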
The resulting τ_nl for each of the 1894 intervals used in the previous section is shown in Figure <ref>(a). As the correlation scale increases with distance and the magnetic fluctuation amplitudes decrease, the outer scale nonlinear time is seen to increase with distance from the Sun.
If τ_nl is taken to be constant for a given plasma parcel over its journey from the Sun, then the turbulence age is estimated as A_t = T/τ_nl, where T is the travel time from the Sun. However, calculated this way, τ_nl was found to increase at such a rate with distance that A_t would decrease with distance, which clearly cannot be correct. The assumption that the nonlinear time is constant was therefore abandoned and the following integral instead considered,
A_t(t) = ∫_0^tdt'/τ_nl(t').
Taking the solar wind speed, V_sw, to be constant with distance and performing a change of variables gives,
A_t(r) = 1/V_sw∫_r_0^rdr'/τ_nl(r').
It was then assumed that τ_nl follows a power law, τ_nl∝ r^a. Figure <ref>(a) gives justification to this assumption, with τ_nl appearing reasonably well captured by such a function. If τ_nl is measured at some distance r_m to be τ_nl,m=τ_nl(r=r_m), then τ_nl = (r/r_m)^aτ_nl,m. Performing the integral gives,
A_t(r) = 1/1-ar_m^a/V_swτ_nl,m[r^1-a-r_0^1-a].
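This expression can then be evaluated directly for each interval; a small sketch, with all distances in consistent units and a the fitted power-law exponent:

def turbulence_age(r, r0, r_m, tau_nl_m, v_sw, a):
    # A_t(r) for a power-law nonlinear time tau_nl = (r / r_m)**a * tau_nl_m,
    # with v_sw in length units per second and tau_nl_m in seconds (a != 1).
    prefac = r_m**a / ((1.0 - a) * v_sw * tau_nl_m)
    return prefac * (r**(1.0 - a) - r0**(1.0 - a))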
The value of a was calculated to be 1.85 by performing a fit of τ_nl against distance, as shown in Figure <ref>(a). This value was used for all intervals. For each interval, r_m and τ_nl,m were taken to be the values calculated for that interval, as was the case for V_sw. A value had to be set for r_0; 13R_⊙ was used for each interval, this being a value below all heliocentric distances of the intervals used. While setting a value too far from the Sun will result in a systematically underestimated A_t, note that what is important for the present analysis is the relative A_t between points, rather than the absolute A_t. The results are shown in Figure <ref>(b), with a least squares fit demonstrating that A_t increases with distance. A_t≫ 1 for all intervals, consistent with <cit.>, which would suggest well developed turbulence, and so appears to undermine the suggestion that the turbulence age may be behind the transition, though, as stated above, the form of τ_nl used here does not take into account imbalance or alignment of the Elsasser fields.
The intervals were binned by the calculated A_t and the mean α_B for each bin determined with associated standard error; the results are shown in Figure <ref>(c). Unlike in the cases of the trends with r, σ_c or σ_r, there is no clear trend of α_B with A_t. Nevertheless, the analysis of the previous section was repeated to attempt to separate any dependence of α_B on A_t from the dependence on r. Analogous to the above analysis, the intervals were binned according to distance and γ was calculated for each bin, with associated 95% confidence intervals, and is shown in Figure <ref>(d). Only 5 bins have associated error bars that do not include zero. From this, and Figure <ref>(c), it follows that the evidence for turbulence age being the parameter underlying the transition in the spectral index is far weaker than is the case for cross helicity.
§.§ Dependence on further parameters
Other parameters possibly underlying the variation in α_B were considered and the above statistical analysis repeated for each. The 1894 intervals of the previous two sections were used for each parameter examined.
<cit.> found α_B to depend on wind type, which the solar wind velocity is a common proxy for.
Further, <cit.> reported a trend of increasing index with increasing velocity. The mean solar wind velocity, V_sw, against r for each interval is shown in Figure <ref>(a), clearly illustrating the acceleration of the solar wind away from the Sun. Figure <ref>(b) shows the results of separating any dependence of α_B on V_sw from its apparent dependence on r, with the intervals being binned by r, and γ with associated confidence interval being determined for each. 5 bins have associated confidence intervals that do not include zero and the sign of γ is inconsistent across these bins. The evidence for V_sw as the underlying parameter is therefore weak.
A further parameter considered was the sampling angle, the angle between the mean magnetic field and the mean solar wind velocity in the spacecraft frame. Solar wind turbulence is known to be anisotropic <cit.>, meaning that different properties may be observed depending on the angle PSP's path makes with the background field, potentially explaining the observed index trend with distance. The measured angles, θ_BV, for each of the intervals used are shown against r in Figure <ref>(c). There is a clear trend with distance, with greater θ_BV values tending to be observed further from the Sun. A similar analysis to the above is shown in Figure <ref>(d), with the intervals here again binned by r to isolate any trend with θ_BV. Only 3 of the bins have a γ which is statistically different from zero, and so there is little evidence for a trend with the sampling angle once the trend with distance is taken into account.
The magnitude of the magnetic field fluctuations, both unnormalised and normalised by the background field, was also considered. The latter is a factor in determining the turbulence strength and so may plausibly play a role in the transition of α_B. δ B was calculated as in Section 3.3 but was not converted to velocity units. The unnormalised values against distance are shown in Figure <ref>(e) and the normalised in Figure <ref>(g). While there is a very clear negative trend with distance in the unnormalised case, there is no clear trend in the normalised case. Similar analysis to the above yields Figures <ref>(f) and <ref>(h) for the unnormalised and normalised cases respectively. In the case of the unnormalised fluctuation magnitude only 3 bins have a γ statistically different from zero, and in the case of the normalised fluctuation it is only 2, so the evidence for either underlying the transition in α_B is weak.
§.§ The velocity field spectral index
To aid with the interpretation of the above results, which point to cross helicity as the underlying parameter behind the dependence of the magnetic field spectral index on distance, the dependence of the velocity field spectral index, α_v, on cross helicity was considered.
The lower cadence of the velocity measurements compared to the magnetic field measurements makes obtaining a good measure of α_v considerably more difficult than obtaining a good measure of α_B. SPAN-I moments were used, rather than fits, to measure α_v due to noise in the fits data at high frequencies. The data were divided into one hour intervals; only those with a resolution of at least 11 seconds were used. The selection criteria described in Section 3.2 were also applied. For each interval a fast Fourier transform was performed to produce a velocity power spectrum, which was then smoothed by averaging over a sliding window of a factor of two. Many values for α_v were then obtained by calculating α_v over frequency ranges set by a sliding window, 0.2 f^* < f_sc < f^*, with f^* ranging from f_max, the maximum available frequency, down to 0.5 f_max. These ranges were selected as f_max is in the MHD inertial range for all intervals. The resulting set of indices was subjected to a moving mean of a constant number of data points, with the variance associated with each mean recorded. The mean corresponding to the smallest variance was then selected as the final value of α_v for the interval; this process is designed to select a frequency range over which the value of α_v is as close to constant as possible.
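A simplified reading of this selection procedure is sketched below (the number of sliding windows and the length of the running mean are illustrative choices, not the exact values used):

import numpy as np

def velocity_index(freqs, psd, f_max, n_windows=50, run_len=10):
    # Fit alpha_v over sliding bands 0.2*f* < f < f*, then return the running
    # mean of the fitted indices that has the smallest associated variance.
    indices = []
    for f_star in np.linspace(0.5 * f_max, f_max, n_windows):
        band = (freqs >= 0.2 * f_star) & (freqs <= f_star)
        slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
        indices.append(slope)
    indices = np.asarray(indices)
    best_mean, best_var = None, np.inf
    for i in range(len(indices) - run_len + 1):
        run = indices[i:i + run_len]
        if run.var() < best_var:
            best_mean, best_var = run.mean(), run.var()
    return best_mean, best_var  # an interval is kept only if best_var < 1e-4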
The process was deemed to have performed sufficiently well when the minimum variance was below 10^-4. Discarding intervals where this was not the case left 757 intervals. An additional 12 intervals, for which unphysical values of α_v were calculated and which contained heliospheric current sheet crossings or where the velocity distribution was not well captured, were also discarded. The measured α_v against |σ_c| for the remaining intervals is shown in Figure <ref>, with σ_c determined as in previous sections. There is no evidence for a trend of the α_v with |σ_c|, with the running mean being consistent with -3/2 for all values of |σ_c|.
§ DISCUSSION
In this paper a transition in the magnetic field spectral index has been shown, from -5/3 far from the Sun to -3/2 close to the Sun, with the transition occurring at around 50R_⊙. This is in agreement with previous observations <cit.>. A saturation of α_B at -3/2 as the Sun is approached is shown clearly for the first time. To gain insight into the physical mechanism responsible, the variation of the index with distance was separated from its variation with a number of other parameters plausibly responsible for the transition. Of all variables considered, the normalised cross helicity was found to be the only parameter to show a significant underlying effect on the spectral index. Previous work has found α_B to vary with σ_c <cit.>; this paper builds on those findings by rigorously isolating the variation with σ_c from variation with other parameters of the solar wind. This result contrasts with <cit.>, who argued that the residual energy is the main controlling parameter, and <cit.>, who argued for the turbulence age. However, the analysis presented here does not exclude the possibility of a secondary, weaker dependence on these parameters. There is no evidence for a similar trend for the velocity spectrum, which appears to be consistent with a -3/2 scaling regardless of the cross helicity. This is in agreement with observations at 1 au <cit.> and with <cit.>, which found no trend of α_v with distance using PSP data. Some existing models of imbalanced turbulence do predict different behaviour for imbalanced compared to balanced turbulence <cit.>, but none predict the results obtained.
It is possible that excess magnetic energy in some regions, represented by a negative residual energy, could manifest as current sheets, and <cit.> found the presence of current sheets to be associated with steeper magnetic spectra. This would be consistent with the trend of the magnetic index with residual energy found here. In agreement with this potential connection, <cit.> found discontinuities in the solar wind to be associated with steeper spectra. However, if such discontinuities were behind the transition in α_B, it would be expected that the dependence of α_B on |σ_r| would be stronger than its dependence on |σ_c|, which is the opposite of what has been found here. Despite this, the apparent tendency for the residual energy to be maximised for a given cross helicity (Figure <ref>) may provide a means by which the cross helicity could influence the index through this mechanism.
An alternative explanation for the transition could lie in the potentially different behaviour of imbalanced, compared to balanced, turbulence. The imbalanced regions are, for example, where the theorised “helicity barrier” is thought to be active <cit.>. Under certain conditions a forward cascade of cross helicity meets a reverse cascade of magnetic helicity near the ion gyroscale, limiting the energy that can cascade forward for the dominant Elsasser field. The resulting buildup in energy at the gyroscale could result in a shallower spectrum, hence explaining the different observed scalings for different levels of imbalance. However, this would not account for why there is only a transition in the magnetic spectral index and not the velocity index, a challenge any potential explanation has to overcome.
The mechanism behind the observed behaviour of α_B and α_v remains an open question. The strong dependence of the magnetic spectral index on the cross helicity found here perhaps points to an area where a new model of imbalanced MHD turbulence could be developed. Such a model would more fully capture the behaviour of the solar wind fluctuations than existing models and may better account for the impact of imbalance in MHD turbulence in general.
JRM is supported by STFC studentship grant ST/V506989/1. CHKC is supported by UKRI Future Leaders Fellowship MR/W007657/1. CHKC and AL are supported by STFC Consolidated Grants ST/T00018X/1 and ST/X000974/1. JRM and AL acknowledge support from the Perren Exchange Programme. We thank Lloyd Woodham for providing the SPAN fits dataset and Alexander Schekochihin for helpful discussions. PSP data are available at the SPDF (https://spdf.gsfc.nasa.gov).
|
http://arxiv.org/abs/2307.04101v1 | 20230709052851 | Enhancing Building Semantic Segmentation Accuracy with Super Resolution and Deep Learning: Investigating the Impact of Spatial Resolution on Various Datasets | [
"Zhiling Guo",
"Xiaodan Shi",
"Haoran Zhang",
"Dou Huang",
"Xiaoya Song",
"Jinyue Yan",
"Ryosuke Shibasaki"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
Enhancing Building Semantic Segmentation Accuracy with Super Resolution and Deep Learning: Investigating the Impact of Spatial Resolution on Various Datasets
Zhiling Guo^1,2,
Xiaodan Shi^2,
Haoran Zhang^2,
Dou Huang^2,
Xiaoya Song^3,
Jinyue Yan^1,
Ryosuke Shibasaki^2
^1Department of Building Environment and Energy Engineering,
The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
^2Center for Spatial Information Science, The University of Tokyo, Kashiwa, Japan
^3School of Architecture, Harbin Institute of Technology, Harbin, China
August 12, 2023
====================================================================================================================================================================================================================================================================================================================================================================================================================
The development of remote sensing and deep learning techniques has enabled building semantic segmentation with high accuracy and efficiency. Despite their success in different tasks, discussion of the impact of spatial resolution on deep learning based building semantic segmentation remains quite inadequate, which makes choosing a cost-effective data source a big challenge. To address this issue, in this study we resample remote sensing images from three study areas to multiple spatial resolutions by super-resolution and down-sampling. After that, two representative deep learning architectures, UNet and FPN, are selected for model training and testing. The experimental results obtained from three cities with two deep learning models indicate that spatial resolution greatly influences building segmentation results, with the best cost-effectiveness found at around 0.3m, which we believe is an important insight for data selection and preparation.
§ INTRODUCTION
Building semantic segmentation via remote sensing imagery has become an important research topic in recent years <cit.>. With the rapid development of data acquisition systems and machine learning, the ever-expanding choice of very high resolution (VHR) datasets <cit.> and deep learning methods <cit.> expands the opportunities for researchers to conduct more accurate analyses.
Although VHR imagery captures finer information about the landscape, it comes with higher cost, longer processing time, and larger storage requirements. Thus, evaluating the technical and economic trade-offs associated with using imagery of different resolutions is essential. Previous studies have examined the impact of resolution on plant species <cit.>, land use <cit.>, and water <cit.> pattern recognition based on coarser-resolution data or conventional machine learning methods. In this study, we investigate the impact of spatial resolution on building semantic segmentation via VHR imagery and deep learning methods, as shown in figure <ref>.
To compare the segmentation accuracy under different resolutions, we created remote sensing imagery of each study area at resolutions from 0.075m to 2.4m by super-resolution (SR) <cit.> and down-sampling. The experimental results obtained from three different study areas via two deep learning models reveal that the finest spatial resolution is not necessarily the best for building semantic segmentation tasks, and that relatively low-cost imagery is sufficient in many cases. Thus, choosing a cost-effective spatial resolution for different scenarios is worth discussing.
The main contributions of this study are two-fold. First, to the best of our knowledge, it is the first investigation of the impact of spatial resolution on deep learning-based building semantic segmentation. Second, we show that higher resolution does not always yield better segmentation accuracy. According to our dataset, a resolution of around 0.3m offers better cost-effectiveness, which enables researchers and developers to conduct their work efficiently.
§ DATA
We analyzed the impact of spatial resolution on building semantic segmentation over three representative study areas: Austin, Christchurch, and Tokyo. The original resolutions of these datasets are about 0.075m, 0.150m, and 0.300m, respectively.
§ METHODS
Variation in spatial resolution leads to differences in semantic segmentation results. First, in data preprocessing we resampled the imagery to a total of six pixel scales, covering the spatial resolution range of most VHR images, as shown in figure <ref>. After that, two representative semantic segmentation models are applied for building semantic segmentation. Finally, the comparison is conducted based on four assessment criteria.
§.§ Preprocessing
Compared with upscaling low-resolution imagery to HR space using a single filter such as bicubic interpolation, SR can increase the image resolution while providing finer spatial details than those captured by the original acquisition sensors. In this study, a typical deep learning SR model, ESPCN <cit.>, is utilized to perform SR. For resampling to lower resolutions, the pixel aggregate method is adopted. In this way, six pixel scales of 0.075m, 0.150m, 0.300m, 0.600m, 1.200m, and 2.400m are generated.
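For reference, the two resampling directions can be sketched as below: an ESPCN-style sub-pixel network in PyTorch for upscaling, and simple block averaging for the pixel-aggregate downsampling. The layer sizes follow the original ESPCN paper and should be treated as illustrative rather than the exact configuration used here.

import torch.nn as nn

class ESPCN(nn.Module):
    # Sub-pixel convolution SR network (Shi et al., 2016), simplified.
    def __init__(self, scale=2, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 5, padding=2), nn.Tanh(),
            nn.Conv2d(64, 32, 3, padding=1), nn.Tanh(),
            nn.Conv2d(32, channels * scale**2, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a scale-times larger image
        )

    def forward(self, x):
        return self.body(x)

def pixel_aggregate(img, factor):
    # Downsample an (H, W, C) array by averaging non-overlapping factor x factor blocks.
    h, w, c = img.shape
    h, w = h - h % factor, w - w % factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))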
§.§ Semantic Segmentation
As representative deep learning models, in this study we adopt UNet <cit.> and FPN <cit.> to conduct the building semantic segmentation and investigate the impact of spatial resolution on the results. In general, UNet applies multiple skip connections between upper and lower layers, while FPN obtains features through bottom-up and top-down pathways. Both models have shown high feasibility and robustness in many segmentation tasks. It should be noted that data augmentation is adopted without random scaling during training, and that a model trained on a specific area and resolution is applied to test the corresponding area and resolution, for a fair comparison.
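One possible way to instantiate comparable UNet and FPN models, together with the IoU metric used for evaluation, is sketched below; this assumes the segmentation_models_pytorch package and an illustrative ResNet-34 encoder, and is not necessarily the configuration used here.

import segmentation_models_pytorch as smp

def build_models(in_channels=3):
    # Binary building-footprint heads built on the same encoder family.
    unet = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                    in_channels=in_channels, classes=1)
    fpn = smp.FPN(encoder_name="resnet34", encoder_weights="imagenet",
                  in_channels=in_channels, classes=1)
    return unet, fpn

def iou(pred_mask, true_mask, eps=1e-7):
    # Intersection-over-union for boolean masks, the metric reported in the results.
    inter = (pred_mask & true_mask).sum()
    union = (pred_mask | true_mask).sum()
    return (inter + eps) / (union + eps)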
§ RESULTS AND DISCUSSIONS
After testing, we generated segmentation results for three cities at different resolutions with the two deep learning architectures. Figure <ref> illustrates the impact of spatial resolution on deep learning-based building semantic segmentation, and the detailed quantitative results in IoU can be found in Table <ref>. It can be seen that resolution significantly influences the segmentation results, even though images at some resolutions are generated by resampling methods. As spatial resolution decreases, the IoU at first increases slightly in Austin and remains stable in both Christchurch and Tokyo. Beyond a threshold of 0.300m, the IoU drops rapidly in all study areas. Importantly, both UNet and FPN show a similar tendency. This makes sense, as building features have a specific physical size, and spatial resolution significantly finer than this threshold may not help segmentation performance while providing redundant information. Therefore, the spatial resolution should reach a certain threshold to achieve decent accuracy, and pursuing resolution finer than that threshold is unnecessary in many cases. Such a trade-off should be considered when selecting an appropriate data source. The experimental results obtained from three cities with two deep learning models demonstrate that higher resolution is not always better, and that 0.3m resolution is a more cost-effective choice for data selection and preparation in building semantic segmentation tasks.
§ CONCLUSION
In this study, we have investigated the impact of spatial resolution on deep learning-based building semantic segmentation and demonstrated the effectiveness of super resolution techniques in enhancing segmentation accuracy. Our results suggest that spatial resolution plays a critical role in the accuracy and generalization capability of deep learning models for building semantic segmentation, and that super resolution techniques can help to overcome the limitations of low-resolution data.
To further advance this line of research, future work could extend our empirical evaluation to other deep learning models, study areas, and data sources.
§ ACKNOWLEDGEMENT
We are grateful for the support and funding provided by the JSPS 21K14261 grant.
|
http://arxiv.org/abs/2307.03873v1 | 20230708012434 | Why does dissolving salt in water decrease its dielectric permittivity | [
"Chunyi Zhang",
"Shuwen Yue",
"Athanassios Z. Panagiotopoulos",
"Michael L. Klein",
"Xifan Wu"
] | cond-mat.soft | [
"cond-mat.soft",
"cond-mat.dis-nn",
"physics.chem-ph"
] |
Department of Physics, Temple University, Philadelphia, Pennsylvania 19122, USA
Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey 08544, USA
Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey 08544, USA
[email protected]
Department of Physics, Temple University, Philadelphia, Pennsylvania 19122, USA
Institute for Computational Molecular Science, Temple University, Philadelphia, Pennsylvania 19122, USA
Department of Chemistry, Temple University, Philadelphia, Pennsylvania 19122, USA
[email protected]
Department of Physics, Temple University, Philadelphia, Pennsylvania 19122, USA
Institute for Computational Molecular Science, Temple University, Philadelphia, Pennsylvania 19122, USA
The dielectric permittivity of salt water decreases on dissolving more salt. For nearly a century, this phenomenon has been explained by invoking saturation in the dielectric response of the solvent water molecules. Herein, we employ an advanced deep neural network (DNN), built using data from density functional theory, to study the dielectric permittivity of sodium chloride solutions. Notably, the decrease in the dielectric permittivity as a function of concentration, computed using the DNN approach, agrees well with experiments. Detailed analysis of the computations reveals that the dominant effect, caused by the intrusion of ionic hydration shells into the solvent hydrogen-bond network, is the disruption of dipolar correlations among water molecules. Accordingly, the observed decrease in the dielectric permittivity is mostly due to increasing suppression of the collective response of solvent waters.
Why does dissolving salt in water decrease its dielectric permittivity
Xifan Wu
======================================================================
In chemistry and biology, water is widely referred to as the universal solvent <cit.>. As salts dissolve in water, the anomalously large dielectric permittivity of water promotes the solubilization of salt by screening interionic Coulomb interactions. At the same time, the dielectric response of water is influenced by the presence of dissolved salts <cit.>. Almost 100 years ago, it was found that the static dielectric permittivity of sodium chloride (NaCl) solution decreases as more salt is dissolved <cit.>. Later, more sophisticated experiments revealed a nonlinear behavior in which dielectric decrement slows down at high solute concentrations <cit.>. A theoretical explanation of this phenomenon was conceived soon after the first experiment. As stated in their dielectric saturation theory, Debye <cit.> and Sack <cit.> envisioned the formation of hydration shells due to the tendency of water dipoles to be aligned along electric fields of dissociated ions. Debye further estimated that ionic electric fields are strong enough to saturate the polarizability of water molecules near the ions and therefore lower the dielectric response <cit.>. Because of its built-in physical intuition, dielectric saturation has been, to date, the most adopted theory to explain dielectric decrement in salt water <cit.>.
The past half-century has witnessed significant progress in understanding water through principles of quantum mechanics and statistical physics <cit.>. This progress calls into question the dielectric saturation explanation. Indeed, consensus has been reached that the high dielectric permittivity of water is closely associated with correlated dipole fluctuations of water molecules on the underlying hydrogen(H)-bond network <cit.>. However, this collective dipolar response is missing in the picture of dielectric saturation which mainly concerns the suppressed dielectric response of individual water molecules <cit.>. More disturbingly, based on classical electrodynamics, dielectric saturation is estimated to occur on water molecules that are a few angstroms away from ions <cit.>. The above length scale is comparable to the estimated de Broglie wavelength of electrons at room temperature <cit.>. Physical interactions at such length scales are governed by quantum mechanics rather than a classical description. In this regard, density functional theory (DFT)-based <cit.> ab initio molecular dynamics (AIMD) <cit.> provides an ideal framework to predict properties of liquids from quantum mechanical principles. Indeed, recent AIMD simulations found that polarizabilities of water molecules in ionic first hydration shells are only slightly different from that in neat water <cit.>, which contradicts the dielectric saturation hypothesis.
Due to the long-range nature of the dipole-dipole interaction and the disordered liquid structure, the prediction of dielectric response in water demands both a spatially extensive model containing many hundreds of water molecules and a simulation time beyond nanoseconds <cit.>. However, AIMD simulations of such large timescale and system size are simply not feasible using current computer architectures. Thus, to date, dielectric decrement has been mostly studied using molecular dynamics with classical force fields, and the effect of electronic polarizability has been neglected <cit.>.
Herein, we overcome the challenge by studying dielectric decrement by combining AIMD and deep neural networks (DNNs) <cit.>. The liquid structures of NaCl solutions are simulated by a DNN that explicitly incorporates long-range electrostatic interactions <cit.> with periodic simulation cells containing about 4000 water molecules. Importantly, the potential is trained on DFT calculations based on the strongly constrained appropriately normed (SCAN) functional <cit.>. In addition, a second DNN <cit.> is trained separately for centers of electronic orbitals, in terms of maximally localized Wannier functions <cit.>. Notably, this second DNN allows us to rigorously partition the electronic charge density into contributions from dipole moments of individual water molecules. The dual DNNs enable efficient computations of dielectric permittivity at the DFT accuracy. (See Supplemental Material <cit.> for more details on this methodology.)
Based on linear response theory, the static dielectric permittivity of NaCl solutions, ε_NaCl(aq), is related to the fluctuation of the total dipole moment, M, by <cit.>
ε_NaCl(aq) =⟨M^2⟩/3 V k_B T ε_0+ε_∞
=⟨(M_W(aq)+M_I(aq))^2⟩/3 V k_B T ε_0+ε_∞
=ε_W(aq)+ε_W(aq)-I(aq)+ε_I(aq)+ε_∞
where V, k_B, T, and ε_0 are the system volume, Boltzmann constant, temperature, and vacuum permittivity, respectively. ε_∞ is the electronic contribution in the high-frequency limit. As expected, the theoretical ε_∞ takes small values of around 1.88-1.99 at the concentrations under consideration. We report the computed dielectric permittivity of NaCl solutions in Fig. <ref> together with experimental data. Note that both results have been normalized to enable a better comparison of dielectric decrement behavior. There is good agreement between experiments and the present calculations. In particular, the nonlinear behavior in dielectric decrement observed in experiment is well reproduced. The dielectric permittivity drops steeply at low concentrations, but its slope gradually flattens as solute concentration increases. Notably, the nonlinearity generates a bowing feature in dielectric decrement. Absolute values of the computed dielectric permittivity are reported in Supplemental Material Table 1 <cit.>. It should be noted that the predicted dielectric permittivity of neat water with the SCAN functional is 102.5, which is larger than the experimental value of 78. The overestimation of the dielectric permittivity is consistent with a previous study employing the SCAN functional <cit.>, and is particularly attributed to the self-interaction error in the SCAN functional that over-strengthens H-bonds. The slightly overstructured liquid water has been widely reported in the literature <cit.>, and its effects on observables can be approximated by the effects of decreasing the temperature, which does not affect our conclusions.
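In practice, the fluctuation formula in the first line of the equation above can be evaluated directly from a trajectory of total cell dipole moments; a minimal sketch, assuming M is an (n_frames, 3) array in SI units and ε_∞ is computed separately:

import numpy as np

K_B = 1.380649e-23       # Boltzmann constant, J/K
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def static_permittivity(M, volume, temperature, eps_inf):
    # <M^2> - <M>^2 over the trajectory; <M> vanishes for an isotropic liquid,
    # so this matches the <M^2> fluctuation term in the equation above.
    mean_M = M.mean(axis=0)
    fluct = np.mean(np.sum(M**2, axis=1)) - np.dot(mean_M, mean_M)
    return fluct / (3.0 * volume * K_B * temperature * EPS0) + eps_inf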
In NaCl solutions, the fluctuation of the overall dipole moment, M, involves contributions from both water molecules, M_W(aq), and ions, M_I(aq). Therefore, the dielectric permittivity, ε_NaCl(aq) in Eq. <ref> is composed of the self-terms, ε_W(aq) and ε_I(aq) whose dipole fluctuations are restricted to water molecules and solvated ions only, and the cross-coupling term ε_W(aq)-I(aq) reflecting dipole fluctuations in water induced by the movements of ions or vice versa. The computed values of above terms are presented in the inset of Fig. <ref>. Notably, ε_NaCl(aq) is dominated by ε_W(aq) at all concentrations, which agrees with previous findings <cit.>. Thus, dielectric decrement observed in NaCl solutions is due to the weakened dielectric response of solvent water molecules.
The dielectric component ε_W(aq) due to solvent water can be further evaluated via the dipolar correlation formalism proposed by Kirkwood <cit.> as
ε_W(aq)=ρμ^2 G_K/3 k_B T ε_0,
where ρ and μ denote water number density and average dipole moment per water molecule respectively, and G_K is the so-called correlation factor that measures the total angular correlations among water dipoles. In polar liquids, G_K is obtained by the integration of the dipolar correlation function as G_K=∫𝒞(r)dr=1/N∑_i=1^N∑_j=1^N μ̂_i·μ̂_j, where μ̂_i is the unit vector of the ith molecular dipole and N is the number of water molecules. The dipolar correlation is defined as 𝒞(r)=⟨d(0)·d(r)⟩, accounting for the spatial correlation between the dipolar density as a function of distance, r. Because of the discretized nature of water molecules, the dipolar density is defined as d(r)=∑_i=1^N μ̂_i δ(r-r_i) with r_i denoting the position vector of the ith water molecule.
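The correlation factor can be accumulated directly from the molecular dipole directions; a sketch for one configuration, assuming an (N, 3) array of unit dipole vectors (the trajectory average and the prefactors in the Kirkwood formula above are applied separately):

import numpy as np

def kirkwood_g(mu_hat):
    # G_K = (1/N) sum_ij mu_i . mu_j, i.e. |sum_i mu_i|^2 / N for one frame.
    total = mu_hat.sum(axis=0)
    return np.dot(total, total) / len(mu_hat)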
In neat water, both the dipole moment, μ, and the correlation factor, G_K, are largely enhanced by the underlying H-bond network, leading to the anomalously large dielectric permittivity <cit.>. In NaCl solutions, as shown in Fig. <ref> (a), the correlation factor, G_K, the water number density, ρ, and the water dipole moment, μ, all decrease as increasing amounts of salt dissolved, which according to Eq. <ref> leads to dielectric decrement.
The effect from the disrupted H-bond network
As seen in Fig. <ref> (a), dielectric decrement of NaCl solutions is mostly attributed to the decreased correlation factor, G_K, relative to that of neat water. Thus, the strong correlation among dipole moments in neat water is significantly suppressed in salt solutions. In neat water, the large G_K is closely associated with the tetrahedral H-bond structure, in which a water molecule at the center of a tetrahedron is H-bonded with four neighboring water molecules. The directions of dipole moments of any two H-bonded water molecules, therefore, point in a similar direction, resulting in a positive μ̂_i·μ̂_j, which gives rise to the first positive sharp peak at 2.7 Å in the dipolar correlation function in Fig. <ref> (b). Under the influence of the directional H-bonding, dipole moments on vertices of a tetrahedron also prefer to be aligned in a similar direction to some extent, which yields a second positive peak around 5.1 Å in Fig. <ref> (b). In the same fashion, the dipolar correlation propagates to the third coordination shell and beyond.
The H-bond network is increasingly disrupted as more salt is dissolved. Salt ions exert electrostatic fields that can attract water molecules by competing with the H-bonding. In the close vicinity of ions, water molecules hydrate the ions by orienting their electric dipole moments towards the ions, thereby lowering the electrostatic energy of the system, as schematically shown in Fig. <ref> (b). For a sodium cation, the first hydration shell can be described as a relatively tight sphere composed of about 5 or 6 water molecules, whose oxygen atoms are attracted to the cation at the center <cit.>. On the other hand, the first hydration shell of a chloride ion is a relatively large sphere composed of as many as 6-8 water molecules whose protons are attracted to the chloride lone pair electrons <cit.>.
Because of the intrusion of the hydration shells, water molecules in the solvent are now divided into two distinct categories: the “hydration (H) water” inside the ionic hydration shells and the “bulk (B) water” outside it. As such, the pattern of dipolar correlation is fundamentally revised. As shown in Eq. <ref>,
G_K =∫[𝒞^B(r)+𝒞^H(r)+𝒞^BH(r)]dr
=G_K^B+G_K^H+G_K^BH,
the total correlation factor G_K involves the self-terms of G_K^B (G_K^H) by dipolar correlation restricted to “bulk water” (“hydration water”) only, and the coupling term G_K^BH due to the dipolar correlation between “bulk water” and “hydration water”. The above components in correlation factors, relative to neat water, are presented in Fig. <ref> (a). (See: Supplemental Material <cit.> for more details.)
As seen in Fig. <ref> (a), the reduction in the overall correlation factor, G_K, is mostly from G_K^H, which describes the correlation among “hydration water”. This is because water molecules in hydration shells are constricted by the ion-water attraction instead of H-bonding. Within a hydration shell, the cation (anion)-water attraction reorientates the dipole moments from an H-bonding direction to a central-force direction pointing outwards (towards) ions. As such, the dipolar correlation between two neighboring “hydration water” molecules is thereby significantly suppressed. This is evidenced by the sharp negative peak at ∼ 2.7 Å in the dipolar correlation function Δ𝒞^H(r) as plotted relative to neat water in Fig. <ref> (c). Moreover, the absence of H-bonding even causes anti-correlations between two “hydration water” molecules located on the opposite sides of a single ion as schematically shown by opposite directions of water molecular dipoles in the inset of Fig. <ref> (c). Therefore, the aforementioned positive peak of neat water in Fig. <ref> (b) due to correlated dipole moments on vertices of a tetrahedron at 5.1 Å disappears. Instead, it is replaced by two negative peaks at 4.8 and 6.1 Å, which are caused by the anti-correlated water dipoles in hydration shells of Na^+ and Cl^- ions, respectively. At long range, water molecules in a hydration shell, in principle, should be correlated to those in another hydration shell. However, such correlations are also weaker than those in neat water as expected in Fig. <ref> (c). As concentration increases, the loss of G_K^H should accumulate linearly, which is responsible for most of the linear dielectric decrement in salt water.
Of course, “hydration water” is H-bonded to “bulk water”, and in this way, the H-bond network is partially restored. Nevertheless, the reconstructed H-bond structure deviates from that found in neat water. Within a hydration shell, two water molecules located on opposite sides of a single ion are anti-correlated, as mentioned above. Because of the highly directional nature of H-bonding, the anti-correlation extends to the correlation between one “hydration water” molecule and one “bulk water” molecule that is H-bonded to another “hydration water” molecule at the other side of the ion, as schematically shown by the opposite direction of green arrows in the inset of Fig. <ref> (d). Again, these anticorrelations can be identified as a broad negative peak centered at 8 Å, which weakens the dipolar correlation. As a result, G_K^BH also contributes to the decreased overall correlation factor of G_K relative to neat water, as shown in Fig. <ref> (a). Moreover, G_K^BH plays a surprisingly key role in the nonlinear dielectric decrement as evidenced by its arc shape in Fig. <ref> (a). This nonlinearity is an intrinsic property because G_K^BH describes the correlation between the dipolar density of “bulk water” d^B(r) and the dipolar density of “hydration water” d^H(r), and its value depends on the existence of both types of water, i.e., ⟨d^B(0)·d^H(r)⟩. In neat water, G_K^BH=0 since the dipolar density of “hydration water” d^H(r) is zero. As salt dissolve in water, hydration shells appear in the solution, and the absolute value of G_K^BH starts to increase, reaching its maximum at about 2.3 M, in which the NaCl solution is roughly equally occupied by “bulk water” and “hydration water”. After the maximum, G_K^BH decreases with further elevated concentrations. In principle, it will vanish again at d^B(r)=0, when the entire solution is completely occupied by hydration shells.
The tetrahedral H-bond network is expected to recover in the “bulk water” outside the hydration shell. The dipolar correlation among “bulk water” molecules is captured by the G_K^B component of the correlation factor. Indeed, the analysis in Fig. <ref> (a) shows that G_K^B of NaCl solutions at all concentrations is little different from neat water. Thus, the large decrease in the correlation factor, G_K, in salt water is mostly due to the disrupted H-bond network in the “hydration water”.
Excluded volume effect
Due to short-range repulsion, ions and water molecules are separated by 2-4 Å. This extra volume demanded by ions is no longer accessible to water molecules, and the water number density is therefore decreased. In the literature, this is referred to as the excluded volume effect <cit.>. According to Eq. <ref>, this effect should lead to the decreased dielectric permittivity. Indeed, the present computations show that the excluded volume effect makes a small contribution to dielectric decrement, in which the water number density decreases slightly with increasing solute concentration as shown in Fig. <ref> (a). Since the repelled volume by ions is proportional to the salt concentration, dielectric decrement due to the excluded volume effect is indeed linear as expected.
Local field effect
Hydrated ions, like all charged defects, change the electrostatic potential profile throughout the solution. As expected, water molecules nearby an ion are polarized in a different manner from neat water. In condensed matter physics, related phenomena have been already identified, for example around defects in semiconductors or at interfaces in solid materials, and they have long been recognized as the local field effect <cit.>. There is consensus that a proper description of local field effects, particularly for regions close to charged defects, demands electronic structure details computed from quantum mechanics. Based on DFT, the present DNN simulations yield a dipole moment, μ=2.85 (2.91) Debye for the “hydration water” of the cation (anion), which is only slightly smaller than the value of 2.99 Debye in neat water. This suggests that the capability of ions to polarize the water dipole is comparable to that of H-bonding. Indeed, it is also consistent with the recent theoretical discovery that molecular polarizabilities of the “hydration water” are only marginally different from that in neat water <cit.>. Since H-bonding is mostly electrostatic in nature, it strongly indicates that water molecules nearby ions are far from being saturated by ions’ local fields. Nevertheless, the local field effect also contributes slightly to dielectric decrement as indicated by Eq. <ref>. Because the μ of the “hydration water” is only a little smaller than in neat water, μ^2 of NaCl solutions drops slowly as a function of concentration, as shown in Fig. <ref> (a).
In addition to the SCAN ab initio simulations, we also simulated the dielectric permittivity using the classical OPC water model <cit.>. As shown in Supplemental Material <cit.>, the results obtained using the OPC model agree well with those from the SCAN-DFT approach. A notable distinction between the OPC model and the SCAN-DFT model is that the OPC model is a rigid model with a fixed dipole moment of 2.48 D, indicating that the DFT approach is necessary for accurately capturing the local field effect.
In conclusion, dielectric decrement, as a century-old problem, has been extensively studied over decades.
However, a critical question remains unresolved in the field regarding the main origin behind the dielectric decrement—whether it is the dielectric saturation effect <cit.> or the loss of dipolar correlation on the H-bond network <cit.>.
To provide an unambiguous answer, theoretical simulations must explicitly include both a polarizable model of water molecules and an accurate model of H-bonding, which can account for the dielectric saturation effect and correlation effect simultaneously. Importantly, the polarizable models of water molecules should be described from first principles at the quantum mechanics level, because the length scale of dielectric saturation effect is about a few angstroms which is comparable to the de Broglie wavelength of electrons at room temperature. In this work, we achieve the above goal by reproducing dielectric decrement in NaCl solutions on the DFT level using advanced DNNs. The results unambiguously determine that the dielectric decrement in NaCl solutions is dominated by the loss of correlations between water molecules due to the intrusion of ionic hydration shells into the H-bond network, while the contribution from dielectric saturation effect is small.
Importantly, the present computations provide a quantitative explanation of dielectric decrement in salt water; we found that the linear dielectric decrement is due to the loss of correlation within hydration shells, while nonlinear dielectric decrement is due to the loss of correlation between water in hydration shells and bulk water.
We thank Roberto Car, Linfeng Zhang, and Han Wang for fruitful discussions. This work was supported by National Science Foundation through Awards No. DMR-2053195. We also acknowledge support from the “Chemistry in Solution and at Interfaces” (CSI) Center funded by the U.S. Department of Energy through Award No. DE-SC0019394. This research used resources of the National Energy Research Scientific Computing Center (NERSC), which is supported by the U.S. Department of Energy (DOE), Office of Science under Contract No. DE-AC02-05CH11231. This research includes calculations carried out on HPC resources supported in part by the National Science Foundation through major research instrumentation grant number 1625061 and by the U.S. Army Research Laboratory under contract No. W911NF-16-2-0189. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
|
http://arxiv.org/abs/2307.07420v1 | 20230711164816 | Named entity recognition using GPT for identifying comparable companies | [
"Eurico Covas"
] | cs.CL | [
"cs.CL",
"cs.AI",
"cs.NE"
] |
Named entity recognition using GPT for identifying comparable companies
Eurico Covas
August 12, 2023
========================================================================
For both public and private firms, comparable companies analysis is widely used as a method for company valuation. In particular, the method is of great value for the valuation of private equity companies. The various approaches to the comparable companies method usually rely on a qualitative approach to identifying similar peer companies, which tends to use established industry classification schemes and/or analyst intuition and knowledge. However, more quantitative methods have started being used in the literature and in the private equity industry, in particular machine learning clustering and natural language processing (NLP). For NLP methods, the process consists of extracting product entities from, e.g., the company's website or company descriptions from some financial database system, and then performing similarity analysis. Here, using company descriptions/summaries from publicly available companies' Wikipedia pages, we show that using large language models (LLMs), such as GPT from OpenAI, yields a much higher precision and success rate than using standard named entity recognition (NER), which relies on manual annotation. We demonstrate quantitatively a higher precision rate, and show that, qualitatively, the approach can be used to create appropriate comparable companies peer groups which can then be used for equity valuation.
§ INTRODUCTION
Company valuation is the process of attributing a value to a public or private company[Public companies' stocks are listed on an exchange, allowing for more transparency and scrutiny, while private companies' stocks are privately held and a common realizable price for the shares is not easily or readily obtainable.] in a certain currency at a certain valuation date. For public companies, one could use, e.g., the exchange share price multiplied by the total number of shares as a valuation[Given that share prices fluctuate short term in a stochastic fashion, one may want to take some averaging across time, as is standard in the financial industry <cit.>.], but often there is a speculative influence on the share price, making the stock price, at short time scales, a stochastic rather than a deterministic variable <cit.>. For private companies, one does not even have a publicly available share price, so several methods have been proposed for valuation <cit.>. Nonetheless, all these quantitative methods can be used for any firm, regardless of whether it is public or private. Three of the most commonly used methods <cit.> are the comparable companies (financial multiples / relative valuation) method; the comparable transactions method; and the discounted cash-flow (DCF) method. These methods are used to value a company, rather than its stock, as they are mostly used for long-term investment or mergers and acquisitions (M&A), rather than short-term trading or arbitrage.
The first method, comparable companies or relative valuation <cit.>, is based on the ratios of one or some of the financial variables of the selected company to value against the ones from a peer group of public companies. These ratios are therefore dimensionless. As we can obtain the share value of the public companies from the stock market value, we have, therefore, the enterprise value of those companies, and through some kind of averaging (including possibly some statistical removal of outliers), we can obtain a peer or similar companies group ratio. Once the peer group ratios are obtained, one can use the financial statements of the selected company to do an inversion and obtain an enterprise value estimate for the company. The second method, comparable transactions <cit.>, analyses similar companies that have transacted recently, either an acquisition, or a merger (M&A), or an initial public offer (IPO), or a seasoned equity offerings (SEO) or similar. Then, the same approach as the relative valuation is used, using available ratios based on the valuation related to the transaction. The third method, the discounted cash-flow method or DCF for short <cit.>, uses the future projected cash-flows of the company, discounted to today, i.e., multiplied by a factor or factors to take into account the future value of money.
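To make the mechanics of the first method concrete, the following minimal Python sketch (with purely hypothetical peer figures, not data from any of the sources cited here) computes a median EV/EBITDA multiple over a peer group and inverts it to estimate a target company's enterprise value:

from statistics import median

# Hypothetical peer group: enterprise value (EV) and EBITDA in millions, for illustration only.
peers = {
    "PeerA": {"ev": 1200.0, "ebitda": 150.0},
    "PeerB": {"ev": 950.0, "ebitda": 100.0},
    "PeerC": {"ev": 2100.0, "ebitda": 300.0},
}

# Dimensionless EV/EBITDA multiple per peer, then a robust (median) peer multiple.
multiples = [p["ev"] / p["ebitda"] for p in peers.values()]
peer_multiple = median(multiples)

# "Inversion" step: apply the peer multiple to the target company's own EBITDA.
target_ebitda = 80.0  # millions, hypothetical
estimated_ev = peer_multiple * target_ebitda
print(f"Peer EV/EBITDA multiple: {peer_multiple:.2f}")
print(f"Estimated enterprise value of the target: {estimated_ev:.1f}m")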
There are many other methods and quite a lot of variations on the above methods, but these are, according to some surveys <cit.>, some of the most used approaches. Most of the equity valuation research (both public and private) is, however, done quite manually, whereby the equity researcher or financial analyst uses some source of financial data (e.g., Bloomberg, Factset, Reuters – now called Refinitiv – S&P, UK Companies House, U.S. Securities and Exchange Commission) and a spreadsheet model[Sometimes the spreadsheet, usually in Excel, is augmented with some Visual Basic (VBA) code.] to value the selected company. This can lead to errors, biases, lack of traceability and reproducibility, etc. It is also quite costly, and recently there have been some advances in the automation of this process by several fintech software houses, e.g., Valutico or Equity-X, in order to solve the problem of consistency and process industrialization.
For the first two valuation methods described above, the comparable companies and comparable transactions approaches, it is essential to define what we mean by similar or peer group companies.
There are many other reasons to want to be able to identify comparable companies, e.g., for M&A <cit.>; for the owners to establishing the competitive landscape; for economic research into business networks; among others.
There are also many ways to identify the company's peer group, some approaches being qualitative, some being quantitative.
A particularly common method is to choose companies within the same sector by using a standard industry classification such as the Standard Industrial Classification (SIC) system, the North American Industry Classification System (NAICS), and the Global Industry Classification Standard (GICS) system <cit.>.
However, as far back as <cit.>, it has been noted that industry groupings contribute only a small percentage of stock return variance. Other studies <cit.> have tried to show that most of these industry classifications (with maybe the exception of the GICS system) do not explain most of the cross-sectional share movement within an industry or sub-industry code.
<cit.> found that research analysts or equity valuation experts tend to choose peer companies based on the size, asset turnover, the industry classification, and trading volume, among other less relevant variables.
Some quantitative approaches for peer selection, based on “big data”, have been proposed <cit.>, whereby companies' peer groups are built based on top co-searches by financial analysts, using databases such as the USA Electronic Data Gathering, Analysis and Retrieval system (EDGAR), part of the Securities and Exchange Commission (SEC), used by, e.g., sell-side stock analysts. The results seem to show that the so-called “wisdom of the crowd” approaches for peer construction can perform well. The results by <cit.> seem to support this, namely that co-searches by analysts help to define more homogeneous peer groups than industry classifications. Some <cit.> even used natural language processing (NLP) to extract data from social media (Twitter) to define a company network, i.e., peer groups that more readily explain stock movement than industry classification groups.
<cit.> and <cit.> attempted to show that using the similarity of fundamentals to define peer groups can increase the precision of the valuation and therefore provide more meaningful company groupings. <cit.> found that industry classification codes, such as SIC codes, were not of much use to investment bankers when choosing peers, and that the products or services sold by the companies were of much more interest when devising peer groups.
Recently, some <cit.> have even used advanced large language models (LLMs), a form of NLP, to reveal a company similarity network across most of the world's public companies, and seem to have shown that such a company similarity network can be used to create peer groups that, when traded, would lead to a significantly higher return than using analysts' co-searches, fundamentals clustering, or industry classification.
§ RESULTS
Following on the above approaches to finding similar companies based on products <cit.> and company descriptions <cit.>, and recognizing that we need a method, e.g., named entity recognition (NER), that can create peer groups in a consistent way with good performance, in this paper we study the efficiency of the so-called Chat Generative Pre-Trained Transformer, commonly known as GPT, in this case the GPT 3.5 (model gpt-3.5-turbo) LLM <cit.> from openAI, against a baseline NER model using spaCy <cit.>. We investigated the ability of both models to extract the products and services companies provide, based on the publicly available descriptions of those firms on their respective Wikipedia websites. LLM models build a statistical probability distribution over which words appear consecutively and are based on unsupervised learning[Unsupervised learning refers to statistical methods that create a model from data that is not labelled by humans. This is in contrast with supervised learning, where data is labelled, e.g., a set of images together with their labels (descriptions), or, to give an example within the subject of this paper, the fundamentals of a company as the data and the value of the company, e.g., the dividend yield or the future growth rate, as the label <cit.>.], whereas standard NER models train a neural network mapping unstructured text to a set of entities based on supervised learning. Neural network systems such as spaCy have been, for a while, the industry default approach for NER; recently, however, LLM models have started showing considerable improvements and have attracted substantial attention.
We first identify 13 publicly listed companies that have a Wikipedia page, and extract the summary part of the Wikipedia page as pure text. The data set is described below in table <ref>.
We have kept the above Wikipedia data set download frozen as of 7 June 2023, for consistency and to be able to reproduce results. For compatibility with the use of the GPT 3.5 template
we also remove any page breaks or newlines, and normalize all non-English accentuation, so e.g., “…Nescafé…” becomes “…Nescafe…”. We also simply removed any non-ASCII codes[ASCII stands for American Standard Code for Information Interchange. We used just the basic 95 printable characters.]. A typical example of the company summary is copied below for “Apple Inc.”, including the products and services which were annotated[The process of annotation, within the context of NLP, is to assign a type or entity class to a word or small sequence of words, e.g., to assign the entity of place, date, currency to words within a text. Note that we use the same entity – PRODUCT – for both product and services, for simplicity.] by hand by us, and we have marked those by bold font for clarity[For an example of another annotated corpus that can be used for product extraction see e.g.,<cit.>.].
Apple Inc. is an American multinational technology company headquartered in Cupertino, California. Apple is the world's largest technology company by revenue, with US$394.3 billion in 2022 revenue. As of March 2023, Apple is the world's biggest company by market capitalization. As of June 2022, Apple is the fourth-largest personal computer vendor by unit sales and the second-largest mobile phone manufacturer in the world. It is one of the Big Five American information technology companies, alongside Alphabet, Amazon, Meta Platforms, and Microsoft. Apple was founded as Apple Computer Company on April 1, 1976, by Steve Wozniak, Steve Jobs and Ronald Wayne to develop and sell Wozniak's Apple I personal computer. It was incorporated by Jobs and Wozniak as Apple Computer, Inc. in 1977. The company's second computer, the Apple II, became a best seller and one of the first mass-produced microcomputers. Apple went public in 1980 to instant financial success. The company developed computers featuring innovative graphical user interfaces, including the 1984 original Macintosh, announced that year in a critically acclaimed advertisement called 1984. By 1985, the high cost of its products, and power struggles between executives, caused problems. Wozniak stepped back from Apple and pursued other ventures, while Jobs resigned and founded NeXT, taking some Apple employees with him. As the market for personal computers expanded and evolved throughout the 1990s, Apple lost considerable market share to the lower-priced duopoly of the Microsoft Windows operating system on Intel-powered PC clones (also known as Wintel). In 1997, weeks away from bankruptcy, the company bought NeXT to resolve Apple's unsuccessful operating system strategy and entice Jobs back to the company. Over the next decade, Jobs guided Apple back to profitability through a number of tactics including introducing the iMac, iPod, iPhone and iPhone to critical acclaim, launching the Think different campaign and other memorable advertising campaigns, opening the Apple Store retail chain, and acquiring numerous companies to broaden the company's product portfolio. When Jobs resigned in 2011 for health reasons, and died two months later, he was succeeded as CEO by Tim Cook. Apple became the first publicly traded U.S. company to be valued at over $1 trillion in August 2018, then at $2 trillion in August 2020, and at $3 trillion in January 2022. As of April 2023, it was valued at around $2.6 trillion. The company receives criticism regarding the labor practices of its contractors, its environmental practices, and its business ethics, including anti-competitive practices and materials sourcing. Nevertheless, the company has a large following and enjoys a high level of brand loyalty. It has also been consistently ranked as one of the world's most valuable brands.[From https://en.wikipedia.org/wiki/Apple_Inc.Apple Wikipedia page, extracted and cached on 7 June 2023. Any spelling or grammar errors were not corrected, neither any attempt was made to make the English (American versus British spelling) consistent, i.e., the texts were used as sourced from Wikipedia.]
The annotation process is, obviously, quite subjective, it depends on what one considers to be products or services[There are some publicly available database of product/services such as the Nice Agreement lists <cit.>
and the United Nations Standard Products and Services Code (UNSPSC) <cit.>, but given the small amount of examples to annotate, we decided to just do it by hand.]. However, we tried to be as consistent as possibly within the data set, assigning both singular and plural versions of words, such as the above “computer” and “computers” entities.
For the base standard NER cases, we have annotated the company summaries from Wikipedia and used spaCy models[We used python spaCy version 3.5.3, the latest as of 7 June 2023, and we have noted and verified that although different version of spaCy and its trained models would give slightly different results, the qualitative conclusions would remain the same. We have used as a template for our spaCy python code some of the approaches in <cit.>.] with 100 learning steps[And mini batch size = 1.], no drop-out[Drop-out is used in machine learning to avoid the neural network over-fitting the training data, and is implemented by randomly omitting certain neurons at each learning step.] and the neural network optimizer algorithm being the Adam Optimizer <cit.> as it is the default setting in spaCy. We have used the empty English model 'en', and the built-in spaCy 'sm' (small), 'md' (medium) and 'lg' (large) models. These three latter models we have implemented with transfer learning <cit.>, that is, we have taken already trained standard models and fine-tuned with extra examples, which is called transfer learning <cit.>. For the GPT model, we used openAI's GPT 3.5 (gpt-3.5-turbo model as of 7 June 2023), which has been trained on web data with a cut-off of September 2021.
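For reference, a condensed sketch of the spaCy v3 training loop just described is given below (the example sentence and annotation spans are illustrative and not taken from our annotated data set):

import spacy
from spacy.training import Example

# Hand-annotated training data: (text, character spans labelled PRODUCT); spans here are illustrative.
TRAIN_DATA = [
    ("Apple sells the iPhone and the iMac.",
     {"entities": [(16, 22, "PRODUCT"), (31, 35, "PRODUCT")]}),
]

nlp = spacy.blank("en")        # or spacy.load("en_core_web_lg") etc. for transfer learning
ner = nlp.add_pipe("ner")      # with a pretrained pipeline, use nlp.get_pipe("ner") instead
ner.add_label("PRODUCT")

optimizer = nlp.initialize()   # with a pretrained pipeline, use nlp.resume_training() instead
for step in range(100):        # 100 learning steps, mini-batch size 1, no dropout
    losses = {}
    for text, annotations in TRAIN_DATA:
        example = Example.from_dict(nlp.make_doc(text), annotations)
        nlp.update([example], sgd=optimizer, drop=0.0, losses=losses)

doc = nlp("Garmin specializes in GPS technology.")
print([(ent.text, ent.label_) for ent in doc.ents])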
Named entity recognition with GPT does not work the same way as in spaCy. The former is a large language model (LLM), that is, a very large neural network trained on an extremely large set of unstructured texts in an unsupervised way and, in simplified terms, able to predict the next word(s) in a conversation from a prompt or template. The latter uses supervised learning with a training set that has been labelled or annotated; that is, NER in spaCy is tuned using texts where one tells the neural network what the types or entities are and which words or short sequences of words carry those types or entities. Therefore, to further “train” GPT 3.5 for our NER product extraction task, one has to introduce a text format template, so that GPT can recognize that there is a specific format to the input training cases and a specific output format too[We adapted our GPT text input/output template from the template in the online article published in <cit.>.]. The format of the template can be anything one wants, as long as it is very clear where the examples are – so there has to be a clear separation, and a clear fixed format for the product/service list. Below we show an example of this GPT template, using “Apple Inc.” as the sole training example:
Entity Definition:
1. PRODUCT: Short name or full name of product or services sold.
Output Format:
{{'PRODUCT': [list of entities present]}}
If no entities are presented in any categories keep it None
Examples:
1. Sentence: Apple Inc. is an American multinational technology company headquartered in Cupertino, California. Apple is the world's largest technology company by revenue, with US$394.3 billion in 2022 revenue. As of March…
Output: {{'PRODUCT': ['mobile phone', 'personal computer', 'computer', 'microcomputers', 'Macintosh', 'iMac', 'iPod', 'iPhone', 'iPad']}}
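As a concrete illustration, a minimal sketch of how such a template can be wrapped around a query company and sent to the chat completion API is shown below (the function name and parameter choices are ours; the enumeration index of the query follows the number of in-context examples, here a single one):

import openai

openai.api_key = "sk-..."  # your own API key

def extract_products(template: str, company_summary: str) -> str:
    """Append the query company to the few-shot template and ask GPT for its PRODUCT list."""
    prompt = template + f"\n2. Sentence: {company_summary}\nOutput: "
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # reduces, but does not eliminate, the run-to-run randomness
    )
    return response["choices"][0]["message"]["content"]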
For each example added to the template, we increased the enumeration, from 1 onwards. For inference or prediction rather than training, we just use an empty product set. We note that the GPT model available in openAI always has some randomness in it, and one cannot therefore obtain the same exact output with two identical inputs[See e.g., the description of this non-determinism behaviour in <https://platform.openai.com/docs/guides/gpt/faq>.]. Nonetheless, we have verified that the conclusions derived from the GPT models were robust to this kind of built-in randomness[As described in openAI's chat completion API in <https://platform.openai.com/docs/api-reference/completions>.]. In order to be able to assess the performance of GPT versus the base NER models, we used a confusion or error matrix and the F-score <cit.>, as it is standard in the industry. We first calculated the relevant elements of the confusion matrix, the number of true positives (TP), i.e., the number of predicted product/services that we had in advance classified by hand as real product/services, the number of false positives (FP), i.e., the number of predicted product/services that we had in advance classified as not product/services and the number of false negatives (FN), i.e., the number of in advance classified as product/services that the predicted set did not contain. The precision and recall, and the traditional (or balanced) F-score (which is defined as the harmonic mean of the precision and the recall) can then be formulated <cit.> as:
Precision = TP / (TP + FP),
Recall = TP / (TP + FN),
F-score = 2 · Precision · Recall / (Precision + Recall) = 2 · TP / (2 · TP + FP + FN),
where the precision measures the portion of relevant product/services within the predicted set, and recall the portion of relevant product/services within the benchmark annotated set that were actually predicted. The F-score then measures the overall performance of the predictors. We notice that we do not consider, for the counting of TP, FP and FN, any case where the entity for the word is “O”, that is, in NER speak, a non-entity. This is because, as it is rare to have a product/service word in the text, if we counted a prediction of non-entity, that has been annotated as a non-entity, as a true positive, then the performance would be artificially high – one would need just to predict that everything is a non-entity, and that there was no product/service in the text, to get a high score, which is not what we are aiming for, obviously.
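For completeness, a minimal set-based sketch of this scoring (ours; the exact matching convention behind the reported numbers, e.g., how singular/plural variants are counted, may differ slightly) is:

def f_score(predicted: set, annotated: set) -> float:
    """Balanced F-score between predicted and hand-annotated product/service sets."""
    tp = len(predicted & annotated)   # true positives
    fp = len(predicted - annotated)   # false positives
    fn = len(annotated - predicted)   # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f_score({"lithium", "iodine"}, {"lithium", "iodine", "chemical"}))  # precision 1.0, recall 2/3, F-score 0.8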
The data set, as is standard for machine learning models, is divided randomly into training and test sets, across all combinations. The training set for the spaCy models consists of hand-annotated texts, while for the GPT model it consists of the template with the text plus a clearly demarcated set of product/services.
We have focused on a very small set of companies[There are, worldwide, an estimated 328+ million companies (as of 2021) according to <cit.>. Of those, an estimated 58200 are public companies (as of Q1 2022) according to <cit.>. Of those listed companies, not all have an explicit company page in Wikipedia. By using the company name and the Wikipedia page ID, we have identified at least 3890 public companies with a Wikipedia page, for which we extracted the summary, to be used in the out-of-sample testing and in the building of example peer groups for our annotated set of 13 companies.] mainly for four reasons. First, and foremost, GPT from openAI is not a free product; it is a commercial product with a very limited free trial[Limited in time, number of runs and number of words/tokens.]. Second, we wanted to try to perform few-shot learning for named entity recognition <cit.>, as annotation is expensive and time consuming. Third, GPT from openAI, in its current configuration, only allows a maximum of 4096 (gpt-3.5) and 16384 (gpt-3.5-turbo-16k) tokens[The operation of tokenization transforms a sentence or text into a set or list of tokens, demarcations of groups of characters, usually words.] or words for each run, and this seriously impairs the ability to run a larger number of training examples. Fourth, and finally, the results, as we shall see later, were already quite good with a few training examples. So, we first trained/tested on 13 fully annotated examples, and then used a much larger list of around 3890 publicly listed companies for which we could find a Wikipedia Page ID and therefore a Wikipedia page with a summary text[We note that many of these companies' Wikipedia pages have extra structured information, such as the industry classification, the main exchange ticker, and even the product/services themselves. However, not all have them, and the format can vary a lot, making it very difficult to extract these data elements uniformly. The summary, by contrast, is present in almost all companies' Wikipedia websites.].
The first result we have noticed is that even with no examples (zero examples in the template, so a case of the so-called zero-shot learning <cit.>), GPT was able, straight out of the “box”, to predict quite a few product/services. As an example, for the company “Sociedad Química y Minera de Chile S.A”, we can see below that it predicts quite well (the “set product benchmark” is our own annotation, while “predicted set product from GPT” are the results from GPT 3.5).
Sociedad Quimica y Minera de Chile (SQM) is a Chilean chemical company and a supplier of plant nutrients, iodine, lithium and industrial chemicals. It is the world's biggest lithium producer.SQM's natural resources and its main production facilities are located in the Atacama Desert in Tarapaca and Antofagasta regions.
set product benchmark ['iodine', 'plant nutrients', 'chemical', 'industrial chemicals', 'lithium']
predicted set product from GPT {{'PRODUCT': ['plant nutrients', 'iodine', 'lithium', 'industrial chemicals']}}
f_score=0.88
The second result was that GPT does not seem to confuse the product/service that the company provides with the market that the product/service is meant for. For example, for the company “Garmin Ltd”, on a GPT run with 3 training examples, it extracted the following words as product/service:
…Schaffhausen, Switzerland.The company specializes in GPS technology for automotive, aviation, marine, outdoor, and sport activities. Due to their development …
It marked “GPS technology” as the correct annotated product/service, and correctly ignored the words
“automotive”, “aviation”, “marine”, “outdoor”, and “sport”. We suspect that it understands (via its statistical probability model) that the word “for” indicates the “market of”, rather than the product/service itself. This was quite impressive compared to the base spaCy model, which can return e.g., “aviation”, “activity tracker”, “marine”, “GPS technology”, “automotive”, “smartwatch consumer”, thus confusing product/services with their markets.
A further result that surprised us was that GPT was able to disambiguate text such as “…property and asset management …” (for the company “Colliers”) into two product/services: “property management” and “asset management”. This would be non-trivial even for a human annotator, and GPT was able to do it in many, although not all, runs.
Our main result is the direct comparison of the performance of the base spaCy models, representing a standard form of doing NER, with the GPT LLM models. We have run the empty English model 'en', and the built-in spaCy 'sm' (small), 'md' (medium) and 'lg' (large) models, against openAI's GPT (gpt-3.5-turbo) for the number of training examples from 0 to 9. The results of the F-score, our standard measure of performance, are depicted below in Figure <ref>.
The results show that the GPT model clearly outperforms the standard spaCy models, by quite a large margin, consistently, across all the parameter space depicted, i.e., the number of examples within the training set. The best averaged result was around F-score=85% for the GPT model, which indicates that the GPT model can probably already be used for real world cases, that is, to obtain comparable companies peer groups in equity valuation. We have stress tested both the GPT and the spaCy models extensively, and concluded that both models results were quite robust, with the value of the F-score barely changing for different parameter model changes[For the GPT model we have stress tested using lower case for all text and product/services, changing the random seed, changing the “temperature” parameter which controls the randomness of GPT output, fine tuning the parameters “max_tokens”, “top_p”, “presence_penalty” and “frequency_penalty” <cit.>. We also tested adding empty texts with empty product/services sets to avoid the so-called LLM hallucination problem <cit.> but saw no clear improvement.
For the spaCy models we have also stress tested several parameters, such as using lower case for all text and product/services, the number of iterations of the learning algorithm, the “dropout” rate, which allows for some randomness and generalization, the optimization algorithm itself, and the parameters of the Adam algorithm such as
“learn_rate”, “beta1”, “beta2”, “eps”, “L2”, “L2_is_weight_decay”, “grad_clip” <cit.>.].
In order to be able to use the results for constructing companies' comparable or peer groups, one more step was needed: to use a much larger set of out-of-sample data[Out-of-sample data is data the machine learning models have not seen and that is not annotated, so on that data we can only use the trained model to predict what the product/services should be.] and find which companies are closest, in the sense of having the highest number of shared product/services. The data set is described below in table <ref>.
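The matching step itself is simple; a sketch of ranking candidate peers by the number of shared extracted product/service entities (the candidate product sets below are hypothetical) is:

def rank_peers(target_products: set, candidates: dict, top_k: int = 8):
    """Rank candidate companies by the number of shared product/service entities."""
    scores = {name: len(target_products & set(products))
              for name, products in candidates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

candidates = {
    "CompanyA": ["oil", "natural gas", "petrochemicals"],
    "CompanyB": ["satellite communications", "voice", "data"],
}
print(rank_peers({"oil", "natural gas"}, candidates))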
In order to obtain comparable companies for the companies in our annotated set, we first trained GPT with all 13 companies in the training template[We note that for gpt-3.5-turbo the maximum number of tokens is exceeded when using all 13 companies in the annotated set. Therefore, we had no choice but to use gpt-3.5-turbo-16k, which is even more expensive than gpt-3.5-turbo but uses the same LLM and allows four times the number of tokens.]. We decided, given the cost of GPT and the results we had already obtained, i.e., that most extra parameters do not seem to improve the performance, to just use GPT (gpt-3.5-turbo-16k) with its default settings. Below we show a couple of examples of what the results can be.
On table <ref> we can see that the results seem, qualitatively, quite good. All companies identified as within the peer group are energy companies, with similar product/services.
In table <ref> we can see what happens when a company with a more niche market, such as Iridium, which trades in satellite communications, is used as the target company for which to find a peer group. The number of word matches is much lower (mostly related to the word/token “satellite”) and therefore the quality decreases. Nonetheless, it is still qualitatively a reasonable peer group.
Therefore, the results seem to show that this method could be used for creating company's comparable peer groups, and that the results seem reasonably good.
We have used Wikipedia data as we wanted to use exclusively publicly available web-based data. However, if one could use the data sets from commercial data providers such as S&P, Bloomberg, FactSet, Reuters or others <cit.>, it would allow building a company's peer group automatically for all public (and private) companies. This would industrialize one of the last elements of the company valuation pipeline still to be automated. We have used 3890 companies, which is what we could find manually on Wikipedia, but we note that commercial data providers have clean, curated and structured data for all 58200+ publicly listed companies and a large number of private companies as well. We also emphasise that doing the data analysis on all public companies and implementing a full peer group selection using GPT would imply a considerable monetary cost, likely only affordable to a commercial fintech or investment banking house, as the usage would easily exceed what can be done with research-based GPT free trials and research budgets.
Finally, we note that in the future we plan to use a company valuation model from, e.g., a software house specializing in valuation, to be able to verify that the product/services obtained from GPT text-based NER can be used to create better company peer groups with more accurate valuations (e.g., as compared with the valuation derived from the average stock market price). However, given the number of companies worldwide, any kind of large-scale usage of our approach would require some investment, as GPT from openAI is currently not free and is therefore expensive for anything larger than small research projects.
§ CONCLUSION
Using companies' descriptions/summaries from publicly available Wikipedia data, we have shown quantitatively that using large language models (LLMs) such as GPT results in a much higher performance and success rate than standard named entity recognition (NER) based on manual annotation and systems such as spaCy. We have shown this in the specific case of product/services entity extraction. Furthermore, we have shown, qualitatively, that this entity extraction by GPT models can be used to create companies' peer groups that look reasonable and consistent. This suggests that such LLM models could, in the future, help automate companies' peer group construction, and therefore the full automation of the company valuation pipeline.
§ DECLARATIONS
* Funding: This research was fully self-funded by the author.
* Conflict of interest/Competing interests: None
* Availability of data and materials: All data sets used were from publicly available websites on the Internet (including Wikipedia) and their sources are referenced/cited in the text.
|
http://arxiv.org/abs/2307.04721v1 | 20230710173213 | Large Language Models as General Pattern Machines | [
"Suvir Mirchandani",
"Fei Xia",
"Pete Florence",
"Brian Ichter",
"Danny Driess",
"Montserrat Gonzalez Arenas",
"Kanishka Rao",
"Dorsa Sadigh",
"Andy Zeng"
] | cs.AI | [
"cs.AI",
"cs.CL",
"cs.RO"
] |
Suvir Mirchandani1,
Fei Xia2,
Pete Florence2,
Brian Ichter2,
Danny Driess2 3,
Montserrat Gonzalez Arenas2,
Kanishka Rao2,
Dorsa Sadigh1 2,
Andy Zeng2
1Stanford University,
2Google DeepMind,
3TU Berlin
<https://general-pattern-machines.github.io>
We observe that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences – from arbitrary ones procedurally generated by probabilistic context-free grammars (PCFG), to more rich spatial patterns found in the Abstract Reasoning Corpus (ARC), a general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary. These results suggest that without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning. In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics – from extrapolating sequences of numbers that represent states over time to complete simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (a stabilizing controller for CartPole). While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.
§ INTRODUCTION
Large language models (LLMs) are trained to absorb the myriad of patterns that are woven into the structure of language. They not only exhibit various out-of-the-box capabilities such as generating chains of reasoning <cit.>, solving logic problems <cit.>, and completing math puzzles <cit.>, but also have been applied in robotics where they can serve as high-level planners for instruction following tasks <cit.>, synthesize programs representing robot policies <cit.>, design reward functions <cit.>, and generalize user preferences <cit.>. These settings rely on the few-shot in-context examples in text prompts that specify the domain and input-output format for their tasks <cit.>, and remain highly semantic in their inputs and outputs.
Figure: LLMs out-of-the-box can complete (highlighted) complex ARC patterns <cit.> expressed in arbitrary tokens.
A key observation of our work – and perhaps contrary to the predominant intuition – is that an LLM's ability to represent, manipulate, and extrapolate more abstract, nonlinguistic patterns may allow them to serve as basic versions of general pattern machines. To illustrate this idea, consider the Abstract Reasoning Corpus <cit.>, a general AI benchmark that contains collections of 2D grids with patterns that evoke abstract concepts (infilling, counting, and rotating shapes). Each problem provides a small number of input-output examples, followed by test input(s) for which the objective is to predict the corresponding output. Most methods (based on program synthesis) are manually engineered with domain-specific languages <cit.> or evaluated on simplified extensions or subsets of the benchmark <cit.>. End-to-end machine learning methods only solve a handful of test problems <cit.>; however, our experiments indicate that LLMs in-context prompted in the style of ASCII art (see <ref>) can correctly predict solutions for up to 85 (out of 800) problems
– exceeding some of the best performing methods to date <cit.>, without additional model training or fine-tuning. Surprisingly, we find this extends beyond ASCII numbers, and that when they are replaced with a mapping to randomly sampled tokens in the vocabulary, LLMs can still generate valid solutions. These results suggest an intriguing insight: that LLMs may exhibit more general capabilities of representing and extrapolating symbolic patterns, invariant to the specific tokens involved.
This is in-line with – and complementary to – recent observations that using random or abstract label mappings for in-context classification retains some performance compared to ground-truth labels <cit.>.
We hypothesize that the capabilities that drive pattern reasoning on the ARC may allow general pattern manipulation at various levels of abstraction useful for robotics and sequential decision making <cit.>, wherein a diverse array of problems involve patterns that may be difficult to reason about precisely in words. For example, a procedure for spatially rearranging tabletop objects could be represented using arbitrary tokens (see <ref>). As another example, optimizing a trajectory with respect to a reward function can be framed as extrapolating a sequence consisting of state and action tokens with increasing returns.
Orthogonal and complementary to efforts that develop multi-task policies by pre-training on large amounts of robot data <cit.>, or robotics foundation models <cit.> that can be fine-tuned for downstream tasks <cit.>, our goal is instead to (i) assess the zero-shot capabilities that LLMs may already contain to perform some degree of general pattern manipulation, and (ii) investigate how these abilities can be used in robotics. These capabilities are certainly not sufficient to replace specialized algorithms; nonetheless, they are useful to characterize, and doing so may help inform priorities for training generalist models in robotics.
We assess LLMs as pattern machines categorized into three areas: sequence transformation, sequence completion, and sequence improvement (see <ref>). First, we show that LLMs are capable of generalizing certain sequence transformations of increasing complexity with a degree of token invariance, and posit that this can carry over to spatial reasoning capabilities in robotic tasks. Next, we assess LLMs' ability to complete patterns from simple functions (sinusoids) and show this can be applied to robotic tasks like extending a wiping motion from kinesthetic demonstrations, or drawing patterns on a whiteboard. The combination of in-context sequence transformation and extrapolation further enables LLMs to do basic forms of sequence improvement. We show that providing reward-labeled trajectories as context, coupled with online interaction, can enable an LLM-based agent to learn to navigate through a small grid, discover a stabilizing CartPole controller, and optimize simple trajectories via human-in-the-loop “clicker” reward training. Code, benchmarks, and videos will be made available at <https://general-pattern-machines.github.io>.
§ RELATED WORK
Pattern reasoning by prompting pre-trained LLMs with few-shot input-output examples is driven by in-context learning <cit.>. The examples serve as a form of task specification, where the model is expected to complete further instances of the task by simply predicting what comes next. In-context learning extends the concept of “task prefixes” (predefined task-specific token sequences <cit.>), but swapped in with actual task examples instead. <cit.> observes that it improves (in particular, out-of-distribution generalization) from scaling model size. This is in contrast to scaling models for pre-training + fine-tuning, which has been shown to not necessarily improve OOD generalization on language tasks <cit.>. Nonetheless, despite compelling OOD generalization abilities, in-context learning still comes at a cost, as it continues to lag behind in terms of absolute performance on benchmarks compared to task-specific fine-tuning <cit.>.
In-context learning is explicitly trained for by packing examples from the same task and dataset into the same context buffer that is fed as input to an LLM with an unsupervised autoregressive objective <cit.>, sometimes referred to as meta-training. However, it can also emerge implicitly from training on unsupervised datasets where tokens exhibit a Zipfian distribution <cit.> on Transformer architectures, but not necessarily with recurrent architectures (vanilla RNNs or LSTMs) <cit.>. Other works have shown that in-context learning with Transformers can learn simple function classes on par with least squares <cit.>, and can generalize to a seemingly unbounded number of tasks (when trained on tasks from the same task family) better than multitask MLPs <cit.>, with Bayesian interpretations of this phenomenon <cit.> <cit.>.
In-context learning occurs during inference without gradient updates to the weights of the model, and can be differentiated from in-weights learning, which relies on information stored in the weights of the model during LLM training <cit.> (and can be useful for completion tasks such as “Abraham Lincoln was born in ”). <cit.> observes that generalization of in-context learning can be characterized as more “exemplar-based” (on the basis of similarity to in-context examples <cit.>), as opposed to generalization of in-weights learning which tends to be more “rule-based” (on the basis of minimal features that support category boundaries in the training data <cit.>). The vast capabilities of LLMs <cit.> have been driven by a combination of both forms of learning. In this work, we are particularly interested in in-context learning, and (depending on the task) using the semantic priors of numeric tokens (“0” to “100”) to drive new capabilities such as in-context sequence completion (<ref>) and improvement (<ref>).
LLMs have been applied across a number of areas in robotics – most recently in decomposing high-level task domain descriptions in natural language to mid-level step-by-step plans <cit.>, robot code <cit.>, and planning domain definition languages <cit.>. These methods leverage the semantic priors stored in LLMs to compose new plans or parameterize primitive APIs, but whether LLMs can directly influence control (at the level of trajectories) in a zero-shot manner remains an open problem. As a reaction to this, we investigate how the pattern reasoning capabilities of LLMs may drive various control tasks, to extend or optimize low-level action sequences. While it is possible to explicitly train models for these capabilities <cit.>, this work instead focuses on the inherent abilities of LLMs out-of-the-box, which may have downstream implications for the role of language pre-training for building generalist embodied AI systems. Our findings may also benefit domains where data collection is expensive or difficult to scale. Closely related to our work is <cit.>, which uses an LLM to represent a rollout-policy and world-model in-context, and then uses model-based Q-learning to drive policy improvement across a collection of toy environments with linguistic representations. Our use of LLMs for sequence improvement can be seen as a simplification of in-context policy iteration that supports both learning from demonstrations and in-context RL, driven by the generality of LLMs as pattern machines.
§ LANGUAGE MODELS AS GENERAL PATTERN MACHINES
The capacity of LLMs to act as general pattern machines is driven by their ability to perform in-context learning on sequences of numeric or arbitrary tokens.
An LLM typically represents sequence modeling autoregressively, with a decoder-only Transformer <cit.>, by factorizing the probability of a sequence x, which is a sequence of symbols (s_1, …, s_n), into the product of conditional probabilities p(x) = ∏_i=1^n p(s_i | s_1, ..., s_i-1).
To perform in-context learning, the model can be conditioned with a prompt that provides the initial tokens in the sequence s_1:k = (s_1, …, s_k) and uses the model to complete s_k+1:n.
The adaptability of in-context learning lies in the amount of flexibility that can be packed into s_1:k – this prompt sequence can itself contain many sequences, each an input-output pair, and perhaps additional task conditioning <cit.>. Specifically, a model can in-context learn to complete a prompt which is a set of N examples s_1:k =(x^1, x^2, …, x^N) where each x^i is a variable-length sequence (s^i_1, s^i_2, ..., s^i_m^i).
Rather than investigating in-context learning with natural language tasks <cit.>, in this work we are interested in investigating more abstract notions of non-linguistic patterns. The following sections evaluate these capabilities across LLMs, and show how they can be used in robotics. By varying the notion of what each x^i should be, we can characterize in-context pattern learning capabilities into the following 3 categories.
* Sequence Transformation (<ref>): each x^1, …, x^N-1 is a sequence-to-sequence input-output pair; x^i = (x^i_input, x^i_output), each subsequence of variable length,
and x^N is the query input (x^N_input).
* Sequence Completion (<ref>): rather than containing input-output pairs, and rather than containing many examples of different sequences, the prompt x = (s_1, ..., s_k) corresponds to discrete samples from a single function, of the form s_i = a ·sin(bi), which can be extrapolated.
* Sequence Improvement (<ref>): each x^1, …, x^N-1 is a collection of trajectories (potentially labeled with corresponding total rewards), and x^N prompts the model to “improve” the sequences by inferring a better one, with least-to-most prompting <cit.> –
this process can be iterative and applied to a variety of formulations, e.g., offline trajectory optimization or online in-context reinforcement learning.
§ SEQUENCE TRANSFORMATION
LLMs are capable of in-context learning the distribution of functions that represent sequence transformations by completing abstract patterns observed among examples of input-output sequences x^i = (x^i_input, x^i_output) of arbitrary tokens, each drawn from a fixed alphabet 𝒜. For example, suppose that we are given a string of input-output examples such as “ 5 3 0, 3 5; 7 6 1, 6 7; 9 2 3, 2 9; 4 8 5,”. Here 𝒜 consists of tokens that represent space-prefixed digits 0–9, a comma token to separate inputs from outputs, and a semi-colon token to delineate examples from each other. A general pattern machine should infer the completion “ 8 4” by recognizing that the pattern is to swap the first 2 tokens, then remove the 3rd.
Method Total (of 800)
(d3) text-davinci-003 85
(d3) w/ random 𝒜 ^†44±6
(d2) text-davinci-002 <cit.> 64
(p) PaLM <cit.> 42
(d1) text-davinci-001 <cit.> 11
(d1) finetuned 9
Ainooson et al., 2023 <cit.> ^**130
Kaggle 1st Place, 2022 ^*64
Xu et al., 2022 <cit.> ^*57
Alford et al., 2021 <cit.> 35
Ferré et al., 2021 <cit.> 32
^*Reported from <cit.> out of 160 object-oriented problems.
^†Numbers averaged across 5 randomly sampled alphabets.
^**Based on brute force search over a rich hand-designed DSL.
LLMs out-of-the-box can solve a non-trivial number of problems on the ARC, competitive with the best existing methods using handcrafted domain-specific languages <cit.>.
We use the ARC benchmark <cit.> to evaluate LLMs on such sequence transformations, whereby token patterns are substantially more complex, covering a wide range of abstract spatial tasks: infilling, counting, translating and rotating shapes, etc. Each task comes with several input-output examples (3.3 on average), and 1-3 test inputs which can be represented as 2D grids. Sizes between inputs and outputs may differ and are not provided beforehand, thereby adding to the difficulty of applying standard machine learning algorithms, which typically assume fixed size. Autoregressive LLMs can be used for the ARC by flattening the grids and predicting each new output grid item in row-major order, which naturally supports variable length outputs. While LLMs are not originally trained for rasterizing spatial outputs in this way, we hypothesize that a general pattern machine would be capable of implicitly recognizing the long-range dependencies between rows (using positional encoding as a bias <cit.>) to pick up patterns that extend across the 2nd dimension.
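One plausible way to serialize such a task into a flat prompt (delimiter and formatting choices here are ours, in the style of the digit example above) is:

def grid_to_tokens(grid):
    """Serialize a 2D grid in row-major order, one space-separated token per cell."""
    return " ".join(" ".join(str(cell) for cell in row) for row in grid)

def task_prompt(train_pairs, test_input):
    """Pack input-output example grids and a query input into a single flat prompt."""
    parts = [f"{grid_to_tokens(x)}, {grid_to_tokens(y)}" for x, y in train_pairs]
    parts.append(f"{grid_to_tokens(test_input)},")
    return "; ".join(parts)

# Toy example: the transformation mirrors each row.
train = [([[8, 6], [7, 9]], [[6, 8], [9, 7]])]
print(task_prompt(train, [[2, 3], [4, 5]]))   # -> "8 6 7 9, 6 8 9 7; 2 3 4 5,"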
Result: ARC benchmark. Our experiments in <ref> show that LLMs (PaLM, InstructGPT series in acronyms d1 - d3) prompted with input grids represented as tokens drawn from an alphabet of digits, can correctly infer solutions for up to 85 problems. Surprisingly, this outperforms a number of recent systems <cit.> based on program synthesis that use manually engineered domain-specific languages (DSLs). While LLMs have yet to surpass brute-force search <cit.> to compose functions from a handcrafted API of grid operators, LLMs are perhaps the best performing generalist method that exists today. (We address the important caveat that parts of the ARC may be present in the training data of LLMs later in this section.)
Observation: consistent tokenization matters. The ARC can be found among the suite of tasks in BIG-Bench <cit.>, but has often been overlooked since many language models appear to perform poorly (near or at zero performance). We observe this occurs due to the formatting of the benchmark, where grid elements are represented as neighboring characters in a string “8686” (instead of “ 8 6 8 6”). While subtle, this difference is enough for certain Byte-Pair Encoding (or SentencePiece) tokenizers <cit.> (that do not tokenize per digit) to group together multiple grid elements (“8” and “6”) into a single token (“86”) which maps to a different token embedding altogether in the vocabulary. This causes inconsistencies with how the patterns are expressed at the token level. For example, given a task expressed in a string “8686, 6868; 7979,” if the LLM tokenizer groups together pairs of digits 86, 68, 79, respectively, then the sequential inductive patterns of the task (to swap and repeat individual digits) is lost. A simple work-around is to directly pass token indices or embeddings to the language model, or use token alphabets unlikely to be grouped by the tokenizer. This work-around generalizes to other pattern manipulation tasks beyond the ARC; in general, it is important to tokenize in a manner that is consistent with the pattern being represented.
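To check how a particular serialization will be split, the tokenizer can be inspected directly; a small sketch (assuming the tiktoken package, and noting that the exact splits depend on the model's tokenizer) is:

import tiktoken

enc = tiktoken.encoding_for_model("text-davinci-003")
for s in ["8686", " 8 6 8 6"]:
    ids = enc.encode(s)
    print(repr(s), "->", [enc.decode([i]) for i in ids])
# "8686" may be split into multi-digit chunks such as "86", "86",
# whereas " 8 6 8 6" yields one token per grid element.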
Observation: token mapping invariance. The hypothesis that LLMs can serve as general pattern machines stems from the observation that they can surprisingly still solve a non-trivial number of ARC problems using alphabets 𝒜 sampled randomly from the LLM's token vocabulary. For instance, given a particular alphabet:
{
8 ↦ falls,
6 ↦ +#,
7 ↦ Ul,
9 ↦ Chev,
3 ↦ ,
2 ↦ 2010},
a pattern machine at sufficient proficiency can be expected to complete the prompt
“falls +# falls +#, +# falls +# falls; Ul Chev Ul Chev, Chev Ul Chev Ul; 2010 2010,”
by predicting
“ 2010 2010 ”.
For example, text-davinci-003 <cit.> with the following mapping
𝒜={
0 ↦ offence,
1 ↦ Subject,
2 ↦ Lub,
3 ↦ Fail,
4 ↦ Chev,
5 ↦ symb,
6 ↦ swung,
7 ↦ Ul,
8 ↦ escalate,
9 ↦ Chromebook}
solves 52 ARC problems, and across 5 different random alphabets solves an average of 43.6 problems. Interestingly, we find that token mapping invariance holds to an extent on simple pattern transformations for randomly sampled embeddings as well (i.e., such that embeddings are not associated with any token in the vocabulary; see Appendix).
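A sketch of how such a random alphabet can be applied to a digit-based prompt before querying the model (the candidate vocabulary below is illustrative) is:

import random

def remap_prompt(prompt_tokens, vocabulary, seed=0):
    """Map each digit to a distinct, randomly sampled token from the model vocabulary."""
    rng = random.Random(seed)
    alphabet = dict(zip("0123456789", rng.sample(vocabulary, 10)))
    return [alphabet.get(tok, tok) for tok in prompt_tokens], alphabet

vocab_sample = ["falls", "+#", "Ul", "Chev", "2010", "offence", "Subject", "Lub", "Fail", "symb"]
tokens, alphabet = remap_prompt(["8", "6", "8", "6", ",", "6", "8", "6", "8", ";"], vocab_sample)
print(" ".join(tokens))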
The implications of token mapping invariance are two-fold.
First, note that it is possible that parts of the ARC (and other static examples of pattern transformations) are present in the training data of an LLM (due to contamination).
Therefore, measuring the performance of LLMs under random alphabets may provide a closer estimate of their true underlying in-context sequence transformation capabilities. (As additional evidence that LLMs' sequence transformation ability is not simply due to memorization, we also provide a new procedurally-generated pattern transformation benchmark which we describe below.)
Figure: Example LLM prediction as an in-context grasp detector (top) and a simple forward dynamics model (bottom).
Second, we hypothesize that the pattern manipulation capabilities which token invariance implies could help to drive positive transfer from patterns learned across Internet-scale language data to new modalities or symbolic representations for robot reasoning. As an example of this idea, (i) <ref> (top) shows a grasp (Skittles) detector which outputs target coordinates within a downsampled image (with 6 in-context examples), and (ii) <ref> (bottom) shows spatial rearrangement via predicting simple forward dynamics where the red bowl moves to the green plate (with 9 in-context examples of downsampled images as inputs and outputs).
The generality of what the arbitrary tokens could represent may allow pattern transformation capabilities – especially as LLMs improve – to be leveraged at various levels of abstraction in robotics (including at the level of pixels or robot joint positions). Incorporating more semantic priors into representations may also boost performance and enable further LLM-driven reasoning (reducing visual data into more semantic spatial representations). It may also be possible to search for the “optimal” token alphabet for a particular setting with gradient-free optimization, but we leave this to future work.
Method Accuracy (%)
(d3) text-davinci-003 75
(d3) w/ random 𝒜 ^†58 ± 1
(p) PaLM <cit.> 74
(d2) text-davinci-002 <cit.> 69
(d1) text-davinci-001 <cit.> 60
(c1) text-curie-001 54
(b1) text-babbage-001 50
(a1) text-ada-001 39
^†Numbers averaged across 5 randomly sampled alphabets.
LLMs of varying sizes are capable of completing patterns procedurally generated with PCFG, averaged over a range of k and w.
Result: PCFG benchmark. The ARC is a difficult benchmark, and the performance falloff can be steep (and relatively uninformative) across LLMs with decreasing model size and data scale, making it difficult to measure incremental progress towards better pattern machines that could be used for sequence transformation in robotics. Therefore, we introduce a new adjustable-difficulty benchmark, where the transformations are procedurally generated using the probabilistic context-free grammar (PCFG) in <cit.>.
These transformations include a collection of lexical rules that may be composed (reverse, shift, swap, repeat, etc.) over the tokens in the input sequence x^i_input to generate x^i_output (see Appendix).
Example composed transformations are given in <ref>.
The complexity of these transformations can be controlled by varying the number of tokens k used to express sequences x^i=(s_1,…,s_k), and increasing the number of lexical rules w used to define the transformation. This is simply the identity function when w=0, and progressively appears more complex (and more random) as w→∞. <ref> aggregates PCFG pattern completion accuracy across different LLMs over sequence length k = [1, 2, 4, 8, 16, 32] and complexity w = [0, 1, 3, 7, 15, 31], each with 100 runs.
In the Appendix, we show results for different k, w combinations to illustrate the way in which accuracy decreases as either k or w increases.
This benchmark provides a more unbiased evaluation of pattern reasoning capabilities in LLMs; PCFG completion accuracy improves with model scale, and correlates with ARC performance. We use PCFG for evaluation only (rather than for training data <cit.>) such that one can measure how pre-training regimes or modalities may improve general pattern recognition and completion capabilities across sequence transformations.
We will release the PCFG benchmark.
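To convey the flavor of the benchmark, a simplified sketch of composing w lexical rules into a single transformation (our own minimal rule set, not the exact grammar of the benchmark) is:

import random

# Primitive rewrite rules in the spirit of the benchmark's lexical rules (simplified).
RULES = [
    ("reverse", lambda s: s[::-1]),
    ("shift",   lambda s: s[1:] + s[:1]),
    ("swap",    lambda s: s[1:2] + s[0:1] + s[2:]),   # swap the first two tokens
    ("repeat",  lambda s: s + s),
]

def sample_transform(w, seed=0):
    """Compose w randomly chosen rules into one sequence-to-sequence transformation."""
    rng = random.Random(seed)
    chosen = [rng.choice(RULES) for _ in range(w)]
    def apply(seq):
        for _, rule in chosen:
            seq = rule(seq)
        return seq
    return [name for name, _ in chosen], apply

names, f = sample_transform(w=3, seed=1)
x = ["5", "3", "0", "7"]
print(names, x, "->", f(x))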
§ SEQUENCE COMPLETION
In this section – complementary to transformations (<ref>) – we assess if LLM pattern reasoning can extend to settings where an LLM predicts a continuation of time series data points generated by a simple function class. We then demonstrate that such sequence completion can be operationalized on real robots to extend partial demonstrations of simple periodic motions. In this setting, the input context consists of a sequence x = (s_1, …, s_k) which packs a series of l ≤ k discrete samples from a function f. For example, the sequence of tokens “ 1 2, 1 2, 1 2” may represent l=3 samples from a constant vector-valued function f that outputs (1, 2). We use the LLM to extrapolate f by predicting s_k+1, …, s_n autoregressively.
Completion of sinusoids. We start with a simple example where LLMs extrapolate a function of the form f(x) = a ·sin(bx). As in <ref>, tokenization matters; we found it effective to discretize outputs among integers 0–100, as these integers are represented by single tokens in the tokenizers of the LLMs we tested.
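One way to construct such a prompt (scaling and sampling choices here are ours) is:

import math

def sine_prompt(a=1.0, b=0.5, n=60):
    """Sample a*sin(b*x), rescale to integers in [0, 100], and join into a token sequence."""
    ys = [a * math.sin(b * i) for i in range(n)]
    lo, hi = min(ys), max(ys)
    bins = [round(100 * (y - lo) / (hi - lo)) for y in ys]
    return " ".join(str(v) for v in bins)

print(sine_prompt()[:80], "...")
# The LLM is asked to continue this sequence; predicted integers are mapped back to y-values.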
Figure: LLMs (text-davinci-003) can extrapolate various functions y = a·sin(bx) (top row), y = ax·sin(bx) (middle row), and y = (a/2^x)·sin(bx) (bottom row) given varying amounts of context. Overall, larger models make better predictions with lower error rates (right column). More context also helps prediction accuracy (light vs. dark).
<ref> shows completions of the sine wave by text-davinci-003 over 11 trials given 3 and 5 periods as context, as well as average distance (computed by Dynamic Time Warping) of the generated predictions to the ground truth function values across several LLMs.
Multiple LLMs produce near-perfect continuations of the sine wave, especially with more context (more periods of the sine wave). We additionally test the function family ax ·sin(bx) – in which the amplitude of the oscillations increases with x-values. Here, the LLM must extrapolate to new values unseen in the context, which highlights the utility of using a metric space for the outputs (0–100) where the LLM has priors over the scale of the different tokens. These functions also contain a “meta-pattern”: the y-values increase, decrease, and then increase in a single period – and the amplitude of the function also increases over time.
We also test the function a/2^x·sin(bx), reminiscent of a stabilizing controller. Across these three functions, we observe that greater context and larger scale LLMs yield higher quality predictions.
Table: LLM trajectory predictions for Table Sweeping improve with larger models.
Completion of periodic motions. We emphasize that the Sequence Completion capability above is domain-agnostic – we do not use any specialized prompts explaining what function should be completed, nor do we provide any linguistic grounding for the metric tokens. We can therefore operationalize this zero-shot capability of LLMs to simple open-loop motion extrapolation problems in robotics, by encoding a series of positions sampled from a demonstration, and predicting future positions. We test two simple tasks on a mobile robot manipulator: Table Sweeping and Whiteboard Drawing (both shown in <ref>).
In Table Sweeping, the goal is to continue a human-provided kinesthetic demonstration of sweeping a portion of a table (see middle <ref>). We encode the demonstration as a series of end-effector poses at approximately 3 Hz. Each demonstration lasts roughly 20-30 seconds. We represent the 7 DoF end-effector pose as a concatenation of Cartesian position and the quaternion, where each value is binned to an integer between 0 and 100, and the dimensions are delimited by spaces. We collect 30 demonstrations that demonstrate the sweeping motion. Note that demonstrations in this task are noisier and higher dimensional than the stylized sinusoid functions above. For each demonstration, we construct a context to consist of the first two-thirds of the provided demonstration, and treat the last one-third as the ground truth for the LLM to predict. Larger models quantitatively perform better with generally lower variance (see <ref>).
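A minimal sketch of the pose serialization described above might look as follows; the bin edges, workspace bounds, and delimiters are assumptions for illustration.

def bin_value(v, lo, hi, n_bins=100):
    """Map a scalar in [lo, hi] to an integer bin in [0, n_bins]."""
    v = min(max(v, lo), hi)
    return round(n_bins * (v - lo) / (hi - lo))

def encode_pose(pose, workspace=(-1.0, 1.0)):
    """pose: 7 floats (x, y, z, qx, qy, qz, qw) -> space-delimited integer string."""
    xyz = [bin_value(v, *workspace) for v in pose[:3]]
    quat = [bin_value(v, -1.0, 1.0) for v in pose[3:]]
    return " ".join(str(v) for v in xyz + quat)

def encode_trajectory(poses):
    # One 7-DoF pose per timestep (~3 Hz), poses separated by commas.
    return ", ".join(encode_pose(p) for p in poses)

demo = [[0.4, 0.1, 0.3, 0.0, 0.0, 0.0, 1.0]] * 5              # toy demonstration
context = encode_trajectory(demo[: int(len(demo) * 2 / 3)])   # first two-thirds
# target = encode_trajectory(demo[int(len(demo) * 2 / 3):])   # held-out final third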
In Whiteboard Drawing, the goal is to continue a scripted demonstration of drawing loops on a whiteboard (see <ref>). Loops are defined by parametric equations of the form x = a_x cos(bt) + d_x and y = a_y sin(bt) + c_y t + d_y. We execute the motions using position control and record the end-effector positions at 5 Hz, then discretize states to integers between 0 and 300, as finer motion is needed for this task. We provide part of the loop pattern in-context, and assess the ability to extrapolate from 2 loops to do a third loop. LLMs, in particular text-davinci-003, perform well – we show completions with different loop styles in the Appendix.
§ SEQUENCE IMPROVEMENT
In the previous two sections, we have investigated the ability of LLMs to in-context learn sequence transformations, and the ability to extrapolate simple periodic function classes from partial sequences, which enables them to complete some partial demonstrations that exhibit a pattern.
In this section, we explore the synergies between sequence transformation and completion –
and investigate improving a sequence, such as trajectories in a sequential decision process, along some metric, such as a reward function.
Here, we use an LLM to generate new sequences x^N conditioned on previous sequences (x^1, …, x^N-1), which can represent previous iterations of the same sequence (or of the policy it represents).
The improvement can also be return-conditioned, given a reward (or cost) function r(·). By inserting as the first token(s) of each sequence its corresponding total reward x = (r(x), s_1 , ..., s_k), we can prompt the model to conditionally “improve” by “just asking” <cit.> for a higher reward than those seen in-context (prompting LLMs out-of-the-box to act as Decision Transformers <cit.>). New “rollouts” of the sequences can return new reward labels that then replace the original desired rewards with actual rewards.
Iteratively performing this inference and accumulating more trajectories may jointly use the model's general notion of pattern transformation and extrapolation to perform improvement of sequences, which can be represented by numeric or symbolic tokens.
Note that there are practical considerations: depending on the task or model, not all sequences can fit in context, so options include keeping only the most recent sequences, or the ones with the highest rewards if available (we refer to the Appendix for more discussion of the nuances here).
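A minimal sketch of this return-conditioned prompting and relabeling loop is given below; the serialization format, the reward offset, and the llm/env interfaces are assumptions for illustration rather than the exact implementation.

def serialize(reward, states):
    """One trajectory as 'reward: s11 s12 ..., s21 s22 ..., ...' (integers 0-100)."""
    return f"{reward}: " + ", ".join(" ".join(str(v) for v in s) for s in states)

def build_context(trajectories, target_reward):
    """trajectories: list of (reward, states) pairs, kept sorted by reward.
    The prompt ends with a higher desired reward, 'just asking' the model to improve."""
    lines = [serialize(r, s) for r, s in sorted(trajectories, key=lambda t: t[0])]
    lines.append(f"{target_reward}:")
    return "\n".join(lines)

def improvement_step(trajectories, llm, env, offset=10):
    target = max(r for r, _ in trajectories) + offset
    prompt = build_context(trajectories, target)
    new_states = llm.generate_states(prompt)   # hypothetical decoding helper
    actual = env.evaluate(new_states)          # relabel with the achieved return
    trajectories.append((actual, new_states))
    return trajectories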
In this section, we perform a series of targeted experiments on simple tasks, aiming to explore the possibility of using pre-trained LLMs for sequence improvement in trajectory and policy optimization.
Figure: LLM agents can generate new trajectories with increasing returns for a Marker in Cup task (right). Performance varies with different ways of building the context (left).
Extrapolating simple meta-patterns among trajectories.
Sequence improvement with LLMs enables a simple form of trajectory optimization
for a Marker in Cup task on a Franka Panda robot, where we define the prefixed reward of a trajectory to be the negative distance between the final end-effector position and the cup (normalized between 0–100), and initialize the context with a collection of pre-recorded trajectories (stopping at 20%, 40%, 60%, and 80% of the way to the cup), delimited by newlines and prefixed by rewards (ranging roughly from 70-90; see prompts in the Appendix). For this task, we represent trajectories as sequences of Cartesian positions, each dimension normalized between 0–100. We find that text-davinci-003, to an extent, is able to generalize the pattern and generate a trajectory that achieves a reward >90. For this extrapolation to occur, we observe that the meta-patterns in the context are crucial: in <ref> (left), we compare the average reward achieved by text-davinci-003 over 11 trials (each with a different goal position) given contexts with different orderings of the trajectories (sorted by least-to-most reward, randomly permuted, or with/without reward annotations).
Figure: Average maximum return for LLM agents a1-d3 on Grid compared to random exploration (r).
Sampling higher-reward trajectories online. While LLMs can extrapolate from trajectories that exhibit clear meta-patterns among them, we find that this ability is more limited for less trivial setups. Consider a simple 9×9 Grid navigation environment with a fixed goal position that is randomly placed and a fixed starting position at the center of the grid. Episodes terminate after 20 timesteps, and the return is based on the distance from the agent to the goal at the final time step. This environment is inspired by the Dark Room environment from <cit.> but with a continuous reward function, reducing the exploration challenge.
The agent may take actions (1-5) corresponding to moving right, up, left, down, and no-op. We initialize the context buffer with 20 trajectories of agent grid positions generated by a random policy, sorted by total cumulative rewards. These trajectories exhibit a more complicated meta-pattern than in the Marker in Cup task; we do not find that LLMs can generate trajectories of higher reward immediately. With that said, we can consider an iterative, online setting, in which the LLM acts as an agent that interacts with the environment in a closed-loop fashion. The context consists of the highest reward trajectories in sorted order, appended with a higher reward than was seen in the context, plus states and actions from the current partial trajectory (see Appendix for details). Once an episode terminates, its trajectory is relabeled with the reward achieved, and inserted into the context at the appropriate position. In <ref>, we plot the maximum return attained by a1-d3 over 50 episodes, compared to random exploration, averaged over 5 trials. We find that a1-d1 tend to sometimes “exploit” the suboptimal behaviors represented in the context (which initially contains trajectories with rewards ranging from 6-78), whereas d3 can consistently find a solution to Grid within 50 episodes.
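One way to organize such an online closed-loop episode for Grid is sketched below; the buffer size, reward offset, and the llm/env interfaces are illustrative assumptions.

def fmt_obs(obs):                          # e.g. a grid position (3, 4) -> "3 4"
    return " ".join(str(v) for v in obs)

def run_episode(llm, env, buffer, reward_offset=10, max_steps=20):
    """Closed-loop episode: condition on the best past trajectories (sorted by return),
    ask for a higher return, act step by step, then relabel with the achieved return."""
    buffer.sort(key=lambda t: t[0])
    header = "\n".join(f"{r}: {traj}" for r, traj in buffer[-10:])   # best ones that fit
    target = buffer[-1][0] + reward_offset
    obs, partial = env.reset(), []
    for _ in range(max_steps):
        partial.append(fmt_obs(obs))
        prompt = header + f"\n{target}: " + ", ".join(partial) + ","
        action = llm.next_action(prompt)        # hypothetical single-token decode
        partial.append(str(action))
        obs, done = env.step(action)
        if done:
            break
    achieved = env.episode_return()             # the return this rollout actually got
    buffer.append((achieved, ", ".join(partial)))
    return achieved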
Figure: Different LLM agents (d3 - c1) on average can improve trajectories (total rewards) with more CartPole episodes (left), and discover “oscillatory behaviors” (right) to keep the CartPole upright (later episodes are brighter).
Result: discovering a simple CartPole controller. We show that using LLMs as agents in an online, closed-loop setting can discover a stabilizing controller for the CartPole environment (where observation tokens consist of pole angle and velocity, normalized to 0–100, actions are 0 (left) and 1 (right), maximum time horizon is 200). <ref> (left) shows that the total reward (number of timesteps the CartPole is kept upright) improves on average across various LLMs over 100 episodes (where the first 100 are generated by random exploration). <ref> (right) shows the evolution of trajectories over episodes of d3, demonstrating that it discovers “oscillatory” behaviors to keep the CartPole upright.
Figure: LLMs can in-context react to sparse reward signals online to encourage an end effector to reach a desired goal.
Result: online human-guided trajectory optimization. LLMs can also react to sparse binary reward signals (subjectively provided by a human) to adjust trajectories online. This is analogous to an implementation of “clicker training” <cit.> used for training dogs, but instead applied to robots.
In this setup, at every time step (2s), the robot executes an action corresponding to a movement of its end-effector in a particular direction. The human observes the action and chooses whether to give a reward (by using the clicker) to encourage or discourage similar behaviors. Episodes reset after 30 seconds, and the first two episodes are generated by random exploration. The (reward, state, action) tuples are added as in-context examples (with negative example followed by positives, and an equal number of each) to generate the next action based on the current state. An example context format is given in the Appendix. As shown in <ref>, applying LLMs' sequence improvement capabilities in this way enables a human to interactively guide the robot to push an object via in-context sequence improvement.
§ DISCUSSION
We are excited about the opportunities of LLMs as pattern machines for robotics – from reasoning and extrapolating complex patterns as a prior for control, to online optimization of closed-loop policies via sequence optimization. These capabilities present several implications, including (i) supplementary perspectives on the role of language pretraining for generalist end-to-end robot learning models <cit.>, and (ii) in-context learning of arbitrary patterns as a driving mechanism for policy improvement. LLMs also show promise for mixed autonomy settings – real-time pattern extrapolation for assistive teleoperation. We expect many of these abilities to continue improving as large models expand from learning the patterns within language-only datasets, to multimodal domains (images, videos, etc.). While this work investigates the scope of in-context generalization on fairly simple settings without additional data collection or model training, these capabilities presumably may be significantly improved via domain-specific objectives and finetuning <cit.>.
Limitations & Future Work. Today, the inference costs (and monetary costs) of using LLMs in the control loop are quite high. Predicting the next token for every sequence, every dimension of every time step in a trajectory, involves querying an LLM. State-action spaces which are higher dimensional and/or of greater precision also result in longer trajectory representations, and thereby the extent to which they can be extrapolated or sequence optimized is bounded by the context length of models. These limitations may prevent deploying these models on more complex tasks in practice, but could be lifted over time as current efforts in the community continue to drive improvements in LLM quantization <cit.> and inference efficiency <cit.>. As with any other language-only model, LLM-based control may (i) be unpredictable, and (ii) lack visual/physical grounding; thus, it is not currently suitable for application outside of constrained lab settings. We leave the exploration of these important topics for future work.
The authors would like to acknowledge Jie Tan, Peng Xu, Carolina Parada, Alexander Herzog, Jensen Gao, Joey Hejna, and Megha Srivastava for valuable feedback and discussions.
§ SEQUENCE TRANSFORMATION
§.§ Abstract Reasoning Corpus: Additional Details and Examples
In Section 4 of the main paper, we describe how ARC problems require reasoning about a range of different types of pattern operations – infilling, counting, translating and rotating shapes, and more. In <ref>, we show sample problems among the 800 ARC problems for which text-davinci-003 correctly generalizes the pattern shown in a few train examples to a test example. In <ref>, we show sample problems that are not correctly solved by text-davinci-003. In <ref>, we show an example context for an ARC problem encoded as integers.
[
basicstyle=,
backgroundcolor=,
breaklines=true,
caption=Example context format for an ARC problem (only one input-output example is shown, along with a query input).,
captionpos=b,
label=lst:arc-prompt
]
input:
0, 0, 0, 0
0, 3, 4, 0
0, 7, 6, 0
0, 0, 0, 0
output:
3, 0, 0, 4
0, 0, 0, 4
0, 0, 0, 0
0, 0, 0, 0
7, 0, 0, 6
input:
0, 0, 0, 0
0, 5, 6, 0
0, 8, 3, 0
0, 0, 0, 0
output:
§.§ PCFG Benchmark: Additional Details and Ablations
Our PCFG benchmark is a procedurally generated, adjustable-difficulty benchmark for measuring abstract sequence transformation capabilities in LLMs, based on the PCFG from Hupkes et al. 2020. In <ref>, we show illustrations of the primitive operations in the PCFG that can be applied on one or two sequences of tokens. In <ref>, we show independent ablations of sequence length (number of tokens) k and complexity (number of rules) w in the sequence transformations, illustrating the way in which the solve rate decreases as either factor increases. In <ref>, we show an example context for a PCFG problem on integer sequences. We will make the PCFG benchmark available at <https://general-pattern-machines.github.io>.
[
basicstyle=,
backgroundcolor=,
breaklines=true,
caption=Example context format for a PCFG problem (two input-output examples are shown, along with a query input).,
captionpos=b,
label=lst:pcfg-prompt
]
6 7 7 8 1 5 9 8 9, 1 5 9 8 9 7 7 6 6; 4 3 0 3 5 0 2 3 8; 5 0 2 3 8 3 3 4 4; 1 3 3 3 7 0 1 9 9,
§.§ Token Invariance for New Token Embeddings
In Section 4 we have argued that LLMs are, to a certain extent, invariant to the choice of alphabet a pattern is encoded with.
In this section, we present an experiment that investigates this token invariance even further by introducing new token embedding vectors the model has not seen during training.
We sample K new embedding vectors as Gaussians using the mean and 1 or 2 standard deviations of the original LLM embedding matrix statistics in each embedding dimension.
This way, we create a new token embedding matrix consisting of the newly sampled token embeddings the model has not seen during training, together with the original embeddings of the separating tokens (comma, full stop).
Fig. <ref> shows the success rate of correctly choosing the target token in a single-token prediction task when using the newly sampled embeddings in comparison with the native embedding matrix.
As one can see, for 1σ noise sampling, the model is able to solve the task with the new embeddings with similar performance as with the native embeddings.
In case of 2σ, the performance degrades.
The results are obtained with K=100, averaged over 3 random seeds when sampling the token embeddings, 30 instances each, and a context length of 5, 10, or 20 examples. The LLM is the 8B parameter variant of <cit.>.
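A rough sketch of this resampling procedure (NumPy; the variable names and the stand-in embedding matrix are our own):

import numpy as np

def sample_new_embeddings(emb_matrix, k=100, scale=1.0, seed=0):
    """Sample k unseen token embeddings as Gaussians matched to the per-dimension
    mean and (scaled) standard deviation of the original embedding matrix.
    Separator tokens (comma, full stop) keep their original embeddings (not shown)."""
    rng = np.random.default_rng(seed)
    mean = emb_matrix.mean(axis=0)
    std = emb_matrix.std(axis=0)
    return rng.normal(loc=mean, scale=scale * std, size=(k, emb_matrix.shape[1]))

# Stand-in embedding matrix (vocab 1000, dim 64) for illustration:
orig = np.random.default_rng(1).normal(size=(1000, 64))
new_1sigma = sample_new_embeddings(orig, k=100, scale=1.0)
new_2sigma = sample_new_embeddings(orig, k=100, scale=2.0)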
§ SEQUENCE COMPLETION
§.§ Table Sweeping: Additional Details
In Section 5 of the main paper, we demonstrate how sequence completion capabilities can be applied to continuation of partial motions, such as sweeping a table. In <ref>, we show the average DTW distance between predicted and ground truth trajectory completions in the Table Sweeping task, given 66% of the trajectory as context, over 30 trials. Each full trajectory consists of 9 sweeping motions across a table. We compare completions made by various language models. We find that larger models generally perform better; text-davinci-003 performs the best, and also has the lowest variance. On our website, we show qualitative examples of text-davinci-003 completing a table sweeping motion given by a human demonstration.
§.§ Whiteboard Drawing: Qualitative Results
In <ref>, we show example completions for three different loop styles by text-davinci-003 over three trials. The completions generally match the overall shape shown in the two loops given as context. However, the results also qualitatively illustrate that fine motion patterns can be challenging to predict precisely.
§ SEQUENCE IMPROVEMENT
§.§ Marker in Cup: Additional Details
In this task, we use LLMs to generate improved trajectories (according to a reward metric) given a context of trajectories that have increasing returns. For this task, states are Cartesian (x, y, z) positions, with each dimension normalized between 0 and 200, trajectories are series of states that can be executed via position control, and the return of a trajectory is proportional to the negative distance to the goal (cup) plus an offset. We form the trajectories in the context as follows: we take a full trajectory which attains a reward of 100 and construct trajectories that stop moving 20%, 40%, 60%, and 80% of the way to the goal (such that all trajectories are 50 timesteps). We condition the LLM to generate a 100-reward trajectory by prompting it with “100: start state". An excerpt of an example context is shown in <ref>. The results in Figure 5 from the main paper are over 11 trials, each with a different goal position.
[
basicstyle=,
backgroundcolor=,
breaklines=true,
caption=Example context (excerpt) for a Marker in Cup, illustrating the (reward: state, state, state...) format..,
captionpos=b,
label=lst:marker-prompt
]
71: 104 83 123, 104 83 123, ...
72: 104 83 123, 104 83 123, ...
80: 104 83 123, 104 83 123, ...
90: 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 105 83 123, 105 83 123, 106 83 123, 106 83 123, 107 83 123, 108 83 122, 109 83 122, 110 83 122, 111 83 121, 112 82 120, 113 82 119, 113 82 118, 114 81 118, 115 81 117, 115 81 116, 115 80 115, 116 80 114, 116 80 113, 117 79 112, 117 79 111, 118 79 110, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109
100: 104 83 123
§.§ Grid: Additional Details
In the Grid environment, observations are x, y positions represented by integers 0–8 for each coordinate. There are five possible actions (1, 2, 3, 4, 5) corresponding to (right, up, left, down) movement by one space and no-op. A goal is randomly placed in the grid. The agent (which is initialized at the center position) receives a reward of 100 - 10 * distance from the goal to the agent's final position. Episodes terminate after 20 time steps. For our experiments, we limit the context length to 1024 tokens. At each iteration, the LLM is prompted to generate a trajectory with the maximum seen return from the buffer plus a randomly selected offset of up to 20.
§.§ CartPole: Additional Details
We use a simplified version of the CartPole environment in OpenAI Gym. Observations are two-dimensional (corresponding to pole angle and velocity, normalized to 0-100) and the maximum time horizon is 200. There are two possible actions (1, 2) corresponding to (left, right), and the agent gets +1 reward for every time step that the CartPole is kept upright. In <ref>, we show an example context excerpt for CartPole, where a trajectory history is appended with an encoding of the current trajectory.
[
basicstyle=,
backgroundcolor=,
breaklines=true,
caption= Example context format for a CartPole run. A trajectory history (with each trajectory in the format reward: observation, action, observation, action ...) is followed by an encoding of the current trajectory, up to the current observation.,
captionpos=b,
label=lst:cartpole-prompt
]
52: 40 50, 1, 40 54, 2, 41 49, 1, 41 54, 1, ...
60: 45 50, 2, 45 45, 1, 44 50, 2, 44 45, 1, ...
75: 52 50, 1, 52 55, 2, 53 50, 2, 53 46, 2, ...
98: 44 50, 1, 44 55, 2, 45 50,
Below, we discuss some additional considerations for forming the context from the trajectory history.
Context Length. When context length is longer, more trajectories can fit in the context (which yields more in-context “training data" that could potentially be used to generalize to higher rewards, but also requires the LLM to attend over more tokens). Context length is a limiting factor of using current LLMs in our trajectory improvement setting: the number of tokens required to represent a trajectory history scales with the observation dimensionality, action dimensionality, time horizon, and number of trajectories. For our CartPole experiments, we limit the context to 1024 tokens (which is the maximum context length for text-ada-001, text-babbage-001, and text-curie-001 models).
Action Representation. In initial experiments, we found that the tokens used to represent the action space (e.g. “0" for left, “1" for right) can seemingly affect the ability of an LLM to improve trajectories in the online setting. For example, we observed that if “0" is included in the action space, LLMs may “default" to sampling “0" (likely due to token-specific priors). Therefore, for our experiments, we use 1-indexed integer action representations, which appears to alleviate the bias towards choosing a particular action. The fact that action representation can sometimes affect performance complements our observations in the Sequence Transformation section, in which we find that token mapping invariance holds to some extent, but not entirely.
§.§ Clicker Training: Additional Details
In our clicker training example, the observation consists of the end-effector position and the approximate object position as determined by visual input, with the (x,y,z) values normalized between 0 and 300. Actions correspond to movements of the end-effector (normalized between 0 and 100, such that 50,50,50 represents no movement). A sample context is given in <ref>.
[
basicstyle=,
backgroundcolor=,
breaklines=true,
caption= Example context format for clicker training. (Reward, observation, action) tuples are ordered by reward (with a click corresponding to a reward of 1) with an equal number of reward 0 and reward 1 transitions represented in the context.,
captionpos=b,
label=lst:clicker-prompt
]
0: 80,49,138,109,54,133; 45,44,55
0: 82,32,155,109,54,133; 48,59,48
0: 82,32,155,109,54,133; 48,59,48
1: 88,31,154,109,54,133; 45,54,43
1: 85,36,146,109,54,133; 57,54,46
1: 93,40,142,109,54,133; 44,52,43
1: ...
|
http://arxiv.org/abs/2307.04665v1 | 20230710160305 | Peculiarities of beta functions in sigma models | [
"Oleksandr Gamayun",
"Andrei Losev",
"Mikhail Shifman"
] | hep-th | [
"hep-th",
"cond-mat.stat-mech"
] | |
http://arxiv.org/abs/2307.04039v1 | 20230708195157 | A Strong Composition Theorem for Junta Complexity and the Boosting of Property Testers | [
"Guy Blanc",
"Caleb Koch",
"Carmen Strassle",
"Li-Yang Tan"
] | cs.CC | [
"cs.CC",
"cs.DS"
] |
A Strong Composition Theorem for Junta Complexity and the Boosting of Property Testers
Guy Blanc, Caleb Koch, Carmen Strassle, Li-Yang Tan
=======================================================================================
We prove a strong composition theorem for junta complexity and show how such theorems can be used to generically boost the performance of property testers.
The ε-approximate junta complexity of a function f is the smallest integer r such that f is ε-close to a function that depends only on r variables. A strong composition theorem states that if f has large ε-approximate junta complexity, then g ∘ f has even larger ε’-approximate junta complexity, even for ε’ ≫ε. We develop a fairly complete understanding of this behavior, proving that the junta complexity of g ∘ f is characterized by that of f along with the multivariate noise sensitivity of g. For the important case of symmetric functions g, we relate their multivariate noise sensitivity to the simpler and well-studied case of univariate noise sensitivity.
We then show how strong composition theorems yield boosting algorithms for property testers: with a strong composition theorem for any class of functions, a large-distance tester for that class is immediately upgraded into one for small distances. Combining our contributions yields a booster for junta testers, and with it new implications for junta testing. This is the first boosting-type result in property testing, and we hope that the connection to composition theorems adds compelling motivation to the study of both topics.
§ INTRODUCTION
The growth in the sizes of modern datasets is both a blessing and a curse. These datasets, many of which now come with billions of features, contain a wealth of information that machine learning algorithms seek to tap into. On the other hand, their size stands in the way of the opportunities they present, as many of the algorithms that we would like to run on them simply cannot handle their dimensionality.
Thankfully, for many tasks of interest the vast majority of features are irrelevant. This motivates the design of algorithms that are able to quickly home in on the small number of relevant features, and whose efficiency scales gracefully with the number of such features. Already in the early 1990s Blum <cit.> (see also <cit.>) proposed the clean theoretical challenge of learning an unknown r-junta, a function that depends on r≪ n many of its n variables. Quoting <cit.>, “It is my belief that some of the most central open problems in computational learning theory are, at their core, questions about finding relevant variables.” This is now known simply as the junta problem and is the subject of intensive study <cit.>, having distinguished itself as “the single most important open question in uniform distribution learning" <cit.>.
The premise of the junta problem suggests an even more basic algorithmic problem, that of determining if an unknown function is even an r-junta to begin with. This is the problem of testing juntas, introduced by Fischer, Kindler, Ron, Safra, and Samorodnitsky <cit.> and subsequently studied in numerous works <cit.>. Junta testers are also at the heart of the best known testers for numerous other classes of functions, the key insight being that many functions are well-approximated by small juntas (see <cit.> and Chapter 5 of <cit.> for more on this connection). The surveys by Blais <cit.> give broad overviews of various junta testers and their applications throughout theoretical computer science.
This work. These algorithmic applications motivate the study of approximability by small juntas as a complexity measure. For a function f : ^n → and a distribution 𝒟 over ^n, the ε-approximate junta complexity of f with respect to 𝒟, denoted J_𝒟(f,ε), is the smallest integer r such that f is ε-close to an r-junta. Among the most basic questions one can ask about any complexity measure of functions is how it behaves under composition. In the first part of this paper we develop, from the ground up, a fairly complete understanding of this question for junta complexity. We prove a near-optimal composition theorem (<Ref>) that is built on notions of noise stability, both classical and new. In the second part we draw a general connection (<Ref>) between the type of composition theorem that we prove—a strong composition theorem, which we will soon define—and property testing, showing how they can be used to design the first generic boosters for property testers. Combining our two main contributions yields new implications for junta testing.
§ OUR RESULTS AND TECHNIQUES
§.§ First main result: A strong composition theorem for junta complexity
Composition theorems are statements about hardness amplification: the goal is to understand the extent to which the disjoint composition (g ∘ f)(x) g(f(x^(1)),…,f(x^(k))) is more complex than f itself, and how this depends on intrinsic properties of the combining function g. For approximate measures such has junta complexity, we are furthermore interested in strong composition theorems, statements of the form:
J_𝒟^k(g∘ f, ε_large)≫ J_𝒟(f, ε_small) even for ε_large≫ε_small.
In words, the composed function requires much more resources—in our case, much larger junta approximators—even if one only seeks a much coarser approximation. Strong composition theorems stand in contrast to weak ones that only amplify hardness with respect to one of the two parameters, either resources or approximation quality only. The canonical example in this context is Yao’s XOR lemma <cit.>, which says that if f is mildly hard to approximate with size-s circuits, then XOR∘ f is extremely hard to approximate with size-s’ circuits. A long-recognized downside of this important result, inherent to all known proofs of it <cit.> and its generalizations to arbitrary combining functions <cit.>, is the fact that it is only known to hold for s’ ≪ s, whereas intuitively it should hold even for s’ ≫ s.
Composition theorems, both weak and strong, have been studied for a variety of complexity measures
but appear to have been underexplored for junta complexity. One reason may be that the question appears deceptively simple. Indeed, things are completely straightforward in the zero-error setting, where we have the intuitive identity J(g ∘ f, 0) = J(g,0)· J(f,0). However, we show that the question becomes surprisingly intricate once error is allowed.
§.§.§ Context and motivation: Counterexamples to natural composition theorems
The question proves to be tricky even in the special case where the combining function g is symmetric. We now state a sequence of three seemingly intuitive conjectures for this special case. While false, these conjectures and their counterexamples will motivate and lead us to the statement of our actual composition theorem. (Details and proofs of the counterexamples discussed in this section are given in <Ref>.)
The following notation will be useful for us throughout this paper:
Notation. For a function f : ^n→, distribution 𝒟 over ^n, and integer r, we write f̃_𝒟,r to denote the best r-junta approximator of f with respect to 𝒟. When 𝒟 is clear from context, we simply write f̃_r.
Conjecture 1. It will be convenient for us to consider composition theorems in their contrapositive form. Suppose we would like to approximate g ∘ f with an R-junta, say with respect to the uniform distribution. If g is a k-variable symmetric function, how would we go about constructing an approximator that achieves the highest accuracy possible? Since g is symmetric, one may be inclined to divide the “junta budget” of R evenly among the k inner functions and conjecture that
g ∘f̃_R/k = g(f̃_R/k,…,f̃_R/k)
achieves the best, or close to the best, accuracy among all R-junta approximators.
However, this is badly false. Let g be the k-variable Majority function and f the n-variable Parity function. For any choice of R satisfying R/k < n (i.e. each inner Parity receiving a budget that falls short of its arity), we have Pr[g∘f̃_R/k ≠ g∘ f] = 1/2. This is because it is “all or nothing” when it comes to approximating Parity: no (n-1)-junta can achieve accuracy better than that of a constant approximator. The best strategy is therefore to allocate a full budget of n to as many of the inner Parities as possible (i.e. R/n many of them), and a budget of zero to the others. This shows a gap of 1/2 versus 1-o(1) in the accuracies of the “divide budget equally” strategy and the optimal one.
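This gap can also be checked numerically; the rough Monte Carlo sketch below (our own illustration, with k, n, and the budget chosen arbitrarily) uses the fact that a Parity of uniformly random bits is itself a uniformly random sign, so each inner Parity can be simulated by a fair coin. The equal split makes every inner approximator a constant, while the all-or-nothing split fully funds ⌊R/n⌋ of the inner Parities.

import random

def trial(k, funded, rng):
    # Each Parity of n uniform bits is a uniform +/-1 sign, and the k blocks are independent.
    parities = [rng.choice([-1, 1]) for _ in range(k)]
    truth = 1 if sum(parities) > 0 else -1           # Majority of the k Parities (k odd)
    equal_split = 1                                   # every inner approximator is a constant
    s_known = sum(parities[:funded])                  # the fully funded inner Parities
    all_or_nothing = 1 if s_known >= 0 else -1        # combine the known values by their sign
    return equal_split == truth, all_or_nothing == truth

rng = random.Random(0)
k, n = 15, 16
R = k * (n - 1)                  # equal split gives each block R/k = n - 1 < n variables
funded = R // n                  # instead fund 14 of the 15 inner Parities fully
results = [trial(k, funded, rng) for _ in range(100_000)]
print(sum(a for a, _ in results) / len(results))     # ~0.50 accuracy for the equal split
print(sum(b for _, b in results) / len(results))     # ~0.90 accuracy, approaching 1 as n grows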
Conjecture 2. In light of this counterexample, one may then conjecture that the best strategy is to partition the junta budget optimally among the k inner functions and feed the respective approximators of f into g. That is, the conjecture is that the best approximator is of the form:
g(f̃_r_1,…,f̃_r_k) where ∑_i=1^k r_i = R.
While this is true for our example above, it is again badly false in general. In fact, the error of such an approximator can be close to 1, even worse than the trivial bound of ≤1/2 achievable with a constant approximator.
Our counterexample reveals another counterintuitive aspect of the overall problem. Consider an approximator for g∘ f of the form g(f̃_r_1,…,f̃_r_k). We show its approximation accuracy can increase if we replace one of the inner approximators for f with a worse one: e.g. if we replace f̃_r_1 with f̃_r_1’ where r_1’ < r_1. In more technical terms that we will soon define: while the noise stability of a function is, as one would expect, monotone in the noise rate, we show that the natural generalization of it where the corruption probabilities of 0’s and 1’s are decoupled (defined in <Ref>) is not monotone.
Conjecture 3. Finally, we consider a conjecture that is far laxer than either of the previous ones. It simply states that the optimal approximator for the composed function g∘ f is one of composed form:
h(q^(1),…,q^(k)) for some h : ^k → and q^(1),…,q^(k) : ^n →,
where the relevant variables of q^(i) fall within the ith block of variables.
We show (to our own surprise) that this conjecture is still false: there are composed functions for which the optimal approximator is not of composed form. However, unlike the first two conjectures, our work shows that this conjecture is morally true in a precise sense.
§.§.§ Our Strong Composition Theorem
Our strong composition theorem implies a close quantitative relationship between the error of the optimal approximator and that of the optimal composed form approximator, and indeed one with a specific structure that we call canonical:
We say that a composed form approximator for g∘ f is canonical if it is of the form:
h(f̃_r_1,…,f̃_r_k),
where h : ^k→ is the function:
h(y) = sign( E_{𝐱 ∼ 𝒟^k}[ (g∘ f)(𝐱) | y_i = f̃_{r_i}(𝐱^(i)) for all i ∈ [k] ] ).
For intuition regarding the choice of h, we note that for the fixed k-tuple of functions f̃_r_1,…,f̃_r_k, it is the combining function that minimizes error with respect to g∘ f.
Canonical composed form approximators are therefore ones whose individual components are “locally" optimal: each f̃_r_i is the optimal r_i-junta approximator for f, and h the optimal way of combining the f_r_i's. Our strong composition theorem will say that we can get very close to the globally optimal approximator this way.
The notion of noise stability is central to our work:
For any μ∈ (-1,1) and vector ρ⃗∈ [0,1]^k, we define the multivariate noise stability of g as
Stab_{μ,ρ⃗}(g) = E[g(𝐲)g(𝐳)]
where independently for each i ∈ [k], we draw (𝐲_i, 𝐳_i) as follows: Using π_μ to denote the unique distribution supported on {±1} with mean μ, 𝐲_i ∼ π_μ, and
𝐳_i = 𝐲_i with probability ρ⃗_i
𝐳_i = an independent draw from π_μ with probability 1 - ρ⃗_i.
When μ = 0 we simply write Stab_{ρ⃗}(g).
This definition allows for a different noise rate for each coordinate, generalizing the more commonly studied definition where the noise rates are the same for every coordinate (see e.g. Chapter 2 of <cit.>). We use the terms multivariate noise stability and univariate noise stability to distinguish these definitions. Even in the case of symmetric combining functions g, our strong composition theorem will naturally involve its multivariate noise stability (necessarily so, as already suggested by the counterexample to Conjecture 1).
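For concreteness, the definition can be evaluated exactly by brute force for small k; the following sketch (our own, defaulting to μ = 0) sums over all (y, z) pairs weighted by their probabilities under the correlated draw.

import math
from itertools import product

def stab(g, rho, mu=0.0):
    """Exact multivariate noise stability E[g(y) g(z)] for small k."""
    p_plus = (1 + mu) / 2
    total = 0.0
    for y in product([-1, 1], repeat=len(rho)):
        p_y = math.prod(p_plus if v == 1 else 1 - p_plus for v in y)
        for z in product([-1, 1], repeat=len(rho)):
            # z_i copies y_i with probability rho_i, else is a fresh draw from pi_mu
            p_z_given_y = math.prod(
                r * (zi == yi) + (1 - r) * (p_plus if zi == 1 else 1 - p_plus)
                for yi, zi, r in zip(y, z, rho))
            total += p_y * p_z_given_y * g(y) * g(z)
    return total

parity = lambda y: math.prod(y)      # k-variable Parity
dictator = lambda y: y[0]            # Dictator on the first coordinate
rho = [1.0, 0.6, 0.6, 0.6]           # noise on three of the four coordinates
print(stab(parity, rho))             # 0.6**3 = 0.216
print(stab(dictator, rho))           # 1.0 (its one relevant coordinate is noiseless)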
We present our strong composition theorem as a sequence of two parts that each carries a standalone message, the first of which formalizes the fact that the optimal canonical composed form approximator is a good proxy for the actual optimal approximator. It will be more convenient for us to state our results in terms of advantage instead of error, the two quantities being related via the identity advantage = 1-2·error. Also, for notational clarity we only state here the special case where f is balanced (i.e. _𝒟[f] = 0).
Theorem 1 (Part I).
Let f : {±1}^n → {±1} and g : {±1}^k → {±1} be arbitrary functions and 𝒟 be any distribution over {±1}^n. Assume that E_𝒟[f] = 0. For the task of approximating g ∘ f under 𝒟^k with an R-junta, there is a correlation vector ρ⃗ ∈ [0,1]^k such that
Stab_{ρ⃗}(g)² ≤ Advantage of optimal canonical composed form approximator
≤ Advantage of optimal approximator ≤ √(Stab_{ρ⃗}(g)).
For most applications of composition theorems, including those in this paper, the parameters of interest are such that the quartic gap between the upper and lower bounds above is inconsequential. (In particular, if the advantage of the optimal canonical composed form approximator diminishes to 0 as k grows, our bounds imply that the same is true for the actual optimal approximator. Indeed, the two rates of convergence are the same up to a polynomial factor.)
Part II of <Ref> elaborates on the correlation vector ρ⃗, showing how it is determined by the junta complexity of f and the noise stability of g:
Theorem 1 (Part II: Explicit description of ρ⃗). The correlation vector ρ⃗ ∈ [0,1]^k in Part I is the vector that maximizes Stab_{ρ⃗}(g), subject to the constraint:
ρ⃗_i = E_𝒟[f · f̃_{r_i}] for all i ∈ [k] where ∑_{i=1}^k r_i = R.
Taken together, the two parts of <Ref> show that the junta complexity of g∘ f is tightly characterized by the junta complexity of f and the multivariate noise stability of g. It furthermore gives a simple and explicit strategy for constructing a near-optimal approximator: first partition the junta budget optimally among the k inner functions; next approximate each inner function optimally with its allocated budget; and finally combine these approximators in the optimal way.
Naturally, it would be preferable to understand the strategy for constructing the actual optimal approximator, but our counterexamples suggest that it defies a clean and interpretable description even for symmetric g (indeed, even for g being the And function).
Corollary: Highly noise sensitive functions strongly amplify junta complexity. <Ref> yields a hardness amplification statement of the form <ref> in the following way. Suppose f is mildly hard for r-juntas, i.e. Pr[f̃_r ≠ f] ≥ ε_small. Our goal is to show that g ∘ f is extremely hard for R-juntas, Pr[(g∘ f)_R ≠ g∘ f] ≕ ε_large ≫ ε_small, even for R ≫ r. For any partition of R = ∑_{i=1}^k r_i, at most a 0.999-fraction of the r_i's exceed 1.01·R/k, which is less than r whenever R ≤ 0.99k·r. <Ref> therefore tells us that the advantage of the optimal R-junta is upper bounded by
√(Stab_{ρ⃗}(g)) where at least a 0.001-fraction of ρ⃗'s coordinates are at most 1-2·ε_small.
(Equivalently, at least a 0.001-fraction of coordinates receive at least an ε_small amount of noise.)
This motivates the following definition:
The (δ, ε)-noise stability of a function g : {±1}^k → {±1} is the quantity
max{ Stab_{ρ⃗}(g) : at least a δ-fraction of ρ⃗'s coordinates are at most 1-2ε }.
By the monotonicity of noise stability, this maximum is achieved by a ρ⃗ with exactly a δ-fraction of coordinates being exactly 1-2ε, and the remaining (1-δ)-fraction being 1.
We have sketched the following corollary of <Ref>:
Let g : {±1}^k → {±1} be a function whose (1/2, ε_small)-noise stability is at most τ. Then for all functions f,
J_{𝒟^k}(g∘ f, 1/2·(1-√τ)) ≥ 0.99k · J_𝒟(f, ε_small).
In words, g ∘ f requires much larger junta approximators, an Ω(k) multiplicative factor more, even if we allow much larger error, 1/2·(1-√τ) ≕ ε_large instead of ε_small. As two extreme examples of combining functions g,
∘ The (0.001, ε_small)-noise stability of the k-variable Parity function is (1-2·ε_small)^Ω(k), making it an excellent amplifier of junta complexity.
∘ The (0.001, ε_small)-noise stability of a dictator function g(x) = x_i is 1, making it a terrible amplifier of junta complexity as one would expect: if g is a dictator function then g∘ f ≡ f is of course no more complex than f itself.
The partial-noise stability of these two specific examples is straightforward to compute, but the calculations quickly become unwieldy even for other basic functions. In addition to being a quantity of independent technical interest, the upcoming connections between strong composition theorems and the boosting of property testers will also motivate understanding the partial-noise stability of broad classes of functions beyond just parity and dictator. (Roughly speaking, to boost testers for a property 𝒫 we need to analyze a function g such that 𝒫 is closed under g.)
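As a quick numerical sanity check of the two examples above, the (δ, ε)-noise stability can be computed for small k by enumerating which coordinates receive noise; this reuses the brute-force stab function and the math import from the earlier sketch, and the helper name and parameters are our own.

from itertools import combinations

def partial_noise_stability(g, k, delta, eps):
    """(delta, eps)-noise stability: maximize stab(g, rho) over rho with a delta
    fraction of coordinates set to 1 - 2*eps and the rest set to 1."""
    m = max(1, round(delta * k))
    best = float("-inf")
    for noised in combinations(range(k), m):
        rho = [1 - 2 * eps if i in noised else 1.0 for i in range(k)]
        best = max(best, stab(g, rho))
    return best

k, eps = 6, 0.1
print(partial_noise_stability(lambda y: math.prod(y), k, 0.5, eps))  # (1 - 2*eps)**3 = 0.512
print(partial_noise_stability(lambda y: y[0], k, 0.5, eps))          # 1.0: noise avoids coordinate 1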
Our next result is a general technique that yields sharp bounds on the partial-noise stability, and more generally the multivariate noise stability, of all symmetric functions.
The multivariate noise sensitivity of symmetric functions. For a symmetric function g : ^k → one intuits that its multivariate noise stability at a vector ρ⃗∈ [0,1]^k should be related to its univariate noise stability at a value ρ^⋆∈ [0,1] that is an “average" of the coordinates of ρ⃗. (This is certainly not true for general functions; consider for example the dictator function.) Using techniques from the study of negative association, we formalize this intuition and prove that indeed it is sandwiched by the arithmetic and geometric means of the coordinates of ρ⃗:
Let g : {±1}^k → {±1} be a symmetric function, μ ∈ (-1,1), and ρ⃗ ∈ [0,1]^k. Define
ρ_geo ≔ (∏_{i ∈ [k]} ρ⃗_i)^{1/k} and ρ_ari ≔ (1/k)∑_{i ∈ [k]} ρ⃗_i.
Then
Stab_{μ,ρ_geo}(g) ≤ Stab_{μ,ρ⃗}(g) ≤ Stab_{μ,ρ_ari}(g).
Furthermore, the lower bound holds under the weaker assumption that g is transitive.
The more “reasonable" ρ⃗ is, the closer the upper and lower bounds of <Ref> are. In particular, we get the following bound on the (δ, ε)-noise stability of symmetric functions:
For any symmetric function g : {±1}^k → {±1}, δ ∈ (0,1), and ε ∈ (0,1/2), the (δ, ε)-noise stability of g is equal to Stab_{μ, ρ^⋆}(g) for some ρ^⋆ ∈ [0,1] satisfying
1 - 2εδ - O(ε²) ≤ ρ^⋆ ≤ 1 - 2εδ.
Recall that ε corresponds to the initial inapproximability factor ε_small in <Ref>, and so the additive gap of O(ε²) between the upper and lower bounds is indeed small for our intended application.
§.§ Second main result: Composition theorems and boosting of property testers
Composition theorems are most naturally thought of as statements about hardness amplification, and indeed that is how they are most commonly used. As our second main contribution, we show how they can be used fruitfully in their contrapositive form as meta-algorithms. In more detail, we show how they can be used to generically boost the performance guarantees of property testers. While boosting is a story of success in both the theory and practice of machine learning, to our knowledge the analogous concept in property testing has not yet been considered. The connection that we draw can be instantiated with either strong or weak composition theorems, but as we now see, the parameters are qualitatively better in case of strong composition theorems.
Within property testing, a major strand of research, initiated by Parnas, Ron, and Samorodnitsky <cit.>, concerns testing whether an unknown function has a concise representation. Consider any parameterized property 𝒫 = {𝒫_s}_s ∈ℕ of boolean functions: size-s parities, size-s juntas, size-s decision trees, s-sparse polynomials over various fields, and so on. The task is as follows:
Given queries to an unknown function f : {±1}^n → {±1}, access to i.i.d. draws from a distribution 𝒟, and parameters s, s' ∈ ℕ and ε > 0, distinguish between:
∘ Yes: f ∈𝒫_s
∘ No: f is ε-far under 𝒟 from every function in 𝒫_s'.
Note that the task is more challenging as ε gets smaller, and as the gap between s and s' gets smaller. We show how a composition theorem for 𝒫 allows one to trade off these two parameters: a tester for large ε can be upgraded into one for small ε, at the price of larger gap between s and s'. The stronger the composition theorem, the more favorable this tradeoff is, and with an optimally strong composition theorem one is able to improve the ε-dependence without any associated price in the multiplicative gap between s and s':
Let 𝒫 = {𝒫_s}_{s∈ℕ} be a property and g : {±1}^k → {±1} be such that 𝒫 behaves linearly w.r.t. g. Suppose that 𝒫 admits an (ε_small, ε_large, λ)-composition theorem w.r.t. g. Then any (ε_large, ks, λks')-tester for 𝒫 can be converted into an (ε_small, s, s')-tester for 𝒫.
We defer the precise definitions of the terms “(ε_small, ε_large, λ)-composition theorem" and “behaves linearly" to the body of the paper, mentioning for now that λ ∈ [0,1] measures the strength of the composition theorem: such a theorem says that the composed function requires λk times more resources to achieve ε_large error than the original function needs to achieve ε_small error. Therefore λ = 1/k can be viewed as the threshold separating weak and strong composition theorems, with λ = 1 corresponding to an optimally strong one. Note that if λ = 1 in <Ref>, then an (ε_large, s, s)-tester for all s yields an (ε_small, s, s)-tester for all s.
The formal version of <Ref> will also show that it upgrades uniform-distribution testers to strong uniform-distribution testers, and distribution-free testers to strong distribution-free testers. This stands in contrast to standard boosting in learning which can only upgrade distribution-free learners.
§.§.§ Example applications of <Ref>: New implications for junta testing
As mentioned in the introduction, juntas are among the most basic and intensively-studied function classes in property testing. Owing to two decades of research, the complexity of testing juntas in the non-tolerant setting is now fairly well-understood: we have highly-efficient adaptive <cit.>, non-adaptive <cit.>, and distribution-free testers <cit.>, all of them achieving query complexities that are essentially optimal <cit.>.
The picture is much less clear in the more challenging tolerant setting. For the uniform distribution, the best known testers require exponentially many queries <cit.>, and there are no known distribution-free testers. By generalizing <Ref> to the tolerant setting and instantiating it with our strong composition theorem for juntas, we obtain new implications, both positive and negative, that help clarify this picture.
Positive implication: boosting of tolerant junta testers. First, any tolerant junta tester for large distance parameter can now be converted into one for small distance parameters, at the price of a slight gap in the junta sizes of the Yes and No cases. For example, for both the uniform and distribution-free settings we get:
Suppose we have a poly(r)-query tester that distinguishes between
∘ Yes: f is 1/4-close to an r-junta
∘ No: f is 1/3-far from every r-junta.
Then for every ε > 0 we have a poly(r/ε)-query tester that distinguishes between
∘ Yes: f is ε-close to an r-junta
∘ No: f is Ω(ε)-far from every 1.001r-junta.
The resulting gap between the junta sizes of the Yes and No cases, while mild, is admittedly not ideal. As alluded to above, this stems from the fact that the “strength parameter" of <Ref> is λ = 0.99 and not λ = 1. Designing boosters that do not incur this gap, either via an optimally strong composition theorem or otherwise, is a natural avenue for future work.
On the other hand, we now show that even with this gap, <Ref> already carries with it an interesting consequence. This consequence crucially relies on our composition theorem for juntas being strong; the proof would not have gone through had the strength parameter of <Ref> only been λ = 1/k.
Negative implication: NP-hardness in the distribution-free setting. This implication concerns the time rather than query complexity of testers. The same proof of <Ref> also converts a poly(r,n)-time tester into a poly(r, 1/ε, n)-time tester. Implicit in the work of Hancock, Jiang, Li, and Tromp <cit.> is an NP-hardness result for tolerantly testing juntas in the distribution-free setting. One downside of their result is that it only holds in the regime of ε = 1/poly(n). Applying the time-analogue of <Ref>, we lift this hardness up to the standard regime of constant ε:
The following task is NP-hard under randomized reductions. Given queries to a function f : {±1}^n → {±1}, access to i.i.d. draws from a distribution 𝒟, and parameters r ∈ ℕ and ε > 0, distinguish between:
∘ Yes: f is 1/4-close under 𝒟 to an r-junta;
∘ No: f is 1/3-far under 𝒟 from every r-junta.
This implies a fairly dramatic separation between the non-tolerant versus tolerant versions of the problem. The recent poly(r)-query non-tolerant testers <cit.> are also time efficient, running in poly(r, n) time. <Ref> shows that any tolerant tester, regardless of query efficiency, must have time complexity that is as bad as that of SAT: e.g. if SAT requires randomized exponential time, then so does any tolerant tester.
In fact, our actual result is stronger than as stated in <Ref>: we prove that the task is NP-hard even if the Yes case states that f is 0-close under 𝒟 to an r-junta. We therefore show that the testers of <cit.> are quite fragile in the sense that they break if the Yes case in the definition of non-tolerant testing is changed from “f is an r-junta" to “f is 0-close under 𝒟 to an r-junta".
§ OTHER RELATED WORK
O'Donnell's generalization of Yao's XOR lemma.
Yao's XOR lemma states that if f is ε-hard against circuits of size s, meaning every size-s circuit differs from f on at least an ε-fraction of inputs, then XOR_k∘ f is (1/2 + 1/2(1-2ε)^k + δ)-hard against circuits of size s' where
s' = Θ(δ²/log(1/ε))· s.
The (1-2ε)^k term in the resulting inapproximability factor agrees precisely with the (univariate) noise stability of XOR_k at ρ = 1-2ε. In <cit.> O'Donnell showed that this is no coincidence. He proved a far-reaching generalization of Yao's XOR lemma that allows for an arbitrary combining function g : ^k → instead of XOR, and showed that the resulting inapproximability of g∘ f is given by the “expected bias" of g, a quantity that is closely related to the (univariate) noise stability of g.
Like Yao's XOR lemma, <cit.>'s composition theorem is weak in the sense that the hardness of g∘ f only holds against size s' circuits where s' ≪ s. (In fact, <cit.> incurs an additional multiplicative loss of k in the resulting circuit size.) Our composition theorem concerns a different resource, juntas instead of circuits, and as emphasized in the introduction, our main focus is on proving a composition theorem that is strong in the sense of amplifying both the amount of resource required and the inapproximability factor.
Both our work and <cit.> utilize Fourier analysis in our proofs, which is to be expected given the centrality of noise stability to both works. That aside, our overall approach and techniques are entirely different from <cit.>'s—necessarily so, as we elaborate next.
Hardness amplification via boosting.
In <cit.> Klivans and Servedio observed that most known hardness amplification results are proved via a boosting-type argument. For example, for Yao's XOR lemma and <cit.>'s generalization of it, one proceeds by contradiction: one assumes that XOR_k∘ f can be mildly approximated by a size-s' circuit C (in the language of boosting, C is a weak hypothesis for XOR_k ∘ f), and one constructs a larger circuit C^⋆ of size s that well-approximates f (i.e. C^⋆ is a strong hypothesis for f). In boosting, the strong hypothesis is built out of many weak hypotheses; likewise, in Yao's XOR lemma the size-s circuit C^⋆ is built out of many size-s' circuits that are like C. The work of <cit.> formalizes this connection.
From this perspective, it becomes clear why such approaches are fundamentally limited to weak composition theorems where s' ≪ s. Strong composition theorems therefore necessitate a different tack, and indeed our proof proceeds via the forward implication instead of the contrapositive: we reason directly about the inapproximability of g∘ f under the assumption about the inapproximability of f. Somewhat ironically, our second main contribution is then an application of strong composition theorems to the boosting of property testers, which goes in the opposite direction to <cit.>'s “Boosting ⇒ Hardness Amplification" observation above.
Independent work of Chen and Patel <cit.>. A recent work of Chen and Patel also gives new lower bounds for tolerant junta testing. For the problem of testing whether an unknown function is ε_1-close to or ε_2-far from a k-junta under the uniform distribution, they prove a query lower bound of k^Ω(log(1/(ε_2-ε_1))), which is superpolynomial when the gap ε_2-ε_1 is subconstant. This yields the first superpolynomial query complexity separation between tolerant and non-tolerant testing for a natural property of boolean functions.
Their result is incomparable to <Ref> in several respects. We give a time lower bound when the gap ε_2-ε_1 is a fixed constant in the distribution-free setting. Being an NP-hardness result, our lower bound is conditional whereas theirs is unconditional.
§ DISCUSSION AND FUTURE WORK
Complexity measures can behave in highly counterintuitive ways under composition, which makes composition theorems, and strong composition theorems in particular, tricky to prove.
A motivating goal of this work is to develop an understanding of strong composition theorems from first principles, and hence our focus on junta complexity, perhaps the most basic complexity measure of a function. We are optimistic that our techniques can apply to other measures, though we believe that as in this work, much of the challenge will lie in first figuring out the right statement to prove.
Consider for example decision tree complexity, a natural next step from junta complexity. There are existing strong XOR lemmas for decision tree complexity, but they come with limitations and do not appear to be the final word. (Briefly, the XOR lemma of <cit.> is only strong when the initial inapproximability factor ε_small is at least a constant, and the strong XOR lemma of <cit.> only holds for decision trees that are allowed to “abort".) Indeed, Shaltiel <cit.> has shown that certain hoped-for strong XOR lemmas for decision tree complexity are false, though as he remarked, his counterexample “seems to exploit defects in the formation of the problem rather than show that our general intuition for direct product assertions is false".
As for our second main result, the general connection between strong composition theorems and the boosting of property testers, we believe that it adds compelling algorithmic motivation to the study of composition theorems, a topic traditionally considered to be mostly of complexity-theoretic interest. Likewise, we hope that our work spurs future research on this new notion of boosting for property testers, a notion that we believe is of interest independent of the connections to composition theorems. For example, an ambitious goal for future work is to broadly understand when and how a tester for constant distance parameter can be automatically upgraded into one with the optimal -dependence, as well as the associated costs of such a transformation.
§ PRELIMINARIES
Distributions and random variables. We use bold font (e.g. 𝐱 ∼ 𝒟) to denote random variables.
For any set S, we use 𝐱 ∼ S as shorthand for 𝐱 ∼ Unif(S) where Unif(·) denotes the uniform distribution. Of particular importance to this work will be μ-biased distributions over the Boolean hypercube.
For any μ ∈ (-1,1), we use π_μ to denote the unique distribution over {±1} with mean μ. Formally, for 𝐲 ∼ π_μ,
𝐲 = 1 with probability (1 + μ)/2
𝐲 = -1 with probability (1 - μ)/2.
Similarly, for ν ∈ [-1,1]^k, we use π_ν to denote the product distribution π_{ν_1} × ⋯ × π_{ν_k}.
Fix some bias μ ∈ (-1,1). For any ρ⃗ ∈ [0,1]^k and y ∈ {±1}^k, we write 𝐳 ∼_ρ⃗ y to denote that for each i ∈ [k], 𝐳_i is independently drawn as
𝐳_i = y_i with probability ρ⃗_i
𝐳_i = an independent draw from π_μ with probability 1 - ρ⃗_i.
Whenever we use the above notation, the choice of μ will be clear from context. This gives the following more succinct way to express <Ref>, defining multivariate noise stability:
Stab_{μ,ρ⃗}(g) ≔ E_{𝐲 ∼ (π_μ)^k, 𝐳 ∼_ρ⃗ 𝐲}[g(𝐲) g(𝐳)].
Some useful sets. For any integers a ≤ b, we use [a,b] as shorthand for the set {a, a+1, …, b}. Similarly, for b ≥ 1, we use [b] as shorthand for the set [1,b]. For any set S and ℓ ≤ |S|, we use (S choose ℓ) to denote all subsets of S with cardinality ℓ.
Junta complexity. For any function f: {±1}^n → {±1} and S ⊆ [n], we say that f is an S-junta if for all x, y ∈ {±1}^n for which x_i = y_i whenever i ∈ S it holds that f(x) = f(y). With a slight abuse of notation, when r ∈ [n] is an integer, we say that f is an r-junta if there is a set S with |S| ≤ r for which f is an S-junta.
Advantage.
For any functions f, g : {±1}^n → {±1} and distribution 𝒟 over {±1}^n, we define
adv_𝒟(f,g) ≔ E_{𝐱 ∼ 𝒟}[f(𝐱) g(𝐱)].
With a slight abuse of notation, we define for f : {±1}^n → {±1} and S ⊆ [n],
adv_𝒟(f,S) ≔ max_{S-junta g : {±1}^n → {±1}} adv_𝒟(f,g).
Similarly, for r ∈ [n],
adv_𝒟(f,r) ≔ max_{r-junta g : {±1}^n → {±1}} adv_𝒟(f,g).
When the base distribution is clear, we will drop it from our notation. Furthermore, for any function f:^n → and S ⊆ [n] or r ∈ [n], we use f̃_S and f̃_r to denote the S-junta and r-junta respectively maximizing the above two advantages.
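As a concrete reference point, the short Python sketch below (an illustration we add here, not from the original text) computes _(f,S) and _(f,r) exactly by enumeration, using the fact—established later in the paper—that the best S-junta outputs the sign of f's conditional mean on each restriction to S. It is only feasible for small n.

```python
import itertools
from collections import defaultdict

def adv_set(f, dist, n, S):
    """Adv_D(f, S): advantage of the best S-junta against f under `dist`.
    f maps tuples in {-1,+1}^n to {-1,+1}; dist maps those tuples to probabilities."""
    mass = defaultdict(float)  # restriction of x to S  ->  sum of dist(x) * f(x)
    for x in itertools.product((-1, 1), repeat=n):
        key = tuple(x[i] for i in S)
        mass[key] += dist[x] * f(x)
    # The best S-junta outputs the sign of the conditional mean on each block,
    # so its advantage is the total absolute signed mass over the blocks.
    return sum(abs(v) for v in mass.values())

def adv_r(f, dist, n, r):
    """Adv_D(f, r): maximize over all coordinate subsets of size r (monotone in r)."""
    return max(adv_set(f, dist, n, S) for S in itertools.combinations(range(n), r))

if __name__ == "__main__":
    n = 3
    uniform = {x: 1 / 2**n for x in itertools.product((-1, 1), repeat=n)}
    parity = lambda x: x[0] * x[1] * x[2]
    print(adv_r(parity, uniform, n, 2))  # 0.0: no 2-junta beats a coin flip
    print(adv_r(parity, uniform, n, 3))  # 1.0: the full parity
```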
Function composition.
For a function f: ^n →, its direct product f^⊗ k:*^n^k→^k is defined as
f^⊗ k(x^(1), …, x^(k)) = (f(x^(1)), …, f(x^(k))).
For any g:^k →, we use g ∘ f:*^n^k→ as shorthand for g∘ f^⊗ k, meaning,
(g∘ f)(x^(1), …, x^(k)) = g(f(x^(1)), …, f(x^(k))).
Vector powers. For any vector v ∈^k and set S ⊆ [k], we'll use the notation v^S as shorthand for
v^S ∏_i ∈ S v_i.
§.§ Fourier Analysis
Our proof of <Ref> will make heavy use of Fourier analysis over the μ-biased hypercube, (π_μ)^k. In this section, we will review relevant definitions and facts. A more complete exposition is given in <cit.>.
For any μ∈ (-1,1), we define ϕ_μ(x) x-μ/σ where σ√(1 - μ^2). Every g: ^k → can be uniquely decomposed as
g(y) = ∑_S ⊆ [k]ĝ_μ(S) ∏_i ∈ Sϕ_μ(y_i) where ĝ_μ(S) = _∼ (π_μ)^k*g() ∏_i ∈ Sϕ_μ(_i).
This decomposition has a number of useful properties stemming from the fact that transforming g from its representation as a truth table to its Fourier coefficients ĝ_μ(S) is an orthonormal transformation.
[Basic facts about the Fourier decomposition]
* Plancherel's theorem: For any g, h: ^k → and μ∈ (-1,1),
_∼ (π_μ)^k[g()h()] = ∑_S ⊆ [k]ĝ_μ(S)ĥ_μ(S).
* Parseval's theorem: For any g: ^k → and μ∈ (-1,1),
_∼ (π_μ)^k[g()^2] = ∑_S ⊆ [k]ĝ_μ(S)^2.
In particular, when g has a range of , Parseval's theorem guarantees that the sum of its squared Fourier coefficients is 1. As a result, the following distribution is well defined.
For any g: ^k → and bias μ∈ (-1,1), the spectral sample of g, denoted _μ(g), is the probability distribution over subsets of [k] in which the set S has probability ĝ_μ(S)^2.
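For intuition, the following sketch (illustrative code we add, assuming the conventions of this section) computes the μ-biased Fourier coefficients of a small g by brute-force expectation; Parseval's theorem then lets the squared coefficients be read off as the spectral sample.

```python
import itertools
import math

def biased_fourier(g, k, mu):
    """Return {S: g_hat_mu(S)} for g: {-1,+1}^k -> {-1,+1}, by direct expectation."""
    sigma = math.sqrt(1 - mu**2)
    phi = lambda b: (b - mu) / sigma
    subsets = (frozenset(T) for r in range(k + 1) for T in itertools.combinations(range(k), r))
    coeffs = {}
    for S in subsets:
        total = 0.0
        for y in itertools.product((-1, 1), repeat=k):
            p = math.prod((1 + mu) / 2 if b == 1 else (1 - mu) / 2 for b in y)
            total += p * g(y) * math.prod(phi(y[i]) for i in S)
        coeffs[S] = total
    return coeffs

if __name__ == "__main__":
    and3 = lambda y: 1 if all(b == 1 for b in y) else -1
    c = biased_fourier(and3, k=3, mu=0.25)
    print(sum(v**2 for v in c.values()))               # ~1.0 by Parseval
    spectral_sample = {S: v**2 for S, v in c.items()}  # the distribution S_mu(g)
```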
The Fourier decomposition gives a concise way to represent important quantities, as in the following results.
For any μ∈ (-1,1) and ∈ [0,1]^k, _μ, can be related to g's μ-biased Fourier decomposition as,
_μ, (g) = ∑_S ⊆ [k]ĝ(S)^2 ^S = _∼_μ(g)[()^].
We define g^()(y) _ y[g()]. Then, by Plancherel's theorem,
_μ, (g) = _∼ (π_μ)^k[g() g^()()] = ∑_S ⊆ [k]g_μ(S) g^()_μ(S).
Next, we compute the Fourier decomposition of g^().
g^()_μ(S) = _∼ (π_μ)^k*g^()() ∏_i ∈ Sϕ_μ(_i)
= _∼ (π_μ)^k, *g() ∏_i ∈ Sϕ_μ(_i)
= _∼ (π_μ)^k, *g() ∏_i ∈ Sϕ_μ(_i)(,) distributed identically to (, )
= _∼ (π_μ)^k*g() ·_*∏_i ∈ Sϕ_μ(_i).
Applying the independence of _1, …, _k conditioned on and that [ϕ_μ(_i)] = _i ϕ_μ(_i),
g^()_μ(S) = _∼ (π_μ)^k*g() ·∏_i ∈ S_i ϕ_μ(_i)
= ()^S ·_∼ (π_μ)^k*g() ·∏_i ∈ Sϕ_μ(_i) = ()^S g_μ(S).
Putting the above together,
_μ, (g) = ∑_S ⊆ [k]ĝ_μ(S)^2 ()^S.
One immediate corollary of the above is that multivariate noise stability is monotone.
For any μ∈ (-1,1), g:^k →, and , ρ⃗'⃗∈ [0,1]^k satisfying _i ≤ρ⃗'⃗_i for all i ∈ [k],
_μ, (g) ≤_μ, ρ⃗'⃗(g).
Recall that for any ν∈ [-1,1]^k, the distribution π_ν is the unique product distribution supported on ^k with mean ν. The Fourier decomposition of g also gives a useful way to compute _∼π_ν[g()].
For any g: ^k →, μ∈ (-1,1), and ν∈ [-1,1]^k,
_∼π_ν[g()] = ∑_S ⊆ [k]ĝ_μ(S) ∏_i ∈ Sϕ_μ(ν_i).
We expand g into its Fourier decomposition
[g()] = ∑_S ⊆ [k]ĝ_μ(S) *∏_i ∈ Sϕ_μ(_i)Linearity of expectation
= ∑_S ⊆ [k]ĝ_μ(S)∏_i ∈ S*ϕ_μ(_i)_1, …, _k are independent
= ∑_S ⊆ [k]ĝ_μ(S)∏_i ∈ S*_i - μ/σDefinition of ϕ_μ
= ∑_S ⊆ [k]ĝ_μ(S)∏_i ∈ Sϕ_μ(ν_i). Linearity of expectation
§ A STRONG COMPOSITION THEOREM FOR JUNTAS
In this section, we characterize the junta size required to approximate g ∘ f in terms of the multivariate noise stability of g, and the junta size required to approximate f.
For any g: ^k →, f: ^n → and base distribution over ^n, let μ = _∼[f()].
* Lower bound on advantage: For any approximators q^(1), …, q^(k): ^n →, define the lower normalized correlations, for each i ∈ [k] as
α_i max*0, _(f, q^(i))^2 - μ^2/1 - μ^2.
Then, there is an h:^k → for which
_^k(g∘ f, h (q^(1), …, q^(k))) ≥_μ, α(g).
* Upper bound on advantage: For any S_1,…, S_k, define the upper normalized correlation as
β_i max*0,_(f, S_i) - μ^2/1 - μ^2,
construct S ⊆ [n] × [k] by taking S_1 from the first block, S_2 from the second block, and so on (formally S ∪_i ∈ [k], j ∈ S_i{(j,i)}). Then,
_^k(g∘ f, S) ≤√(_μ, β(g)).
Our goal is to understand the error of the best R-junta approximating g ∘ f. <Ref> says that for any way to partition R = r_1 + ⋯ r_k, the approximator h (f̃_r_1, …, f̃_r_k) achieves nearly optimal advantage across all R-juntas that partition their budget this way. Of course, by maximizing both sides across all partitions, we can conclude that there is some partitioning and function h for which h (f̃_r_1, …, f̃_r_k) has nearly optimal advantage among all R-juntas. Indeed, as a simple corollary of <Ref>, we can show that the error of the optimal canonical composed form approximator is within a factor of 4 of the optimal approximator. Recall that _(q_1,q_2) = _∼[q_1() ≠ q_2()] and is related to advantage via the equality = 1 - 2·.
For any g: ^k →, f:^n →, junta budget R, and base distribution , there is an h:^k → and partition of the budget r_1 + ⋯ + r_k = R for which
_^k(g∘ f, h (f̃_r_1, …, f̃_r_k)) ≤ 4 ·_^k(g∘ f, R).
When μ = 0, the guarantee of <Ref> can further be given in the concise form of <Ref>: For an appropriately chosen ∈ [0,1]^k,
_ρ⃗(g)^2 ≤Advantage of optimal canonical composed form approximator
≤Advantage of optimal approximator≤√(_ρ⃗(g)).
We include the proofs of <Ref> and <Ref> in <Ref>.
§.§ Proof of the lower bound on advantage
In this subsection, we show that (x_1, …, x_k) → h(f̃_r_1(x_1), …, f̃_r_k(x_k)) is close to the best R-junta approximator for g ∘ f. Here, the function h can be different than g, and this is necessary as shown in the counterexample to conjecture 2 in <Ref>.
For any g:^k →, f:^n →, and approximators q^(1), …, q^(k), there is some h:^k → for which
_^k(g∘ f, h ∘ (q^(1), …, q^(k))) ≥_μ, α(g),
where μ = _∼[f()] and for each i ∈ [k],
α_i max*0, (f, q^(i))^2 - μ^2/1 - μ^2.
Note α_i naturally interpolates between 0 and 1. Setting q^(i) to the better of the constant -1 or the constant +1 function will lead to α_i = 0, while setting q^(i) = f gives α_i = 1.
§.§.§ Characterizing the advantage of composed form approximators
To ease notation, we begin with a simpler setting. Suppose we use the same budget, r = R/k, in each of the k pieces. Our goal is to understand
max_h:^k →(g∘ f, h∘f̃_r)
in terms of the noise sensitivity of g and (f, f̃_r). To do so, we will consider unbalanced noise stability.
For any x ∈^k, we use the notation x to denote that for each i ∈ [k], _i is independently drawn as
* If x_i = -1, with probability a, we set _i = x_i and otherwise set _i = -x_i
* If x_i = 1, with probability b, we set _i = x_i and otherwise set _i = -x_i.
For any g,h:^k →, μ∈ [-1,1] and a,b ∈ [0,1], we define the unbalanced noise stability as
_μ, (a,b)(g,h) = _∼ (π_μ)^k, [g()h()].
We refer to the above notion as unbalanced because when drawing x, the probability of the i^th coordinate flipping from -1 to 1 and from 1 to -1 may differ. Unbalanced noise stability is useful in our setting due to the following proposition.
For any f, f̃: ^n → and g,h:^k →,
_∼^k[(g ∘ f)() · (h ∘f̃)()] = _μ, (a,b)(g,h),
where
μ_∼[f()],
a _∼[f̃() = -1 | f() = -1],
b _∼[f̃() = 1 | f() = 1].
Draw ∼^k and then define f^⊗ k(), f̃^⊗ k(). Clearly,
_∼^k[(g ∘ f)() · (h ∘f̃)()] = [g() h()].
Furthermore, the distribution of , is equivalent to if we drew ∼ (π_μ)^k,. The above quantity therefore matches the definition of _μ, (a,b)(g,h).
§.§.§ Unbalanced noise stability behaves strangely
The most basic requirement of our approximation for g ∘ f is that it have advantage at least 0, as either the constant -1 or the constant +1 function is guaranteed to have such an advantage. Indeed, in the balanced case, it is well known that the approximation will satisfy this basic requirement even if we take h = g.
For any g:^k → and a ∈ [0,1/2],
_0, (a,a)(g,g) ≥ 0.
However, in the unbalanced case, this basic requirement no longer holds.
For any k ≥ 0, and a,b ∈ [0,1] for which |a-b| ≥ 0.01, there is a function g:^k → for which
_0, (a,b)(g,g) ≤ -(1-2^-Ω(k)).
Without loss of generality, we assume b ≥ a + 0.01. We define
g(x)
1 if ∑_i ∈ [k]x_i ≥ 0.005k,
-1 otherwise.
Draw ∼ (π_μ)^k,. Then,
*∑_i ∈ [k]_i = 0 , *∑_i ∈ [k]_i = k(b-a).
Furthermore, a standard application of Hoeffding's inequality implies that
[g() = 1] ≤ 2^-Ω(k) , [g() = -1]≤ 2^-Ω(k).
By a union bound, with probability at least 1 - 2^-Ω(k), we have that both g() = -1 and g() = 1. This implies the desired result.
§.§.§ Unbalanced noise stability behaves well if we use the best h
Surprisingly, we show that if we use the best h, our approximation does meet this most basic requirement. Furthermore, we can relate it to the classical notion of balanced noise stability. The below Lemma directly implies <Ref>.
For any g:^k → and distribution over , each in ^k satisfying,
* The pairs (_1, _1), …, (_k, _k) are independent of one another.
* The means satisfy [_1] = ⋯ = [_k] = μ.
Define the correlations α_1, …, α_k as
α_i max*0,[_i _i]^2 - μ^2/1 - μ^2.
Then, there is an h:^k → for which
[g()h()] ≥_μ, α(g).
Comparing to <Ref>, if μ = 0, then α_i = max(0,1-a-b) for all i ∈ [k]. Since _μ, α(g) ≥ 0 whenever α≥ 0, <Ref> shows that the phenomenon in <Ref> cannot occur if we use the best approximator h.
The following Lemma will be useful in the proof of <Ref>.
For any function g: ^k →, let _1, …, _k be independent random variables each with mean μ and supported on [-1,1]. Then,
_*_∼π_[g()]^2 = _μ, ([ϕ_μ(_1)^2],
…, [ϕ_μ(_k)^2])(g).
We'll use the μ-biased Fourier expansion of g. Applying <Ref>,
_*_∼π_[g()]^2 = _**∑_S ⊆ [k]ĝ(S) ∏_i ∈ Sϕ_μ(_i)^2
= ∑_S_1, S_2 ⊆ [k]ĝ(S_1)ĝ(S_2)*∏_i ∈ S_1ϕ_μ(_i)∏_i ∈ S_2ϕ_μ(_i).
We claim that, in the above sum, any term in which S_1 ≠ S_2 is equal to 0. Let S_1 S_2 denote the symmetric difference of S_1 and S_2. Then, due to the independence of _1, …, _k,
*∏_i ∈ S_1ϕ_μ(_i)∏_i ∈ S_2ϕ_μ(_i) = ∏_i ∈ S_1 ∩ S_2[ϕ_μ(_i)^2] ∏_i ∈ S_1 S_2[ϕ_μ(_i)].
Since the mean of _i is μ, [ϕ_μ(_i)] = ϕ_μ(μ) = 0. If S_1 ≠ S_2, there is at least one element in S_1 S_2, and so the term is 0. We are therefore left with,
_*_∼()[g()]^2 = ∑_S ⊆ [k]ĝ(S)^2∏_i ∈ S*ϕ_μ(_i)^2.
This is exactly the Fourier expansion for the claimed result.
We'll also use the following proposition.
For any random variable bounded on [-1,1] almost surely and with mean μ,
max*0,[]^2 - μ^2/1 - μ^2≤[ϕ_μ()^2] ≤[] - μ^2/1 - μ^2 .
We expand, using linearity of expectation,
[ϕ_μ()^2] = *( - μ)^2/1 - μ^2 = [ρ^2] - 2μ[] + μ^2/1 - μ^2.
Since [] = μ, we have that [ϕ_μ()^2] = [^2] - μ^2/1 - μ^2. Therefore, by Jensen's inequality,
[]^2 - μ^2/1 - μ^2≤[ϕ_μ()^2].
Furthermore, since ^2 ≤,
[ϕ_μ()^2] ≤[] - μ^2/1 - μ^2.
Lastly, [ϕ_μ()^2] ≥ 0 follows from non-negativity.
Finally, we are ready to prove <Ref>.
For any y ∈^k, we define
g_(y) = [g() | = y].
Then, setting h(y) = sign(g_(y)),
[g()h()] = _**g_()≥_**g_()^2.
Note that, conditioning on = y, the distribution of is still product. Let ν(y) be the mean of this distribution, so that
g_(y) = _∼π_ν(y)*g().
By <Ref>,
_**_∼π_ν()*g()^2 = _μ, ([ϕ_μ(ν()_1)^2], …, [ϕ_μ(ν()_k)^2](g).
For each i ∈ [k],
[ϕ_μ(ν()_i)^2] ≥max*0,_[ν()_i]^2 - μ^2/1 - μ^2<Ref>
≥max*0,_[_iν()_i]^2 - μ^2/1 - μ^2x≥ cx when c ∈
= max*0,_,[_i_i]^2 - μ^2/1 - μ^2Definition of ν(y)
= α_i.
Putting all of the above together,
[g()h()] ≥_μ, ([ϕ_μ(ν()_1)^2], …, [ϕ_μ(ν()_k)^2](g)
≥_μ, ρ(g),
where the final inequality follows from the monotonicity of noise stability.
§.§ Proof of the upper bound on advantage
In this section, we prove the following.
For any g: ^k→, f:^n →, μ_∼[f()], and S_1,…, S_k, define the upper normalized correlation as
β_i _(f, S_i) - μ^2/1 - μ^2.
Construct S ⊆ [n] × [k] by taking S_1 from the first block, S_2 from the second block, and so on (formally S ∪_i ∈ [k], j ∈ S_i{(j,i)}). Then,
_^k(g∘ f, S) ≤√(_μ, β(g)).
To begin with, we rewrite advantage in the following form.
For any function q: ^m →, distribution over ^m, and S ⊆ [m], define
q_S, ^(x) _∼[q() |_S = x_S],
where y_S = x_S is shorthand for x_i = y_i for all i ∈ S. Then,
_(q, S) = _∼**q_S, ^().
Consider any S-junta h. Then,
_(q, h) = _∼[
q() h()] = _∼*_∼[q() h() |_S = _S].
Since h is an S-junta, it must classify x and y the same whenever x_S = y_S. Therefore,
(q, h) = _∼*h()_∼[q() |_S = _S]
= _∼*h()q^_S,().
To maximize the above advantage among all h, we set h(x) = sign(q^_S, (x)), in which case
(q, h) = _∼**q^_S, ().
Given <Ref>, to compute _^k(g∘ f, S), it suffices to understand the function (g ∘ f)^_S,. We proceed to transform that function into a form which is easier to understand.
In the setting of <Ref>, for any x ∈ (^n)^k, let ν(x) ∈ [-1,1]^k be the vector where
ν(x)_i _∼^k[f() | x^(i)_S_i = _S_i].
Then,
(g ∘ f)^_S,^k(x) = _∼π_ν(x)[g()].
Consider drawing ∼ (^n)^k conditioned on _S = x_S. Let = f^⊗ k(). By definition,
(g ∘ f)^_S, ^k(x) = [g()].
Therefore, we merely need to show that the distribution of is that of π_ν(x). For this it is sufficient that,
* Each _1, …, _k is independent. This follows from the fact _1, …, _k are independent, and that the restriction that _S = x_S is a disjoint restriction for each of the k components.
* For each i ∈ [k], that [_i] = ν(x)_i. This follows from the definition of ν(x)_i.
The desired result follows from the fact that π_ν(x) is the unique product distribution over ^k with mean ν(x).
We now prove the upper bound.
Let ν be as defined in <Ref>. Applying it and <Ref>,
_^k(g∘ f, S) = _∼^k**_∼π_ν()[g()]≤√(_∼^k**_∼π_ν()[g()]^2).
The inequality above is Jensen's. Consider the random variables ν()_1, …, ν()_k. They have the following two properties.
* They are independent. This is because the value of ν()_i depends on only the value of _i, which is independent of the other _j for j ≠ i.
* They each have mean μ. This is because,
[ν()_i] = *_∼[f() | (^(i))_S_i = y_S_i] = _∼[f()] = μ.
Therefore, we can use <Ref>:
_∼^k**_∼π_ν()[g()]^2 = _μ, ([ϕ_μ(ν()_1)^2],
…, [ϕ_μ(ν()_k)^2])(g).
We can further upper bound,
[ϕ_μ(ν()_i)^2] ≤[ν()_i] - μ^2/1 - μ^2<Ref>
= (f, S_i) - μ^2/1 - μ^2<Ref>
= β_i.
Putting the above together, we have that
_^k(g∘ f, S) ≤√(_μ, β(g)).
§.§ Proofs of the consequences of our strong composition theorem
In this section, we complete the proofs of <Ref> and <Ref>.
For any partition of the junta budget r_1 + ⋯ + r_k = R, let (r_1,…,r_k) be the vector,
(r_1,…,r_k)_i _D(f, r_i).
Then, applying the upper bound on advantage of <Ref> and maximizing over all possible partitions of the budget R, we have that
_^k(g∘ f, R) ≤max_r_1 + ⋯ + r_k = R√(_(r_1, …, r_k)(g)).
This completes the upper bound on the advantage of the optimal R-junta approximator of g ∘ f of <Ref>. For the lower bound on the advantage of the optimal composed form approximator, let r_1, …, r_k be the partition of budget maximizing _(r_1, …, r_k)(g). Using the lower bound of <Ref>, and using (·)^2 to refer to an elementwise squaring of a vector,
_^k(g∘ f, h (f̃_r_1, …, f̃_r_k)) ≥_(r_1,…,r_k)^2(g).
Using the Fourier expression for stability <Ref>,
_(r_1,…,r_k)^2(g) = _∼_μ(g)[((r_1,…,r_k)^2)^]
=_∼_μ(g)[((r_1,…,r_k)^)^2]
≥(_∼_μ(g)[(r_1,…,r_k)^])^2 Jensen's inequality
= _(r_1,…,r_k)(g)^2.
Therefore, there is a composed form approximator with advantage at least _(r_1, …, r_k)(g)^2.
Our proof of <Ref> uses the following.
For any α_1,…, α_m ∈ [0,1] and β_1, …, β_m ∈ [0,1], satisfying (1-α_i) ≤ 2(1-β_i) for each i ∈ [m],
1 - ∏_i ∈ [m]α_i ≤ 2* 1 - ∏_i ∈ [m]β_i .
We consider the vector β' ∈ [0,1]^m satisfying
1 - α_i = 2 · (1 - β'_i).
Note that β'_i ≥β_i, which means that
1 - ∏_i ∈ [m]β'_i ≤ 1 - ∏_i ∈ [m]β_i.
Now, consider the function q:[0,1] → [0,1] defined as
q(x) 1 - ∏_i ∈ [m]1 - x(1- α_i).
A quick calculation confirms that the second derivative of q is nonpositive, so q is concave. Furthermore, it satisfies,
q(0) = 0,
q(1) = 1 - ∏_i ∈ [m]α_i,
q(1/2) = 1 - ∏_i ∈ [m]β'_i.
We conclude,
1 - ∏_i ∈ [m]α_i ≤ 2(1 - ∏_i ∈ [m]β'_i) ≤ 2(1 - ∏_i ∈ [m]β_i),
where the first inequality uses the concavity of q and the second uses ∏_i ∈ [m]β'_i ≥∏_i ∈ [m]β_i.
Let r_1 + ⋯ + r_k = R be the partition of R used in the junta achieving minimum error relative to g ∘ f and define, for each i ∈ [k],
α_i max*0, _(f, r_i)^2 - μ^2/1 - μ^2,
β_i max*0, _(f, r_i) - μ^2/1 - μ^2,
which satisfy the relation
1-α_i ≤ 2(1 - β_i).
Applying <Ref> and the relation = 1 - /2, we have that
_^k(g∘ f, R) ≥1 - √(_μ, β(g))/2, and _^k(g∘ f, h (f̃_r_1, …, f̃_r_k)) ≤1 - _μ, α(g)/2.
Our goal is to show the following series of inequalities, which would imply the desired result,
1 - _μ, α(g) (iq 1)≤ 2(1 - _μ, β(g)) (iq 2)≤ 4(1 - √(_μ, β(g))).
The second, (inequality 2), follows from the fact that for any x ∈ [0,1], (1-x) ≤ 2(1-√(x)). For the first inequality, using <Ref>, we can express stability via the Fourier spectrum of g as
1 - _μ, α(g) = ∑_Sĝ(S)^2(1 - ∏_i ∈ Sα_i)
≤ 2∑_Sĝ(S)^2(1 - ∏_i ∈ Sβ_i) <Ref>, 1-α_i ≤ 2(1 - β_i)
= 2(1 - _μ, β(g)).
This proves inequality 1, giving the desired result.
§ MULTIVARIATE NOISE STABILITY OF SYMMETRIC FUNCTIONS
In this section, we prove <Ref> and <Ref>, connecting the multivariate noise stability of symmetric functions to their univariate noise stability.
For any function g:^k →, a permutation σ:[k]→ [k] is an automorphism of g if for all inputs x ∈^k,
g(x) = g(x_σ(1), …, x_σ(k)).
We say g is symmetric if every permutation of [k] is an automorphism of g. Similarly, g is transitive if for all i,j ∈ [k], there is an automorphism of g sending i to j.
§.§ The upper bound on the multivariate noise stability of symmetric functions
For any symmetric g:^k →, μ∈ (-1,1), and ∈ [0,1]^k, let 1/k ·∑_i ∈ [k]_i. Then,
_μ, (g)≤_μ, (g).
Our proof of <Ref> will make heavy use of the negative association of random variables.
A set of random variables _1, …, _m supported on are negatively associated if for all disjoint subsets S_1, S_2 ⊆ [m] and S_1-juntas f_1:^m →, S_2-juntas f_2:^m → both monotonically nondecreasing,
[f_1()f_2()] ≤[f_1()][f_2()].
For our purposes, we will only need a few useful facts about negatively associated random variables given in <cit.> (see also <cit.> for a useful overview).
[Permutation distributions are negatively associated, <cit.>]
For any z_1, …, z_m ∈, draw a uniformly random permutation :[m] → [m] and set _i = z_(i) for each i ∈ [m]. Then, _1, …, _m are negatively associated.
[Subsets of negatively associated random variables are negatively associated]
For any 2 ≤ m' ≤ m, if _1, …, _m are negatively associated, then _1, …, _m' are also negatively associated.
[Product consequence of negative association]
For any negatively associated _1, …, _m and nondecreasing f:→_≥ 0,
*∏_i ∈ [m]f(_i)≤∏_i ∈ [m]*f(_i).
Given the above, facts about negative associated random variables, we can now prove <Ref>.
We expand _μ, (g) using the Fourier spectrum of g (<Ref>),
_μ, (g) = _∼_μ(g)[()^].
Let ℓ be distributed the same as || for ∼_μ(g). Then,
_μ, (g) = _*_∼_μ(g)[()^| || = ℓ].
Since g is symmetric, for any |S_1| = |S_2|, ĝ(S_1) = ĝ(S_2). As a result the distribution of ∼_μ(g) conditioned on || = ℓ is simply a uniformly random size-ℓ subset of [k]. Formally,
_μ, (g) = _*_∼[k][()^].
Let _1, …, _k be a uniform random permutation of _1, …, _k. Then, the distribution of ()^ for ∼[k]ℓ is identical to that of ∏_i ∈ [ℓ]_i. By <Ref>, _1, …, _ℓ are negatively associated, and so,
_∼[k]ℓ[()^] = *∏_i ∈ [ℓ]_i(<Ref>)≤∏_i ∈ [ℓ][_i] = *^ℓ.
Therefore,
_μ, (g) ≤_**^ = _μ, (g).
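As a quick numerical illustration of the theorem (not part of the formal development), one can compare the two sides by Monte Carlo for a small symmetric function such as Majority on three bits; the estimator below resamples each coordinate independently, and the second call evaluates the stability at the arithmetic mean of the correlation vector.

```python
import numpy as np

def mc_stability(g, mu, rho, n=300_000, seed=1):
    """Monte Carlo estimate of Stab_{mu,rho}(g) with coordinatewise rho-correlated resampling."""
    rng = np.random.default_rng(seed)
    k = len(rho)
    draw = lambda: np.where(rng.random((n, k)) < (1 + mu) / 2, 1, -1)
    x, fresh = draw(), draw()
    y = np.where(rng.random((n, k)) < np.asarray(rho), x, fresh)
    return float(np.mean(np.apply_along_axis(g, 1, x) * np.apply_along_axis(g, 1, y)))

if __name__ == "__main__":
    maj3 = lambda z: 1 if z.sum() >= 0 else -1
    rho = [0.9, 0.5, 0.1]
    avg = sum(rho) / len(rho)
    print(mc_stability(maj3, 0.0, rho))        # multivariate stability
    print(mc_stability(maj3, 0.0, [avg] * 3))  # stability at the arithmetic mean: an upper bound
```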
§.§ The lower bound on the multivariate noise stability of symmetric functions
For any transitive g:^k →, μ∈ (-1,1), and ∈ [0,1]^k, let *∏_i ∈ [k]ρ⃗_i^1/k. Then,
_μ, (g)≥_μ, (g).
Note that every symmetric g is also transitive, but the reverse does not hold.
Similarly to the proof of <Ref>, let be the distribution of || when ∼_μ(g). Then,
_μ, (g) = _*_∼_μ(g)[()^| || = ].
For each S ⊆ [k], we'll use χ(S) ∈^k to denote the characteristic vector of S, meaning χ(S)_i [i ∈ S]. Then,
_μ, (g) = _*_∼_μ(g)*∏_i ∈ [k] (_i)^χ()_i | || =
= _*_∼_μ(g)*exp*∑_i ∈ [k]χ()_i log(_i) | || =
≥_*exp*_∼_μ(g)*∑_i ∈ [k]χ()_i log(_i) | || = Jensen's inequality
= _*exp*∑_i ∈ [k]log(_i) _∼_μ(g)*i ∈| || = . Linearity of expectation
Fix any i_1, i_2 ∈ [k] and level ℓ∈ [0,k]. Since g is transitive, there is an automorphism, σ, of g sending i_1 to i_2. Since σ is an automorphism of g, for any S ⊆ [k], for ∼_μ(g), [ = S] = [ = σ(S)]. As a result
_∼_μ(g)*i_1 ∈| || = ℓ = _∼_μ(g)*i_2 ∈| || = ℓ,
and so _∼_μ(g)*i ∈| || = ℓ must be the same for all i ∈ [k]. The sum of these probabilities is ℓ, meaning each is ℓ/k. This allows us to bound,
_μ, (g) ≥_*exp*∑_i ∈ [k]log(_i) ·/k
=_*∏_i ∈ [k]*_i^/k
=_*()^ = _μ, (g).
§.§ Bounding the (δ,)-noise stability of symmetric functions
Recall, from <Ref>, that the (δ,ε)-noise stability of a function g:^k→ is the quantity
max{_ρ⃗(g) : at least a δ-fraction of ρ⃗'s coordinates are at most 1-2ε}.
We prove <Ref>, restated below.
For any symmetric function g:^k →, δ∈ (0,1), and ε∈ (0,1/2), let δ' = ⌈ kδ⌉/k be δ rounded up to the nearest integer multiple of 1/k. Then, the (δ, ε)-noise stability of g is equal to _μ, ρ^⋆(g) for some ρ^⋆ satisfying
1 - 2εδ' - 4ε^2 ≤ρ^⋆≤ 1 - 2εδ'.
Since stability is monotone (<Ref>), the (δ, ε)-noise stability of g is its multivariate noise stability with a correlation vector where a δ' fraction of the coordinates are 1 - 2ε and the remainder are 1. The arithmetic mean of this vector is exactly 1 - 2εδ', and its geometric mean is (1 - 2ε)^δ'. The desired result then follows from <Ref> and the inequality
(1 - x)^c ≥ 1-cx - (1-c)x^2 ≥ 1 - cx - x^2
which holds for all c,x ∈ [0,1]. To prove this inequality, it is sufficient that q_c(x) ≥ 0 for all x,c ∈ [0,1] where
q_c(x) (1-x)^c - 1 +cx + (1-c)x^2.
To see this, we note that for any c ∈ [0,1], the function q_c(x) has roots at x = 0 and x=1. It is furthermore increasing at x = 0, and decreasing at x = 1. If q_c(x) were to be negative for any x ∈ [0,1], then, it would need to have at least 3 local extrema. However, the derivative q_c'(x) is concave, so it can only be zero at a maximum of 2 points. This proves the desired inequality. (If the reader prefers, <Ref> gives a “proof by picture".)
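As a sanity check complementing the proof by picture, the short sketch below verifies the inequality (1-x)^c ≥ 1 - cx - (1-c)x^2 numerically on a grid of (x,c) values (illustrative only).

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 1001)
cs = np.linspace(0.0, 1.0, 1001)
X, C = np.meshgrid(xs, cs)
lhs = (1.0 - X) ** C
rhs = 1.0 - C * X - (1.0 - C) * X**2
print(np.min(lhs - rhs) >= -1e-12)  # True: q_c(x) >= 0 on the whole grid
```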
§ COMPOSITION THEOREMS YIELD BOOSTERS FOR PROPERTY TESTING
§.§ A general boosting framework
Let 𝒫={𝒫_s}_s∈ be a parametrized property of Boolean functions. For a function f:^n→ and distribution 𝒟 over ^n, we write
_𝒟(f,𝒫_s)min_h∈𝒫_s_𝒟(f,h)
to denote f's distance to 𝒫_s over 𝒟. We are interested in the relaxed testing regime for size parameters s ≤ s', where we want to decide whether an unknown target function f belongs to 𝒫_s or is ε-far from 𝒫_s' under 𝒟: _𝒟(f,𝒫_s')>ε (recall <Ref>). We say that 𝒫 is (ε,s,s')-testable if there exists an algorithm for (ε,s,s')-testing 𝒫 for every distribution 𝒟. As ε→ 0, the gap between the Yes and No cases becomes smaller and (ε,s,s')-testing becomes more difficult. The main result of this section is that if 𝒫 “behaves well” under function composition, then testers for large ε can be boosted to testers for the more challenging regime of small ε. We will specialize our attention to properties which behave linearly with respect to function composition.
A parametrized property 𝒫={𝒫_s}_s∈ behaves linearly (with respect to function composition) if
f∈𝒫_s ⇒ g∘ f∈𝒫_k· s
for all g:^k→, f:^n→, and s∈.
Examples.
Being an s-junta, depth-s decision tree, depth-s formula, or degree-s polynomial are all properties of Boolean functions which behave linearly with respect to composition. As is often the case, it is straightforward to show from their definitions that these properties behave linearly. Many properties which do not a priori behave linearly can be converted into ones that do by applying an appropriate transformation to their size. For example, the property 𝒫_s={size-exp(s) decision trees} behaves linearly.
Strong composition theorems for properties.
A property 𝒫 which behaves linearly with respect to function composition is said to admit a strong composition theorem if the upper bound from <Ref> can be shown to be nearly tight. This definition generalizes the relation <ref>.
A parametrized property 𝒫={𝒫_s}_s∈ admits an (,,λ)-composition theorem with respect to g:^k→ for ,∈ (0,1) and a constant λ>0 if
_𝒟(f,𝒫_s)> ⇒ _𝒟^k(g∘ f,𝒫_λ ks)>
for all f:^n→ and distributions 𝒟 over ^n.
Strong composition theorems depend on the combining function g. For example, if g is a constant function then one would not expect the upper bound from <Ref> to be tight. For this reason, the dependence on g is made explicit in the definition of strong composition theorem.
Roughly speaking, the definition says that if a property 𝒫 behaves linearly and admits a strong composition theorem with respect to g, then composing with g turns a function in 𝒫_s into one in 𝒫_s k and turns a function slightly far from 𝒫_s into one very far from 𝒫_Θ(s k). For a fixed , having an (,,λ)-composition theorem with respect to g becomes stronger as approaches 0. In general, we are interested in (,,λ)-composition theorems when ≫. The parameter λ is built into the definition to tolerate a small amount of slack between the upper and lower bounds on g∘ f. For many applications, this constant factor is necessary. We are now equipped to state our main boosting theorem.
Let 𝒫={𝒫_s}_s∈ be a property which behaves linearly and admits an (,,λ)-composition theorem with respect to g:^k→. If 𝒫 is (,s,s')-testable in q(,s,s') queries, then it is (,s,λ^-1 s')-testable using k· q(,ks,ks') many queries.
Let be an algorithm for (,s,s')-testing 𝒫. Given queries to a function f:^n→ and random samples from a distribution 𝒟 over ^n, we (,s, λ^-1 s')-test 𝒫 using the procedure in <Ref> where is given an instance of (,ks,ks')-testing 𝒫.
Query complexity.
The target g∘ f:^nk→ is a (, ks,ks')-testing instance for . Therefore, makes q(,ks,ks') queries to the target g∘ f:^nk→ before terminating. Our tester makes k queries to f for each query to g∘ f. So our tester for f makes k· q(,ks,ks') queries in total.
Correctness. In the Yes case, f∈𝒫_s. We then have g∘ f∈𝒫_sk since 𝒫 behaves linearly. This ensures that outputs Yes. In the No case, _𝒟(f,𝒫_s'/λ)>. We then have _^k(g∘ f,𝒫_ks')>λ since 𝒫 admits an (,,λ)-composition theorem. This ensures that outputs No.
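To make the reduction concrete, here is a schematic Python sketch of the wrapper behind <Ref>; base_tester, query_f, and sample_D are placeholders for the assumed weak tester and oracle access, and all bookkeeping (error parameters, junta sizes) is left implicit.

```python
import random

def boosted_tester(base_tester, query_f, sample_D, g, k):
    """Run the weak tester on g∘f over D^k by simulating its oracles:
    each query to g∘f is answered with k queries to f, and each sample
    from D^k is assembled from k independent samples from D."""
    def query_g_of_f(xs):                       # xs: a tuple of k points in {-1,+1}^n
        return g([query_f(x) for x in xs])

    def sample_Dk():
        return tuple(sample_D() for _ in range(k))

    # The weak tester decides the composed instance; its verdict is returned unchanged.
    return base_tester(query_g_of_f, sample_Dk)

# Toy usage with XOR as the combining function g.
if __name__ == "__main__":
    n, k = 4, 3
    f = lambda x: x[0] * x[1]                             # secretly a 2-junta
    xor = lambda bits: 1 if bits.count(-1) % 2 == 0 else -1
    sample_D = lambda: tuple(random.choice((-1, 1)) for _ in range(n))
    trivial_tester = lambda query, sample: "accept"       # stand-in for a real weak tester
    print(boosted_tester(trivial_tester, f, sample_D, xor, k))
```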
§.§ Implications for current landscape of junta testing
Our results have new implications for tolerantly testing juntas. In this regime, the Yes case of <Ref> is relaxed to only require that f is close to an r-junta over 𝒟.
Given parameters r≤ r' and ≤, queries to an unknown function f:^n→, and random samples from a distribution 𝒟 over ^n, distinguish between
* Yes: f is -close to being an r-junta under 𝒟, and
* No: f is -far from being an r'-junta under 𝒟.
In all of our applications, we will be using <Ref>, or a variant of it, with g set to _k. For this reason, we start with some useful properties about the noise stability of parity.
§.§.§ Noise stability of parity under general product distributions
For any f:^n →, distribution over ^n, junta budget R, and R-junta h,
_^k(_k ∘ f, h) ≥min_r_1+⋯+r_k=R1 - √(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i))/2.
Our proof of <Ref> will use the multivariate noise stability of parity.
For any μ∈ (-1,1), ρ⃗∈ [0,1]^k,
_μ, (_k) = ∏_i ∈ [k]*_i + (1-_i)·μ^2=∏_i ∈ [k]*1 - (1-_i)(1-μ^2).
Note that _k(y_1, …, y_k) = ∏_i ∈ [k]y_i. Therefore,
_μ, (_k) = _∼ (π_μ)^k, *∏_i ∈ [k]_i _i.
Each pair (_i, _i) are independent of another, so
_μ, (_k) = ∏_i ∈ [k]*_i _i.
The distribution of (_i, _i) can be succinctly described: With probability _i, _i = _i. Otherwise, they are each independent draws from π_μ. Therefore,
*_i _i = _i + (1-_i)·μ^2.
The desired result follows from combining the above equations.
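A small exact check of this product formula (an illustration we add; brute force over the joint law of the correlated pair, so only for tiny k):

```python
import itertools

def xor_stability_bruteforce(mu, rho):
    """Exact Stab_{mu,rho}(XOR_k): enumerate (x, y) where, independently per coordinate,
    y_i = x_i with probability rho_i and otherwise y_i is a fresh draw from pi_mu."""
    k = len(rho)
    p = lambda b: (1 + mu) / 2 if b == 1 else (1 - mu) / 2
    total = 0.0
    for x in itertools.product((-1, 1), repeat=k):
        for y in itertools.product((-1, 1), repeat=k):
            w = 1.0
            for i in range(k):
                w *= p(x[i]) * (rho[i] * (x[i] == y[i]) + (1 - rho[i]) * p(y[i]))
            sign = 1 if (x.count(-1) + y.count(-1)) % 2 == 0 else -1  # XOR_k(x) * XOR_k(y)
            total += w * sign
    return total

if __name__ == "__main__":
    mu, rho = 0.3, [0.2, 0.7, 0.9]
    closed_form = 1.0
    for r in rho:
        closed_form *= r + (1 - r) * mu**2
    print(xor_stability_bruteforce(mu, rho), closed_form)  # the two values agree
```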
We apply our strong composition theorem, <Ref>. It is stated in terms of advantage and gives
max_R-juntas h_^k(_k ∘ f, h) ≤max_r_1 + ⋯ + r_k = R√(_μ, β(r_1, …, r_k)(_k)),
where we define μ = _∼[f()], and β(r_1, …, r_k) ∈ [0,1]^k is the vector
β(r_1, …, r_k)_i = _(f, f̃_r_i) - μ^2/1 - μ^2 = 1 - 2·_(f, f̃_r_i) - μ^2/1 - μ^2.
Applying <Ref>,
max_R-juntas h_^k(_k ∘ f, h) ≤max_r_1 + ⋯ + r_k = R√(∏_i ∈ [k]*1 - *1 -1 - 2·_(f, f̃_r_i) - μ^2/1 - μ^2(1 - μ^2))
= max_r_1 + ⋯ + r_k = R√(∏_i ∈ [k]*1 - *2·_(f, f̃_r_i)/1 - μ^2(1 - μ^2))
= max_r_1 + ⋯ + r_k = R√(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i)).
The desired result follows from = 1 - /2.
§.§.§ Warmup: weak testers suffice for (0,,r,r')-testing juntas
We first boost tolerant testers in the regime where is fixed to 0 in <Ref>. This version is slightly easier to state and is also the version we will use later in proving <Ref>.
If juntas can be (0,,r,r')-tested using q(,r,r') queries, then for all k∈ and λ∈ (0,1), they can be (0,,r,λ^-1 r')-tested in k· q(,kr,kr') queries where
=1-(1-2)^(1-λ)k/2/2.
We will need to following composition theorem for juntas. It is a more precise version of <Ref> stated in terms of <Ref>.
For any λ∈ (0,1), the property of being an r-junta admits an (, ,λ)-composition theorem with respect to _k for any ≤ where
= 1-(1-2)^(1-λ)k/2/2.
Assume that f:^n→ is -far from being an r-junta over 𝒟. We would like to show that _k∘ f is -far from being a λ r k-junta over 𝒟^k where is defined as in the lemma statement. Let r_1+⋯+r_k=λ rk be the partition of the junta budgets which minimizes the expression
1 - √(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i))/2
from <Ref>. Let A_≤ r [k] denote the indices for which r_i≤ r and let A_>r=[k]∖ A_≤ r. By a counting argument, at least a (1-λ)-fraction of r_i satisfy r_i≤ r and so |A_≤ r|≥ (1-λ)k. By our assumption that f is far from being an r-junta, for these r_i, we get _𝒟(f,f_r_i)>. Therefore, we can conclude that for any λ rk-junta h:^nk→:
_𝒟^k(_k∘ f,h) ≥1 - √(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i))/2<Ref>
=1 - √(∏_i ∈ A_≤ r*1 - 2·_(f, f̃_r_i)·∏_i∈ A_>r*1 - 2·_(f, f̃_r_i))/2
≥1 - √(∏_i ∈ A_≤ r*1 - 2·_(f, f̃_r_i))/2≤1/2
> 1 - *1 - 2^(1-λ)k/2/2_𝒟(f,f_r_i)> for i∈ A_≤ r.
Since h was arbitrary, this shows that _k∘ f is -far from being a λ rk-junta.
<Ref> is stated in the non-tolerant regime. However, we note that the same theorem holds in the (0,,r,r')-testing regime. That is, under the conditions of <Ref>, if 𝒫 is (0,,s,s')-testable, then it is also (0,,s,λ^-1s')-testable. This is because if f is a 0-approximator of f over 𝒟, then g∘f is a 0-approximator of g∘ f over 𝒟^k.
<Ref> shows that the property of being an r-junta admits an (, 1-(1-2)^(1-λ)k/2/2,λ)-composition theorem. Therefore, <Ref> shows that if juntas can be (0,,r,r')-tested in q(,r,r') queries then they can be (, r,r')-tested in k· q(,kr,kr') queries where
=1-(1-2)^(1-λ)k/2/2.
§.§.§ Weak testers suffice for tolerant junta testing
If there is a q(r)-query tester that, given queries to f:^n→ and random samples from a distribution 𝒟, distinguishes between
* Yes: f is 1/4-close to an r-junta, and
* No: f is 1/3-far from every r-junta,
then for every ε>0 and λ∈ (0,1), there is a q(r/(4ε))/(4ε)-query algorithm that distinguishes between
* Yes: f is ε-close to an r-junta, and
* No: f is Ω(ε/(1-λ))-far from every λ^-1r-junta.
Let 𝒯 be a q(r)-query tester for juntas that satisfies the theorem statement. Given queries to a function f:^n→ and random samples from 𝒟, we design an algorithm for (ε, 5ε/(1-λ), r, λ^-1r)-testing f over 𝒟. The algorithm is straightforward. We choose k=1/(4ε), and run the procedure in <Ref> with g=_k:^k→ and junta size kr.
Query complexity. 𝒯 makes q(kr)=q(r/(4ε)) queries to the target _k∘ f:^nk→ before it terminates. Our tester makes k queries to f for each query to _k∘ f. Therefore, our tester makes k· q(r/(4ε))=q(r/(4ε))/(4ε) queries in total.
Correctness. For correctness, we need to show:
Yes case: if f is ε-close to being an r-junta over 𝒟, then _k∘ f is 1/4-close to being a kr-junta over 𝒟^k, and
No case: if f is 5ε/(1-λ)-far from being a λ^-1r-junta over 𝒟, then _k∘ f is 1/3-far from being a kr-junta over 𝒟^k.
Yes case.
Let f̃ be an r-junta which ε-approximates f over 𝒟. By a union bound:
_∼𝒟^k[XOR_k∘ f()≠XOR_k∘f̃()] ≤_∼𝒟^k[some f(^(i))≠f̃(^(i))]
≤ k·_𝒟(f,f̃)≤ kε = 1/4.
Since _k∘f̃ is a kr-junta, this shows that _k∘ f is 1/4-close to a kr-junta.
No case.
If f is 5ε/(1-λ)-far from being a λ^-1r-junta, then <Ref> implies that _k∘ f is
(1-(1-2ε̂)^(1-λ)k/2)/2
far from being a λ·λ^-1kr=kr-junta over 𝒟^k, where ε̂ = 5ε/(1-λ). Therefore, it is sufficient to show that (1-(1-2ε̂)^(1-λ)k/2)/2≥1/3, equivalently (1-2ε̂)^(1-λ)k/2≤ 1/3. We observe 2/((1-λ)k)≤log_3(e)· 2ε̂, which implies 3^-2/((1-λ)k)≥ e^-2ε̂≥ 1-2ε̂. It follows:
1/3≥ (1-2ε̂)^(1-λ)k/2,
which provides the desired bound.
§.§.§ Hardness of distribution-free tolerant junta testing
We prove the following which implies <Ref>.
Given queries to a function f:^n→ and random samples from a distribution 𝒟, and r≤ n, it is NP-hard under randomized reductions to distinguish between
* Yes: f is 0-close an r-junta over 𝒟, and
* No: f is 1/3-far from every Ω(rlog n)-junta over 𝒟.
We reduce from the SetCover problem.
A SetCover instance over a universe [m] is a collection of subsets 𝒮 = { S_1,…,S_n} where S_i [m]. The SetCover problem is to compute a minimal size subcollection {S_i_1,…, S_i_r} which covers the universe: [m]=S_i_1∪⋯∪ S_i_r.
SetCover is known to be hard to approximate.
Given a SetCover instance 𝒮 and a parameter r, it is NP-hard to distinguish between
* Yes: 𝒮 has a size-r set cover, and
* No: 𝒮 requires set covers of size Ω(rlog n).
Suppose we have an algorithm 𝒯_weak for testing juntas that can distinguish between the Yes and No cases in the theorem statement. In particular, there is a (0,1/3,r,Ω(rlog n))-tester for juntas. <Ref> implies that there is a (0,ε,r,Ω(rlog n))-tester, 𝒯_strong, for juntas as long as ε satisfies
1/3≤(1-(1-2ε)^(1-λ)k/2)/2. ⊛
In the reduction, we will choose appropriately and use this boosted tester to solve SetCover.
The reduction. The reduction from SetCover to junta testing is standard <cit.>. We will restate it here for convenience. Let 𝒮 = { S_1,…,S_n} be a SetCover instance over the universe [m] and define u^(1),…,u^(m)∈^n where
(u^(j))_i =
1 if j ∈ S_i
-1 otherwise.
Let 𝒟 be the uniform distribution over { u^(1),…,u^(m), (-1)^n} and let f:^n→ be the function which is the disjunction of its inputs: f = x_1 ∨⋯∨ x_n (where 1 is interpreted as true and -1 as false).
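A minimal Python sketch of this construction (our own illustration; the set representation and indices are arbitrary):

```python
def setcover_to_junta_instance(sets, m):
    """Given SetCover sets = [S_1, ..., S_n] over universe {1, ..., m}, build the
    support of D and the target f = x_1 OR ... OR x_n (with +1 read as true)."""
    n = len(sets)
    support = [tuple(1 if j in S else -1 for S in sets) for j in range(1, m + 1)]  # u^(1..m)
    support.append(tuple(-1 for _ in range(n)))                                    # (-1)^n
    f = lambda x: 1 if any(b == 1 for b in x) else -1
    # D is uniform over `support`; a size-r set cover yields an r-junta agreeing with f on it.
    return support, f

if __name__ == "__main__":
    sets = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
    support, f = setcover_to_junta_instance(sets, m=4)
    cover = (0, 2)                                   # {1,2} and {3,4} cover the universe
    g = lambda x: 1 if any(x[i] == 1 for i in cover) else -1
    print(all(f(x) == g(x) for x in support))        # True: a 2-junta matching f on supp(D)
```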
We choose k=Θ(m) so that <ref> holds with Ω(1/m)≤ε<1/(m+1). We then run the boosted tester 𝒯_strong on the function f and distribution 𝒟, to test if f is 0-close to an r-junta or ε-far from being an Ω(rlog n)-junta (where the parameters r and Ω(rlog n) correspond to the SetCover parameters). Our algorithm for SetCover outputs Yes if and only if the tester accepts f as being 0-close to an r-junta.
Runtime.
If the tester 𝒯_weak runs in polynomial time, then since k=Θ(m) and =Θ(1/m), the tester 𝒯_strong runs in polynomial time. Queries to the target function f and random samples from can also be simulated in randomized polynomial time.
Correctness.
For correctness, we need to show:
Yes case: if 𝒮 has a size-r set cover, then f is 0-close to an r-junta over , and
No case: if 𝒮 requires set covers of size Ω(rlog n), then f is -far from being a Ω(klog n)-junta over .
Yes case.
Let S_i_1,…, S_i_r be a size-r set cover. Consider the function f̃=x_i_1∨⋯∨ x_i_r. Since these indices form a set cover of 𝒮, f̃(u^(i))=1 for all i∈ [m] and f̃((-1)^n)=-1. This shows _𝒟(f,f̃)=0. It follows that f is 0-close to an r-junta over 𝒟 since f̃ is an r-junta.
No case.
Suppose f̃ is an r'-junta satisfying _𝒟(f,f̃)< 1/(m+1). The relevant variables of f̃ must correspond to a set cover of 𝒮: if some element i∈ [m] is not covered, then f̃(u^(i))=f̃((-1)^n) and _𝒟(f,f̃)≥1/(m+1). This shows if 𝒮 requires set covers of size Ω(rlog n) then f is 1/(m+1)-far from every Ω(rlog n)-junta. In particular, since ε<1/(m+1), every Ω(rlog n)-junta is ε-far from f.
§ ACKNOWLEDGMENTS
We thank the FOCS reviewers for their helpful comments and feedback. The authors are supported by NSF awards 1942123, 2211237, 2224246 and a Google Research Scholar award. Caleb is also supported by an NDSEG fellowship, and Carmen by a Stanford Computer Science Distinguished Fellowship.
alpha
§ COUNTEREXAMPLES TO NATURAL COMPOSITION THEOREMS
§.§ Counterexample to Conjecture 1
For any odd k and n ≥ k let R = (n-1)k and be the uniform distribution over ^n. There are symmetric functions g:^k → and f:^n → for which the following holds.
* There is an R-junta h achieving,
_^k(g∘ f, h) ≤ O(1/√(k)).
* The natural strategy of dividing the budget equally achieves,
_^k(g∘ f, g ∘f̃_R/k) = 1/2.
We set g = _k to be the majority function on k bits,
g(y_1, …, y_k) =
1 if ∑_i ∈ [k] y_i ≥ 0
-1 otherwise.
and f = _n to be the parity function,
f(x_1, …, x_n) = ∏_i ∈ [n] x_i.
The following fact will be useful in giving a strategy that achieves low error.
Let _1, …, _k-1 each be uniform and independent samples from . Then, for any choice of c,
*∑_i ∈ [k-1]_i = c≤ O*1/√(k).
We now give the junta achieving low error.
Let h = _k-1∘_n. Then,
* h is an ((k-1)n ≤ R)-junta.
* h achieves,
_^k(g∘ f, h) ≤ O(1/√(k)).
Clearly h depends on only the first (k-1)n bits of its inputs, so it is an R-junta as long as (k-1)n ≤ (n-1)k, which is guaranteed by the assumption n≥ k in <Ref>. We compute h's error,
_^k(g∘ f, h) = _∼^n[_k() ≠_k-1()].
In order for _k() ≠_k-1(), it must be the case that the ∑_i ∈ [k-1]_i is -1 or 0. The desired result follows from <Ref>.
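For illustration, a quick Monte Carlo estimate of this error (our own check, assuming uniform bits; since parities of disjoint blocks are independent uniform signs, it suffices to compare the two majorities directly):

```python
import numpy as np

def disagreement_maj(k, n_samples=200_000, seed=0):
    """Estimate Pr[Maj_k(x) != Maj_{k-1}(x_1..x_{k-1})] for uniform x in {-1,+1}^k."""
    rng = np.random.default_rng(seed)
    x = rng.choice((-1, 1), size=(n_samples, k))
    maj_k = np.where(x.sum(axis=1) >= 0, 1, -1)
    maj_k_minus_1 = np.where(x[:, : k - 1].sum(axis=1) >= 0, 1, -1)
    return float(np.mean(maj_k != maj_k_minus_1))

if __name__ == "__main__":
    for k in (11, 101, 1001):
        print(k, disagreement_maj(k))  # decays roughly like 1/sqrt(k)
```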
We'll next show the natural strategy achieves advantage 0, equivalent to error 1/2.
Let f = _n and be the uniform distribution over ^n. Then,
_(f, f̃_n-1) = 0.
By <Ref>, it is sufficient to show that for any set |S| = n-1 and any x ∈^n,
_∼[f() |_S = x_S] = 0.
For any fixed x, there are two y ∈^n satisfying y_S = x_S: The first choice is y = x, and the second choice is x with a single bit flipped (the one bit not in S). One of these two choices will have a parity of +1 and one will have a parity of -1, so the average parity is 0, as desired.
For any odd k, μ = 0, and = [0,…, 0],
_μ, (_k) = 0.
For odd k, _k is an odd function, so _∼^k[_k()] = 0. Then,
_μ, (_k) = __1 ∼^k, _2 ∼^k[_k(_1)_k(_2)]
= __1 ∼^k[_k(_1)]__2 ∼^k[_k(_2)] _1, _2 independent
= 0 · 0 =0._k is odd
The following completes the proof of <Ref>.
In the setting of <Ref>,
_^k(g∘ f, g ∘f̃_R/k) = 0.
This follows from <Ref> and <Ref>.
§.§ Counterexample to Conjecture 2
For any n ≥ 10, k ∈, and R ≤ n/2, let be uniform over ^n. There are g: ^k and f:^n → for which, for all partitions r_1 + ⋯ +r_k = R,
_^k(g∘ f, g(f̃_r_1, …, f̃_r_k)) ≥ 1 - 2^-Ω(k).
<Ref> is particularly surprising in light of the fact that either the constant -1 or constant 1 functions, both of which are 0-juntas, will achieve error ≤ 1/2 with respect to g ∘ f. We begin with a probabilistic construction of f achieving the following.
For any n ≥ 10, there is an f: ^n → for which _∼^n[f()] ≤ 0.5 but, for all |S| ≤ n/2 and x ∈^n,
_∼^n[f() | = x] > 0.
Consider a random function where, for each x ∈^n, (x) ∼π_0.25. We'll show that meets the desired criteria with a strictly positive probability, proving the existence of at least one such f.
Let μ() _∼^n[()]. Then μ() is the average of 2^n independent samples of π_0.25. Applying Hoeffding's inequality,
[μ() > 0.5] ≤exp(-2 · (0.25)^2 · 2^n) = exp(-2^n/2).
Similarly, for any |S| ≤ n/2 and x ∈^n, let μ(, S, x) _∼^n[() | = x]. μ(,S,x) is the average of at least 2^n/2 independent samples of π_0.25. Once again, by Hoeffding's inequality,
[μ(,S,x) ≤ 0] ≤exp(-2 · (0.25)^2 · 2^n/2) = exp(-2^n/2/2).
Union bounding over all 2^n choices of S and 2^n choices for x, we have that meets the desired criteria with probability at least
1 - exp(-2^n/2) - 2^2nexp(-2^n/2/2).
When n ≥ 10, the above probability is strictly positive, so such an f must exist.
Let f be a function with the properties of <Ref>, and let g = And_k, which returns +1 if and only if all k of its inputs are +1. By <Ref>, for any r ≤ n/2, f̃_r is the constant +1 function. Therefore, for any r_1 + ⋯ + r_k = R, g(f̃_r_1, …, f̃_r_k) is the constant +1 function. However,
_∼^k[(g ∘ f)() = +1] = (3/4)^k.
§.§ Counterexample to Conjecture 3
There is g:^k →, f:^n →, distribution over ^n, and budget R for which no R-junta of composed form achieves optimal error among all R-Juntas for g∘ f with respect to ^k.
We'll set k = 2, g = And_2. Let p:^2 → [0,1] be defined as
p(x)
1 if x_1 = x_2 = 1,
3/4 if x_1 ≠ x_2,
3/5 if x_1 = x_2 = -1.
We begin by describing a probabilistic construction: Given the input x, the value of (x) will still be a random variable. In particular, we set n =2, and (x) is set to +1 with probability p(x) and -1 otherwise. This probabilistic construction will later be derandomized. We allow a junta budget of R = 4.
Next, we construct an optimal approximator for g ∘. Given an input x^(1), x^(2), let _1 = (x^(1)) and _2 = (x^(2)). For succinctness, we'll use p_i to refer to the [_i = 1]. Then, since g = And_2, the optimal approximator will return 1 iff p_1p_2 ≥ 1/2. For our particular the only choices for p_i are 3/5,3/4,1. As a result,
h^(opt)(p_1,p_2) =
1 if p_1 = 1 or p_2 = 1,
1 if p_1 = p_2 = 3/4,
-1 otherwise.
However, no composed form can achieve the above optimal approximator. Recall that composed form approximators are of the form h(q_1, q_2), where each q_i has range . The fact that the size of this range is 2, but there are three possible choices (3/5, 3/4, 1) for p_i, is the crux of the issue.
In more detail, of the three choices (3/5,3/4,1) for p_i, q_1 must classify at least two of them the same way. This gives three cases.
* If q_1 classifies 3/4 and 1 the same way, h(q_1, q_2) cannot distinguish between p_1 = 3/4, p_2 = 3/5 and p_1 = 1, p_2 = 3/5, and so cannot be optimal.
* If q_1 classifies 3/5 and 3/4 the same way, h(q_1, q_2) cannot distinguish between p_1 = 3/4, p_2 = 3/4 and p_1 = 3/5, p_2 = 3/4, and so cannot be optimal.
* If q_1 classifies 3/5 and 1 the same way, h(q_1, q_2) cannot distinguish between p_1 = 3/5, p_2 = 3/4 and p_1 = 1, p_2 = 3/4, and so cannot be optimal.
In all three cases composed form cannot achieve optimal error. It will always be off by some constant.
To derandomize this construction, we set n ≫ 2 sufficiently large. For each x ∈^n, we sample the value f(x) to be +1 with probability p(x_1,x_2) and -1 otherwise. Note that after randomly selecting the value of f on each input x ∈^n, f is now a deterministic function. Following the same arguments as in <Ref>, with high probability over the random choices in defining f, the error of the optimal 4-junta and of the optimal composed form 4-junta for g∘ f are within ±(n) of what they are for g ∘, where (n) goes to 0 as n →∞. Therefore, for sufficiently large n, there exists an f meeting the desired criteria.
|
http://arxiv.org/abs/2307.04334v1 | 20230710041019 | Quasicrystalline second-order topological semimetals | [
"Rui Chen",
"Bin Zhou",
"Dong-Hui Xu"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Department of Physics, Hubei University, Wuhan 430062, China
Department of Physics, Hubei University, Wuhan 430062, China
[][email protected]
Department of Physics, Chongqing University, Chongqing 400044, China
Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 400044, China
Three-dimensional higher-order topological semimetals in crystalline systems exhibit higher-order Fermi arcs on one-dimensional hinges, challenging the conventional bulk-boundary correspondence. However, the existence of higher-order Fermi arc states in aperiodic quasicrystalline systems remains uncertain. In this work, we present the emergence of three-dimensional quasicrystalline second-order topological semimetal phases by vertically stacking two-dimensional quasicrystalline second-order topological insulators. These quasicrystalline topological semimetal phases are protected by rotational symmetries forbidden in crystals, and are characterized by topological hinge Fermi arcs connecting fourfold degenerate Dirac-like points in the spectrum. Our findings reveal an intriguing class of higher-order topological phases in quasicrystalline systems, shedding light on their unique properties.
Quasicrystalline second-order topological semimetals
Dong-Hui Xu
August 12, 2023
====================================================
§ INTRODUCTION
Symmetry-protected topological phases of matter have emerged as a major new theme in modern condensed-matter physics in the past nearly two decades. While the discovery of topological insulators initially sparked interest in this field, recent focus has shifted towards exploring higher-order topological insulators <cit.>. Unlike traditional topological insulators, higher-order topological insulators exhibit unconventional bulk-boundary correspondence, allowing for the existence of gapless boundary excitations of higher co-dimensions. For example, a second-order topological insulator (SOTI) in two dimensions hosts robust gapless boundary modes localized at its zero-dimensional corners, dubbed corner modes <cit.>, while three-dimensional (3D) SOTIs support gapless boundary modes confined to their one-dimensional hinges <cit.>. In addition to higher-order topological insulators, higher-order topological semimetals have also been identified. These semimetals, including higher-order Dirac semimetals and higher-order Weyl semimetals, exhibit exotic hinge Fermi arcs that connect the projected nodes on the hinges, distinguishing them from conventional Dirac and Weyl semimetals <cit.>.
Initially, topological phases were observed in crystalline materials. However, more recently, researchers have extended these phases to aperiodic quasicrystalline systems, which lack discrete translational symmetry <cit.>. The absence of translational symmetry allows for the presence of rotational symmetries that are prohibited in crystals. This property enables the existence of new topological phases without crystalline counterparts, such as two-dimensional (2D) SOTIs protected by eightfold <cit.> and twelvefold <cit.> rotational symmetries. Moreover, a 3D time-reversal symmetry (TRS) breaking gapless topological phase hosting Weyl-like points has been proposed in a quasicrystal stack of Chern insulators <cit.>.
However, gapless phases with higher-order topology in quasicrystalline systems have yet to be discovered. This knowledge gap motivates us to explore the possibility of gapless quasicrystalline higher-order topological phases using a stacking approach with 2D quasicrystalline SOTIs. It has been demonstrated that stacking 2D topological materials provides a natural way of realizing 3D topological phases. This approach has been successful in achieving various topological phases, including Weyl semimetals <cit.>, axion insulators <cit.>, hinged quantum spin Hall insulators <cit.>, and high-Chern number quantum anomalous Hall insulators <cit.>.
In this work, we present the discovery of a quasicrystalline second-order topological semimetal (SOTSM) phase obtained by stacking 2D quasicrystalline SOTIs along the vertical direction (Fig. <ref>). The distinctive feature of the quasicrystalline SOTSM is the presence of rotation-symmetry-protected topological hinge Fermi arcs that terminate at fourfold degenerate Dirac-like points in the spectrum. The C_n^z-symmetric quasicrystalline SOTSM can support n topological hinge Fermi arcs (see the second column in Fig. <ref>), inheriting their topological nature from C_n^z-symmetric quasicrystalline SOTI hosting n corner modes (see the first column in Fig. <ref>). The number n can be four [Figs. <ref>(a) and <ref>(b)], as allowed in crystalline systems <cit.>, but it can also be eight [Figs. <ref>(c) and <ref>(d)] and twelve [Figs. <ref>(e) and <ref>(f)], which are typically forbidden in crystalline systems. Furthermore, we present the phase diagram of the stacked systems and identify a 3D quasicrystalline SOTI phase in addition to the quasicrystalline SOTSM phase. Finally, we show that the disclination-induced bound states can further reveal the topological nature of the quasicrystalline SOTSM phase.
This work is organized as follows. We first give a simple review of 2D quasicrystalline SOTI in Sec. <ref> and show a stack of it gives rise to the 3D quasicrystalline SOTSM phase with Dirac-like points in the spectrum in Sec. <ref>. A detailed discussion on Dirac-like points is presented in Sec. <ref>. Subsequently, we illustrate the phase diagram of the stacked quasicrystalline system in Sec. <ref> and investigate the disclination-induced bound state in Sec. <ref>. We summarize our conclusions and discuss possible experimental
schemes for the quasicrystalline SOTSM phase in Sec. <ref>.
§ REVIEW OF 2D QUASICRYSTALLINE SOTIS
2D quasicrystalline SOTIs had been proposed in
eightfold symmetric Ammann-Beenker-tiling (AB-tiling) quasicrystal <cit.> [Figs. <ref>(a) and <ref>(c)] and twelvefold symmetric Stampfli-tiling quasicrystal <cit.> [Fig. <ref>(e)]. The AB-tiling quasicrystal consists of two types of primitive tiles: square tiles (yellow) and rhombus tiles (green) with a small angle 45^∘. The Stampfli-tiling quasicrystal consists of three types of primitive tiles:
square tiles (yellow), regular triangle tiles (red), and rhombus tiles (green) with a small angle 30^∘.
In the tight-binding model, the lattice sites are placed on the vertices of each tile. The Hamiltonian of the 2D quasicrystalline SOTI contains two parts, H(M)=H_1st(M)+H_m <cit.>. The first part denotes a 2D first-order topological insulator protected by TRS
H_1st(M) = -∑_j≠ kZ(r_jk)/2[it_1( s _3τ _1cosϕ_jk+s _0τ _2sinϕ_jk)
+ t_2s _0τ_3] c_j^†c_k+∑_j(M+2t_2)s _0τ _3 c_j^†c_j,
where c^†_jα=(c^†_jα↑,c^†_jα↓) are electron creation operators at site j with the orbital α. t_1 and t_2 are hopping amplitudes, and M denotes the Dirac mass, together with t_2, determining the first-order topology. s_1,2,3 and τ_1,2,3 are the Pauli matrices acting on the spin and orbital spaces, respectively. s_0 is the 2× 2 identity matrix. ϕ_jk is the azimuthal angle of the bond between site j and k with respect to the horizontal direction. Z( r_jk) = e^1-r_jk/ξ is the spatial decay factor of hopping amplitudes with the decay length ξ.
The second part is a TRS breaking Wilson mass term, which is
H_m(η)=g∑_j≠ kZ(r_jk)/2cos( ηϕ_jk) s _1τ _1 c_j^†c_k,
where g and η describe the magnitude and varying period of the Wilson mass, respectively. H_m(η) are responsible for higher-order topology <cit.>. In the subsequent calculations, we fix the side length of the tiles as a=1 (white lines connecting the vertices in Fig. <ref>) and ξ=t_1=1.
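For readers who wish to reproduce the spectra numerically, the following Python sketch (our own illustration, not code released with this work) assembles H(M)=H_1st(M)+H_m(η) as a dense matrix from a supplied list of vertex positions; the tiling coordinates themselves must come from an external AB- or Stampfli-tiling generator, and the truncation radius r_cut for the exponentially decaying hoppings is an extra assumption made only for efficiency. The spectrum then follows from numpy.linalg.eigvalsh(H).

```python
import numpy as np

# Pauli matrices; 4x4 blocks act on spin (s) tensor orbital (tau) space, in that order.
s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1, -1])

def quasicrystal_soti_hamiltonian(sites, M, t1=1.0, t2=1.0, g=1.0, eta=2, xi=1.0, r_cut=3.0):
    """Dense 4N x 4N matrix for H_1st(M) + H_m(eta) on 2D vertex positions `sites`
    (array of shape (N, 2)); eta should be even (2, 4, 6) so the Wilson term is Hermitian."""
    sites = np.asarray(sites, dtype=float)
    N = len(sites)
    H = np.zeros((4 * N, 4 * N), dtype=complex)
    onsite = (M + 2 * t2) * np.kron(s0, s3)
    for j in range(N):
        H[4 * j:4 * j + 4, 4 * j:4 * j + 4] = onsite
        for k in range(N):
            if j == k:
                continue
            d = sites[k] - sites[j]
            r = np.hypot(d[0], d[1])
            if r > r_cut:                      # drop exponentially small hoppings
                continue
            Z = np.exp(1.0 - r / xi)
            phi = np.arctan2(d[1], d[0])       # azimuthal angle phi_jk of the bond
            hop = -(Z / 2) * (1j * t1 * (np.kron(s3, s1) * np.cos(phi)
                                         + np.kron(s0, s2) * np.sin(phi))
                              + t2 * np.kron(s0, s3))
            wilson = (g * Z / 2) * np.cos(eta * phi) * np.kron(s1, s1)
            H[4 * j:4 * j + 4, 4 * k:4 * k + 4] = hop + wilson
    return H
```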
For η=2,4,6, the Wilson mass gives rise to the SOTI phases in quasicrystals hosting four, eight, and twelve corner modes protected by the combined symmetry C_4^z U <cit.>, C_8^z U <cit.>, and C_12^z U <cit.>, respectively, where C_n^z is the n-fold rotational operation, and U could be the TRS operation T=i s_2τ_0 K or the mirror symmetry operation m_z=s_3 τ_0. K is the complex conjugation operator. The symmetry-protected eightfold and twelvefold corner modes, which are impossible in crystals <cit.>, are distinguishing characteristics of the 2D quasicrystalline SOTIs. Additionally, these corner modes are pinned to zero energy due to the existence of particle-hole symmetry.
The emergence of the zero-energy corner modes can be simply understood as follows <cit.>: g opens a gap in the first-order topological edge states and then induces Wilson mass kinks near the boundary. If one corner mode |ψ_c> appears at 𝐫_c, where the Wilson mass flips sign, then the C_n^z U symmetry ensures that the number of corner modes is n. This is because C_n^z U|ψ_c> is also an eigenstate of the system, localized at the corner obtained by rotating 𝐫_c by an angle of 2π/n.
§ 3D QUASICRYSTALLINE SOTSMS
3D crystalline SOTSMs have been constructed by stacking 2D crystalline SOTIs along the vertical direction <cit.>. 3D quasicrystalline SOTSM phases can be achieved in a similar manner, i.e., by periodically staking 2D quasicrystalline SOTIs with an orbital-dependent hopping t_z s_0τ_3 on each site <cit.>. After Fourier transformation applied to the vertical direction z, the 3D stacked Hamiltonian can be expressed as
H_3D=∑_k_zH(M-2t_zcos k_z).
The conduction and valence bands in this model have double degeneracy because of the presence of the combined symmetry PT of TRS and inversion symmetry <cit.>, where P=s_0τ_3 is the inversion symmetry operator. It is necessary to point out that when η=2, applying the stacked Hamiltonian to periodic cubic lattices gives rise to a 3D crystalline SOTSM <cit.> (see Appendix <ref>) with four hinge Fermi arcs connecting the projections of fourfold degenerate Dirac points that are well defined in momentum space.
§.§ η=2
We first consider a 3D quasicrystal [Fig. <ref>(b)] by stacking 2D AB-tiling quasicrystals with the square-shaped boundary [Fig. <ref>(a)] and set the varying period of Wilson mass η=2. Figure <ref>(a) shows the spectral function 𝒜 (E_F,k_z) of the 3D quasicrystalline system with open-boundary condition in the xy-plane. We can see that the bulk conduction and valence bands touch at two discrete points k_z=± k_z^1 where the energy gap is closed, indicating a semimetal phase. Importantly, fourfold degenerate zero-energy flat band boundary states emerge in the region |k_z|>k_z^1, describing hinge Fermi arc states in this semimetal phase. Figure <ref>(c) displays the probability density distribution of the zero-energy states at k_z=-2 [marked by the green star in Fig. <ref>(a)].
Figure <ref>(b) illustrates the spectral function of the quasicrystalline system with periodic boundary conditions along all the directions. The periodic boundary condition in the xy-plane is achieved by treating the system as a crystal with a supercell periodicity. Comparing to the spectral function under open boundary condition in Fig. <ref>(a), the zero-energy flat band boundary states disappear, further confirming that the zero-energy modes in between ± k_z^1 are hinge Fermi arc states. Moreover, the higher-order topology of the hinge Fermi arcs is revealed by the quantized quadrupole moment Q_xy=0.5 for |k_z|>k_z^1 [Fig. <ref>(d)]. Therefore, the system is identified as a quasicrystalline SOTSM.
The bulk spectral function versus k_z exhibits a linear dispersion near the gap closing points at ± k_z^1 [Fig. <ref>(b)]. Meanwhile, the density of states around the gap closing points is parabolic, as shown in the inset of Fig. <ref>(b), which identifies the well-known bulk signatures of Dirac points in crystalline systems <cit.>. These features suggest that the gapless points in the present system are Dirac points in quasicrystals. However, as discussed in Sec. <ref>, a more detailed analysis reveals that the situation is complex.
§.§ η=4 and η=6
Now, we come to the case of η=4 and η=6, which can give rise to 2D quasicrystalline SOTIs without crystalline counterpart <cit.>. Here, the 3D quasicrystalline systems are stacked by the AB-tiling octagonal quasicrystal [Figs. <ref>(c) and <ref>(d)] and the Stampfli-tiling dodecagonal quasicrystal [Figs. <ref>(e) and <ref>(f)], respectively. Figures <ref>(a) and <ref>(b) show the spectral function 𝒜 (E_F,k_z) of the two 3D quasicrystalline systems with open boundary condition in the xy-plane and periodic boundary condition along the vertical direction. The spectral functions look similar to that shown in Fig. <ref>(a), however, the degeneracy of zero-energy modes is different.
These zero-energy flat-band boundary modes in the region |k_z|>k_z^1 are hinge Fermi arc states traveling on the hinges of 3D octagonal/dodecagonal quasicrystals. This can be observed more clearly in Figs. <ref>(e) and <ref>(f), which show the energy spectra and the probability distributions of the zero-energy modes for fixed k_z marked by the green stars shown in Figs. <ref>(a) and <ref>(a), respectively. Apparently, the hinge Fermi arc states are inherited from the C_n^z-symmetric corner modes in quasicrystalline SOTIs, where n=8 in the AB-tiling octagonal quasicrystal and n=12 in the Stampfli-tiling dodecagonal quasicrystal.
To diagonalize the electronic structure of bulk state, we plot the spectral function under periodic boundary conditions along all the three directions in Figs. <ref>(c) and <ref>(d). As seen in the case with η=2, similar phenomena are observed, such as the disappearance of zero-energy hinge arcs, a linear dispersion along k_z, and the quadratic density of states around the gap closing points.
Therefore, our study demonstrates that stacking 2D quasicrystals can result in the emergence of an exotic topological phase of matter i.e., the quasicrystalline SOTSMs, which possesses eight and twelve hinge Fermi arcs protected by forbidden rotation symmetries in crystalline systems. Our findings highlight the potential for stacking 2D quasicrystals and expand our understanding of condensed matter physics.
§ DIRAC-LIKE POINTS
Upon initial inspection, the gap closing points near k_z=± k_z^1,2,3 shown in Figs. <ref>(b), <ref>(c) and <ref>(d) are reminiscent of the Dirac point characterized by the massless Dirac equation. They both exhibit a linear dispersion along k_z and a unique quadratic density of state near the gap closing points. However, a closer inspection of the spectrum reveals that the gap closing points in quasicrystalline SOTSMs are distinct from those in crystalline second-order topological Dirac semimetals (SODSMs).
Figure <ref>(a) shows the spectrum near the gap closing point k_z^1 in the SOTSM of η=2 under periodic boundary conditions along all the directions [see Fig. <ref>(b)]. Three band-crossing points appear, which is quite different from the crystalline SODSM phase that hosts only one band-crossing point [Fig. <ref>(e)]. Figures <ref>(b) and <ref>(e) show the wave functions of the states marked by the red and green stars in Fig. <ref>(a), respectively. One of the band crossings is dominated by the local patch containing three square tiles and two rhombus tiles [Figs. <ref>(b) and <ref>(c)], and the other band crossing is dominated by the local patch containing six rhombus tiles [Figs. <ref>(d) and <ref>(e)]. Multiple band-crossing points appear because the gap closes at different k_z for distinct kinds of local patches. This phenomenon is attributed to the absence of discrete translational symmetry in quasicrystalline systems.
For the AB-tiling octagonal quasicrystal with η=4, the spectrum opens a tiny energy gap [Fig. <ref>(f)]. The size of the energy gap decreases with increasing system size [Fig. <ref>(g)]. For the Stampfli-tiling quasicrystal with η=6, the spectrum is similar to the case with η=2, except that more band crossings appear. This is because there are more distinct patterns of local patches in the Stampfli-tiling quasicrystal.
Although the gap-closing points in quasicrystalline SOTSMs share several features with the Dirac points in crystalline SODSMs, a further check of the spectrum reveals a fine structure of the gap-closing points that originates from the absence of translational symmetry. Therefore, we dub the gap closing points in the quasicrystalline SOTSM phase Dirac-like points.
§ PHASE DIAGRAM
We present the topological phase diagram of the stacked quasicrystal system in this section.
Figures <ref>(a)-<ref>(b) show ln E_g and Q_xy as functions of the momentum k_z and the parameter M for the AB-tiling quasicrystalline square system with η=2. E_g is the value of the energy gap obtained under periodic boundary conditions along all the three directions. Each point along the white line corresponds to the gap-closing point shown in Fig. <ref>(b). For about -5.7<M<0.3, the existence of the gap closure with the accompanying topological phase transition between Q_xy=0 and Q_xy=0.5 indicates that the system corresponds to the SOTSM phase.
For about M>0.3, the system corresponds to a 3D quasicrystalline SOTI phase with a topological gap characterized by a quantized quadrupole moment Q_xy=0.5 for any k_z. For about M<-5.7, the system is a normal insulator (NI) with a topologically trivial gap.
Above we only consider the case of η=2 in the AB-tiling quasicrystal. In the cases of the AB-tiling octagonal quasicrystal with η=4 and the Stampfli-tiling dodecagonal quasicrystal with η=6, we find similar results by adjusting the parameter M, i.e., the systems also support the quasicrystalline SOTSM phase, the 3D quasicrystalline SOTI phase, and the NI phase.
§ DISCLINATION-INDUCED BOUND STATES
Disclination-induced bound states provide a potential probe of crystalline topology, which has been widely investigated in different topological systems <cit.>. Recently, disclination-induced bound states have been observed in topological crystalline insulators <cit.>, acoustic topological insulators <cit.>, and acoustic Weyl semimetals <cit.>. In this section, we study the disclination-induced bound states in the quasicrystalline SOTSM phase.
The disclination is introduced by cutting out a specific segment [the first column in Fig. <ref>] and then gluing the lattice back together [the second column in Fig. <ref>]. The two sides of the cut are glued by identifying sites related by the rotational symmetry, which is called a Volterra process <cit.>. The defect breaks the rotational symmetry locally at the center of the lattice, but the rest of the lattice preserves the rotational symmetry and is indistinguishable from the bulk of the original system without the cut.
The corresponding spectral functions of the sample geometries in Figs. <ref>(a)-<ref>(b), Figs. <ref>(c)-<ref>(d), and Figs. <ref>(e)-<ref>(f) are similar to Fig. <ref>(a), Fig. <ref>(a), and Fig. <ref>(b), respectively, except that the spatial probability distributions of the zero-energy modes differ. The colored points in Fig. <ref> display the probability distributions of the zero-energy modes in these systems with k_z=-2.
The three different disclination systems in Figs. <ref>(b), <ref>(d), and <ref>(f) each host one zero-energy mode at the disclination core, and three, seven, and eleven zero-energy modes at the hinges of the systems, respectively. Moreover, similar to the zero-energy hinge modes, the disclination modes only appear for |k_z|>k_z^1/2/3, and disappear in the regions of |k_z|<k_z^1/2/3. This further reveals that the disclination-induced bound states and the hinge Fermi arc states are consequences of the nontrivial bulk topology, and cannot be removed without topologically trivializing the bulk of the systems <cit.>. Moreover, the k_z-dependent disclination-induced bound states provide an experimental probe for the quasicrystalline SOTSM phase.
§ CONCLUSION AND DISCUSSION
In conclusion, this study has demonstrated that a stack of 2D quasicrystalline SOTIs can give rise to 3D quasicrystalline SOTSM phases. These 3D phases exhibit rotation-symmetry protected hinge Fermi arcs, which are forbidden in crystalline systems. Additionally, our calculations have shown that the stacked systems also support the 3D quasicrystalline SOTI phase, as evidenced by the phase diagram. We have proposed that the dependence of k_z on disclination-induced bound states can serve as an experimental probe for the quasicrystalline SOTSM phase.
While the quasicrystalline SOTSM shares similarities with the crystalline SODSM <cit.>, there are three main distinctions between them. Firstly, the number of C_n^z-symmetry protected hinge Fermi arcs in the quasicrystalline SOTSM is not limited to four, as observed in crystalline SODSM, but can be eight or twelve as well. Secondly, in the quasicrystalline SOTSM, the lack of translational symmetry renders the in-plane momentum ineffective as a quantum number, making it impossible to define Dirac points in momentum space, unlike in crystalline SODSM where the Dirac equation applies. Lastly, the spectrum of the quasicrystalline SOTSM exhibits a higher number of band-crossing points compared to the crystalline SODSM, a consequence of the absence of in-plane translational symmetry in the stacked quasicrystals.
Moreover, recent experiments investigating the stack of Ta_1.6Te quasicrystal layers <cit.>, along with first-principles calculations and symmetry analysis, have revealed a symmetry-protected semimetal phase and explored the topological properties of the material. This suggests that the quasicrystalline SOTSM phase can be experimentally realized in real materials. Furthermore, considering the successful experimental realization of the 2D quasicrystalline SOTI phase in electrical circuit systems <cit.>, we believe that the quasicrystalline SOTSM holds promise in metamaterials. These unique features and possibilities offer exciting prospects for the future implementation of our proposal.
D.-H.X. was supported by the NSFC (under Grant Nos. 12074108 and 12147102), the Natural Science Foundation of Chongqing (Grant No. CSTB2022NSCQ-MSX0568). R.C. acknowledges the support of the Chutian Scholars Program in Hubei Province. B.Z. was supported by the NSFC (under Grant No. 12074107), the program of outstanding young and middle-aged scientific and technological innovation team of colleges and universities in Hubei Province (under Grant No. T2020001) and the innovation group project of the natural science foundation of Hubei Province of China (under Grant No. 2022CFA012).
§ CRYSTALLINE SODSM
To make a comparative study, we investigate the 3D crystalline SODSM phase [Fig. <ref>(b)], modeled by stacking 2D crystalline SOTIs along the vertical direction [Fig. <ref>(a)]. Figures <ref>(c) and <ref>(d) show the spectral function of the crystalline system with open and periodic boundary conditions in the xy-plane, respectively. Hinge Fermi arcs appear and connect the band-closing points at k_z=± k_z^4. The results are similar to those in Figs. <ref>(a)-<ref>(b). Figure <ref>(e) shows the spectrum near the band-closing point -k_z^4. Only one band-crossing point is observed because of the translational symmetry of crystalline systems. This is observed more clearly in Fig. <ref>(f). The probability distribution of the state labeled by the green star [Fig. <ref>(e)] is uniform, and all the local patches undergo the topological phase transition simultaneously as k_z varies.
Moreover, we find that the low-energy effective Hamiltonian can be described by the massless Dirac equation. Therefore, the system is identified as the crystalline SODSM phase.
|
http://arxiv.org/abs/2307.06169v1 | 20230712135050 | Counting double cosets with application to generic 3-manifolds | [
"Suzhen Han",
"Wenyuan Yang",
"Yanqing Zou"
] | math.GR | [
"math.GR",
"math.GT"
] |
Counting double cosets with application to generic 3-manifolds
Academy of Mathematics and Systems Science
Chinese Academy of Sciences
, Beijing 100190, P. R. China.
[email protected]
Beijing International Center for Mathematical Research
Peking University
Beijing 100871, China P.R.
[email protected]
School of Mathematical Sciences and Shanghai Key Laboratory of PMMP
East China Normal University
Shanghai 200241, China P.R.
[email protected]
Yanqing Zou
August 12, 2023
We study the growth of double cosets in the class of groups with contracting elements, including relatively hyperbolic groups, CAT(0) groups and mapping class groups among others.
Generalizing a recent work of Gitik and Rips about hyperbolic groups, we prove that the double coset growth of two Morse subgroups of infinite index is comparable with the orbital growth function. The same result is further obtained for a more general class of subgroups whose limit sets are proper subsets in the entire limit set of the ambient group.
As an application, we confirm a conjecture of Maher that hyperbolic 3-manifolds are exponentially generic in the set of 3-manifolds built from Heegaard splittings, with respect to a complexity defined via the Teichmüller metric.
§ INTRODUCTION
Since the seminal work of Milnor, the growth of groups has been a subject of research for a long time, with a still growing body of results in the literature. While the growth of other objects in groups, such as subgroups and their cosets, has also been investigated by various authors, the double coset growth for a pair of subgroups was proposed by de la Harpe in his book <cit.>, and has recently been receiving attention from the GGT community. Gitik-Rips <cit.> showed in the class of hyperbolic groups that, on one hand, the growth function of double cosets for infinite index quasi-convex subgroups is comparable with the growth function of the ambient group, and on the other hand, any reasonable double coset growth can be realized by certain non-quasi-convex (normal) subgroups. In dynamical systems, the double coset growth has actually been studied under the guise of counting shortest essential arcs between convex sub-manifolds in the ambient Riemannian manifold. We refer the reader to the work of Parkkonen-Paulin <cit.>, and the survey <cit.>, for more relevant results and applications. In particular, the first part of the Gitik-Rips results is a special case of <cit.>, which holds with greater generality for discrete group actions on Gromov hyperbolic spaces. From the viewpoint of coarse geometry, the main purpose of this paper is to provide a far-reaching generalization of these results in the class of groups with contracting elements, which includes many negatively curved groups such as (relatively) hyperbolic groups, mapping class groups and CAT(0) groups with rank-1 elements. This shall give applications to the genericity of 3-manifolds, and to the counting of common perpendiculars between subsets in general metric spaces.
Let us first introduce the abstract setup for counting double cosets.
Let G be a countable group equipped with a proper left invariant pseudo-metric d_G, so the ball of radius r at the identity 1
B_G(r)={g∈ G: d_G(1, g)≤ r}
is finite. The function gr_G(r)=♯ B_G(r) is called the growth function of G with respect to d.
One of the main goals of this paper is to study the growth function of double cosets. Namely, for any two subgroups H,K≤ G, the double coset growth function gr_H,K(r) counts the number of double cosets HgK intersecting the ball of radius r, i.e.
gr_H,K(r)=♯ B_H,K(r)
where B_H,K(r)={HgK: d_G(HgK,1):=min_h∈ H,k∈ Kd_G(hgk,1)<r}. If H and K are trivial, then gr_H,K(r)=gr_G(r) and in general, we have gr_H,K(r)≤gr_G(r) for any r≥ 0. If H=K are normal subgroups, it is an elementary observation in <cit.> (recalled as Lemma <ref>) that gr_H,H(r) is exactly the growth function of the quotient G/H endowed with quotient metric. Due to this reason, we consider only non-normal subgroups in what follows.
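As a toy illustration of these definitions, take the free group G=F_2=⟨ a,b⟩ with the word metric and the cyclic subgroups H=⟨ a⟩, K=⟨ b⟩, which are quasi-convex of infinite index. For a reduced word w, the double coset ⟨ a⟩ w⟨ b⟩ has a canonical representative obtained by deleting the maximal leading block of a^±1-letters and the maximal trailing block of b^±1-letters, so both gr_G(r) and gr_H,K(r) can be computed by brute force. The short Python sketch below (an illustration added for concreteness, not part of the argument) does exactly this and exhibits gr_H,K(r)≍gr_G(r)≍ 3^r.

GENS = {"a": "A", "A": "a", "b": "B", "B": "b"}     # capital letters denote inverse generators

def reduced_words(r):
    # all reduced words of length <= r in F_2 = <a, b>
    words, frontier = [""], [""]
    for _ in range(r):
        nxt = []
        for w in frontier:
            for g in GENS:
                if not (w and GENS[g] == w[-1]):    # skip a letter cancelling the previous one
                    nxt.append(w + g)
        words += nxt
        frontier = nxt
    return words

def core(w):
    # canonical representative of <a> w <b>: strip leading a-letters and trailing b-letters
    i = 0
    while i < len(w) and w[i] in "aA":
        i += 1
    j = len(w)
    while j > i and w[j - 1] in "bB":
        j -= 1
    return w[i:j]

for r in range(1, 9):
    ball = reduced_words(r)
    print(r, len(ball), len({core(w) for w in ball}))   # gr_G(r) and gr_{<a>,<b>}(r)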
In this paper, the proper left invariant pseudo-metric on G comes from a proper and isometric action of G on a proper geodesic metric space (X,d). Fixing a basepoint o∈ X induces a pseudo-metric on G by d_G(g_1,g_2):=d(g_1o,g_2o) for any g_1,g_2∈ G. We always assume that the group G is non-elementary: it is neither virtually cyclic nor finite.
We say a subset A⊆ X is strongly contracting if the shortest point projection π_A(B) has uniformly bounded diameter for any ball B disjoint from A. An element g∈ G of infinite order is called strongly contracting if it acts by translation on a contracting quasi-geodesic. We remark that several different notions of contracting elements exist in the literature, for which we refer the reader to <cit.> for details. Since this is the only notion used throughout the paper, we simply speak of contracting elements / subsets, to be consistent with the usage in <cit.>.
§.§ Double coset growth for Morse subgroups
As a warm-up, we first present our counting results for double cosets of a pair of Morse subgroups before stating the more general result. The class of Morse subgroups has received much attention in recent years (<cit.>).
A subset A⊆ X is called Morse if it is quasi-convex with respect to all quasi-geodesics of any fixed parameters. A subgroup H is called Morse if it acts cocompactly on a Morse subset. It is well known that a contracting subset is Morse, so a contracting element generates a cyclic Morse subgroup.
In general, a Morse subgroup could be much more complicated.
We say that a group G has purely exponential growth if there exist M_0,M_1>0,ω>1 so that
∀ r≥ 1: M_0ω^r≤gr_G(r)≤ M_1ω^r
for which we shall write gr_G(r)≍ω^r for simplicity.
The value ω is called the growth rate (or critical exponent) and can be defined a priori as follows
ω =lim sup_r→+∞lngr_G(r)/r.
Our first result is that the growth of double cosets for any pair of Morse subgroups of infinite index is bounded below by the growth function.
Suppose that a non-elementary group G acts properly on a proper geodesic metric space with a contracting element. Assume that H and K are Morse subgroups of infinite index. Then
there exist δ, r_0>0 so that for any r>r_0,
gr_H,K(r)≥δ·gr_G(r-r_0)
In particular, if G has purely exponential growth, then gr_H,K(r)≍ω^r for some ω>1.
By <cit.>, hyperbolic groups have purely exponential growth. Quasi-convex subgroups are exactly Morse subgroups, so Theorem <ref> generalizes <cit.> about growth of double cosets for quasi-convex subgroups in hyperbolic groups.
A class of statistically convex-cocompact (SCC) actions was introduced in <cit.> as a generalization of convex-cocompact (thus any cocompact) actions in a statistical sense. Such group actions are proved there to have purely exponential growth <cit.>. See §<ref> for precise definitions and relevant facts. This is the main class of groups we shall be interested in later on.
Suppose that a non-elementary group G acts statistically convex-cocompactly on a geodesic metric space with a contracting element.
Then for any two Morse subgroups H,K≤ G of infinite index, gr_H,K(r)≍ω^r for some ω>1.
§.§ Double coset growth for subgroups of second type
We now extend Theorem <ref> to a much larger class of subgroups defined using a boundary, which properly contains the Morse subgroups of infinite index. This is based on an axiomatized notion of convergence boundary in <cit.>, which encompasses the Gromov boundary of hyperbolic spaces, the Bowditch boundary (<cit.>) or Floyd boundary (<cit.>) of relatively hyperbolic groups, the visual boundary of CAT(0) spaces (<cit.>), and the Thurston boundary of Teichmüller spaces. This set of two axioms, recalled below, enables us to introduce a good notion of limit sets for a non-elementary group with the following two desired properties:
* The limit set is the minimal invariant closed subset in the boundary.
* The limit set is the set of accumulation points of group orbits in the space.
Before stating our general result, let us first describe the main application for double cosets in the following various classes of groups.
Suppose that a triple (G, X, ∂ X), where a finitely generated group G acts properly on a proper length space X compactified by a boundary ∂ X, is given by one of the following:
* a hyperbolic group G acts geometrically on a hyperbolic space with Gromov boundary;
* a relatively hyperbolic group G acts on its Cayley graph with Bowditch boundary;
* a finitely generated group G acts on its Cayley graph with nontrivial Floyd boundary;
* a group G acts geometrically with rank-1 element on a CAT(0) space with visual boundary;
* the mapping class group G acts on the Teichmüller space X with Thurston boundary.
Let H,K≤ G be any two subgroups having limit sets as proper subsets in that of G. Then gr_H,K(r)≍ω^r for some ω>1.
In the above items (1) – (3), the limit sets have the two desirable properties as above, which are well-known consequences of convergence group actions. See <cit.> for more details. The item (4) follows from the work of Hamenstadt <cit.>. The last one is more subtle, requiring a further explanation of the above-mentioned compactification with convergence property.
A compact metric space X̄ is a compactification of X if X embeds into X̄ as an open and dense subset, and ∂ X:=X̄∖ X is called the boundary. We assume that the action of the isometry group Isom(X) extends by homeomorphisms to ∂ X. We equip ∂ X with an Isom(X)-invariant partition [·]. Denote by [A]=∪_a∈ A[a] the [·]-saturation of a subset A, which by definition is the union of all [·]-classes over A. We say that a sequence of subsets A_n in X tends to the [·]-class of a limit point ξ∈∂ X if any unbounded convergent sequence of points a_n∈ A_n tends to a point in [ξ]. A sequence of subsets A_n is called exiting if d(o,A_n)→+∞ for some (and thus any) o∈ X.
The compactification X̄ has the convergence property with respect to the partition [·] if the following two assumptions hold:
(A).
Any contracting quasi-geodesic ray γ converges to a closed [·]-class denoted by [γ^+] in ∂ X. Furthermore, if x_n∈ X is a sequence of points with lim_n→+∞d(o,π_γ(x_n))=+∞ then x_n tends to [γ^+].
(B).
Any exiting sequence γ_n of C-contracting quasi-geodesics for some C>0 has a subsequence tending to the [·]-class [ξ] of some point ξ∈∂ X.
A partition is called trivial if all [·]-classes are singletons. On the opposite, any compactification has the convergence property for the coarsest partition by asserting ∂ X as one [·]-class.
In Assumption (A), we emphasize [γ^+] to be a closed subset in ∂ X, even though the partition [·] is not assumed to be a closed relation, so not all [·]-classes are closed subsets. Assumption (B) is a simplified version of the corresponding one in <cit.>.
For a subgroup H<G, let Λ Ho denote the collection of accumulation points of Ho in ∂ X. Assumption (B) implies that [Λ Ho] is independent of the choice of basepoint o∈ X. We shall call [Λ Ho] the limit set of H. If a non-elementary subgroup H has a contracting element, then Λ Ho is the minimal H-invariant subset in ∂ X up to taking [·]-closure (see Lemma <ref> for details).
The compactifications in the first 4 items of Theorem <ref> are equipped with the trivial partition. We now elaborate on the item (5) where the partition is non-trivial.
Let Σ_g be a closed oriented surface of genus g≥2, and denote by Mod(Σ_g) the group of isotopy classes of orientation-preserving homeomorphisms of Σ_g. Let 𝒯(Σ_g) be the Teichmüller space of Σ_g, i.e. the set of isotopy classes of marked complex structures on Σ_g, equipped with Teichmüller metric d_𝒯. We consider the Thurston boundary ∂_Th𝒯(Σ_g)≅𝒫ℳℒ(Σ_g), i.e., the projective measured lamination space, of 𝒯(Σ_g).
It is observed in <cit.> that Thurston boundary has convergence property with respect to the Kaimanovich-Masur partition in <cit.>. This partition on the minimal foliations is exactly the zero intersection relation, which is however not true on the whole boundary 𝒫ℳℒ(Σ_g). Thus, it restricts trivially on uniquely ergodic points.
By the work of Eskin-Mirzakhani-Rafi <cit.>, the proper action of Mod(Σ_g) on (𝒯(Σ_g),d_𝒯) has purely exponential growth with growth rate ω=6g-6, where all pseudo-Anosov elements are contracting by Minsky <cit.>.
In <cit.>, McCarthy-Papadopoulos studied the limit set of subgroups in Mod(Σ_g) on the boundary 𝒫ℳℒ(Σ_g). If H<Mod(Σ_g) is sufficiently large (i.e. containing two independent pseudo-Anosov elements), the limit set Λ_MP H is defined as the closure of all fixed points of pseudo-Anosov elements in H. By Lemma <ref>, it is related to the above limit set Λ Ho as follows: [Λ_MP H] =[Λ Ho]. Moreover, [Λ Ho] is proper in 𝒫ℳℒ if (and only if) [Λ_MP H] is so.
We recapitulate the item (5) in Theorem <ref> as follows.
Suppose that G=Mod(Σ_g) acts on (𝒯(Σ_g),d_𝒯).
Consider any two subgroups H,K≤ G with proper limit sets [Λ Ho],[Λ Ko]≠𝒫ℳℒ, or Λ_MP H,Λ_MP K≠𝒫ℳℒ if H, K are sufficiently large. Then gr_H,K(r)≍gr_G(r)≍ e^(6g-6)r.
To conclude our discussion, we mention the following general result.
Following the classical terminology in Kleinian groups, we say that a subgroup H<G is of second type if the limit set [Λ Ho] is a proper subset of the whole limit set [Λ G]; otherwise it is called of first type.
For a proper geodesic metric space, the horofunction compactification has convergence property with respect to the finite difference partition by <cit.>.
Suppose that X is a proper geodesic metric space with the horofunction ∂_h X. Then for any two subgroups H,K≤ G of second type, there exist δ,r_0>0 so that gr_H,K(r)≥δ·gr_G(r-r_0) for all r>r_0.
In particular, if G has purely exponential growth, then gr_H,K(r)≍ω^r for some ω>1.
* For a horofunction compactification, Morse subgroups of infinite index are of second type by Corollary <ref>. Theorem <ref> thus provides a larger class of subgroups with desired double coset growth.
* The same conclusion holds for any boundary ∂ X with convergence property, where subgroups of second type are defined using ∂ X so that the limit sets are proper in [Λ G].
The proof of Theorem <ref> also implies the following combination result.
Under the assumptions of Theorem <ref>, there exist infinitely many g∈ G such that ⟨ H, gKg^-1⟩ is isomorphic to the free product H⋆ gKg^-1.
The following cases were already known in Mod(Σ_g), while our proof uses only tools and techniques predating Masur-Minsky's curve complex machinery:
* H, K are finite subgroups (<cit.>);
* H,K are handlebody subgroups (<cit.>).
§.§ Applications to generic 3-manifolds
Let V be a 3-dimensional handlebody with the boundary surface Σ_g for g≥ 2. Then the handlebody subgroup H<Mod(Σ_g) consists of the isotopy classes of orientation-preserving homeomorphisms of Σ_g extending to V. Denote by 𝒟(V) the isotopy classes of essential simple closed curves of Σ_g bounding discs in V. By Masur <cit.>, the closure of 𝒟(V) in 𝒫ℳℒ(Σ_g) is a unique closed, connected, H-invariant and minimal set with empty interior. By <cit.>, it agrees with the McCarthy-Papadopoulos limit set Λ H of the handlebody subgroup H. So Λ H≠ΛMod(Σ_g). As an immediate application of Corollary <ref>, the growth of double cosets
{Hφ H:φ∈Mod(Σ_g)} is comparable with the growth of Mod(Σ_g).
For any φ∈Mod(Σ_g), gluing two copies V_1, V_2 of
V along their boundary surfaces by φ, denoted by V_1∪_φ V_2, gives a Heegaard splitting of the resulting manifold M_φ=V_1∪_φ V_2.
It is known that for any two elements ϕ,ϕ' in the same double coset Hϕ H=Hϕ' H, the manifolds M_ϕ≅ M_ϕ' are homeomorphic. So all gluing maps in a given double coset yield the same 3-manifold (but not vice versa). The main application is to study the genericity of hyperbolic 3-manifolds M_φ. To make this precise, we shall introduce a geometric quantity on the collection of all orientable connected closed 3-manifolds with genus g Heegaard splitting
Δ_g={M_φ| M_φ=V_1∪_φ V_2, φ∈Mod(Σ_g)}
up to homeomorphism. Then we define a geometric complexity on Δ _g using Teichmüller metric as follows. For any M∈Δ_g, let
D(M)={φ∈Mod(Σ_g): M_φ≅ M}
where ≅ denotes homeomorphism. The geometric complexity c(M) of M is then defined to be the minimal displacement d_𝒯(o,φ o) over gluing maps φ∈ D(M):
c(M):=min{d_𝒯(o, φ o): φ∈ D(M)}
We say that a sequence of real numbers a_n converges to a exponentially quickly if |a_n-a|≤ cε^n for some c>0, ε∈ (0,1) and for any n≥ 1. A subset Γ in Δ_g is said to be exponentially generic with respect to the geometric complexity if the following convergence is exponentially quick:
♯{M∈Γ: c(M)≤ n}/♯{M∈Δ_g: c(M)≤ n}→ 1
Answering positively a question of Thurston, Maher <cit.> proved that a random Heegaard splitting gives rise to a hyperbolic 3-manifold V_1 ∪_ϕ V_2, where ϕ∈Mod(Σ_g) is sampled under a finitely supported random walk. See Lubotzky-Maher-Wu <cit.> and Maher-Schleimer <cit.> for refined statements. In <cit.>, Maher asked whether the proportion of orbit points in the ball of radius n in the Teichmüller metric which
give rise to hyperbolic manifolds when used as Heegaard splitting gluing maps tends to 1 as n→∞. Our following theorem answers his conjecture in the positive. We emphasize that however, our model counts the homeomorphic types of 3-manifolds rather than the isotopy classes of Heegaard splittings, so gives a more accurate picture of random 3-manifolds.
Let
Γ_g={M_φ :M_φ is hyperbolic for some φ∈Mod(Σ_g)}.
Then Γ_g is exponentially generic in Δ_g with respect to the geometric complexity c. Moreover, we have
♯{M∈Δ_g: c(M)≤ n}≍e^(6g-6)n
where the implicit constant might depend on o∈T(Σ_g).
It is worth noting that in <cit.>, Qiu, Guo, Zhang and the third author proved that, for any g≥ 2, there are infinitely many non-homeomorphic hyperbolic 3-manifolds which admit genus g Heegaard splittings. So our result is a refined version of theirs. The crucial ingredient in Theorem <ref> is that the Hempel distance d_H(V∪_φ V), defined by Hempel <cit.>, has positive drift with exponential decay.
There exists κ∈(0,1) so that the following sets
ℋ={Hφ H:d_H(V_1∪_φ V_2)≥κ· d_𝒯(o,Hφ Ho)}
and
ℋ'={M_φ:d_H(V_1∪_φ V_2)≥κ· d_𝒯(o,Hφ Ho)}
are exponentially generic in {Hφ H:φ∈Mod(Σ_g)}, and in Δ_g respectively.
§.§ Counting common perpendiculars and further questions
By definition, the double coset growth gr_H, K(r) counts the number of subsets ϕ Ko (ϕ∈ G) within distance at most r from Ho. If π: X → X/G is the covering map (e.g. when the action is free), then gr_H, K(r) counts common perpendiculars between subsets A=π(Ho) and B=π(Ko). Considering the cover Y=X/H → X/G associated to H, gr_H, K(r) amounts to counting how many subsets of the form π(ϕ Ko) meet the r-ball centered at Ho∈ Y.
As mentioned earlier, this geometric scenario has been investigated in Parkkonen-Paulin's work <cit.>, which obtained the number of common perpendiculars between two properly immersed closed convex sub-manifolds A, B in a negatively pinched Riemannian manifold M. If A,B are compact, then their fundamental groups H=π_1(A), K=π_1(B) are convex-cocompact in π_1(M). In this case, the coarse double coset growth for H, K given by Theorem <ref> follows from <cit.>.
It is beyond the scope of this paper to derive a precise formula for the double coset growth function. However, a precise asymptotic is expected in various classes of groups, as demonstrated in <cit.>. In view of the formula (<ref>), we would like to ask the following question.
Let H⊂Mod(Σ_g) be the handlebody group. Does there exist a constant C so that the following holds:
∀ r>0: gr_H,H(r) ∼ C ·e^(6g-6)r
for any point o∈T(Σ_g)?
We remark that the lattice counting for Teichmüller metric (i.e. H is trivial) has the above precise form, where the constant C actually does not depend on the basepoint o (<cit.>).
The paper is organized as follows. In Section §<ref>, we recall the basic results about contracting elements, statistically convex-cocompact actions, and the convergence compactification and derive several preparatory results for double coset counting. Sections <ref> and <ref> are devoted to the proofs of Theorem <ref> and Theorem <ref>, where the latter is modeled on the former with necessary changes presented in §<ref>. The final §<ref> presents the application, Theorem <ref>, to the genericity of 3-manifolds.
§.§ Acknowledgments
S. H. is supported by the Special Research Assistant Project at Chinese Academy of Sciences (E2559303). W. Y. is supported by National Key R & D Program of China (SQ2020YFA070059).
Y.Z. is supported by NSFC No.12131009 and in part by Science and Technology Commission of Shanghai Municipality (No. 22DZ2229014).
§ PRELIMINARIES
§.§ Contracting element
Suppose that (X,d) is a proper geodesic metric space. Then for any x,y∈ X, we denote by [x,y] a choice of geodesic in X from x to y. The (closed) r-neighborhood of a subset A⊆ X for r≥ 0 is denoted by N_r(A).
Given a point x∈ X and a closed subset A⊆ X, let π_A(x) be the shortest point projection of x to A, i.e., π_A(x) is the set of points a∈ A such that d(x,a)=d(x,A). The projection of a subset Y⊆ X to A is then π_A(Y):=∪_y∈ Yπ_A(y).
Suppose that a group G acts properly on (X,d). Then for a fixed basepoint o∈ X as before, we similarly denote by 𝒩_r(A)={g∈ G:d_G(g,A)<r} the r-neighborhood of a subset A⊆ G w.r.t. the pseudo-metric d_G.
Unless otherwise specified, the path γ: I⊆ [-∞,+∞]→ X is equipped with arc-length parametrization, and ℓ(γ)∈ [0, ∞] denotes the length of γ.
We call a path γ a λ-quasi-geodesic for λ≥1 if for any rectifiable subpath [γ(s),γ(t)]_γ⊆γ, we have
ℓ([γ(s),γ(t)]_γ) ≤λ d(γ(s),γ(t))+λ.
A subset A⊆ X is called C-contracting for C≥0 if for any geodesic γ with d(γ,A)≥ C, we have diam(π_A(γ))≤ C.
An element g∈ G is called contracting if for some basepoint o∈ X, the ⟨ g⟩-invariant concatenated path ∪_n∈ℤg^n([o,go])
is a C-contracting quasi-geodesic for some C>0.
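Two standard examples may help to keep the definition in mind. In the Euclidean plane, let A be a line and let γ be a segment of length ℓ parallel to A at distance d≥ C; then π_A(γ) is a segment of length ℓ, so no constant C can work and geodesic lines in ℝ^2 are not contracting. In a simplicial tree, by contrast, if a geodesic γ is disjoint from a geodesic A, then every point of γ projects to the same point of A (the unique arc from γ to A passes through a single gate point), so diam(π_A(γ))=0 and geodesics in trees are C-contracting for every C>0.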
A subset A⊆ X is η-Morse for a function η:ℝ_≥0→ℝ_≥0 if every λ-quasi-geodesic with two endpoints in A is contained in the η(λ)-neighborhood of A.
A subgroup H≤ G is Morse if the subset H o for a basepoint o∈ X is η-Morse for some function η:ℝ_≥0→ℝ_≥0.
We state some basic properties of contracting subset, whose proof is left to the interested reader.
* Let A be a C-contracting subset. If diam(π_A(x)∪π_A(y))>C then
d(π_A(x),[x,y]),d(π_A(y),[x,y])≤ 2C.
* If a subset A has a Hausdorff distance at most D to a C-contracting subset B, then A is C'-contracting for some C' depending on C and D.
* If α is a C-contracting λ-quasi-geodesic, then any subpath of α and any geodesic with endpoints in α are C'-contracting for some C'=C'(λ, C).
Hence, the definition of a contracting element is independent of the choice of basepoint.
By <cit.>, each contracting element g is contained in a maximal elementary group E(g) defined as follows:
E(g)={h∈ G:∃ r>0,h⟨ g⟩ o⊆ N_r(⟨ g⟩ o) and ⟨ g⟩ o⊆ N_r(h⟨ g⟩ o)},
and the index [E(g):⟨ g⟩] is finite.
That is to say, E(g) is a virtually cyclic subgroup, and any such group containing g is a subgroup of E(g). The quasi-geodesic Ax(g):=E(g)o is called the quasi-axis of g (depending on o).
A collection 𝔸 of subsets has bounded intersection if for any R>0 and A≠ A'∈𝔸, the diameter of N_R(A)∩ N_R(A') is bounded by a constant depending only on R. If 𝔸 consists of C-contracting subsets, this is equivalent to the bounded projection property: for any A≠ A'∈𝔸, π_A(A') has diameter bounded by a uniform constant.
Two contracting elements g,h∈ G are called independent if the collection {fAx(g), fAx(h): f∈ G} of their axes and translates has bounded intersection. In algebraic terms, this amounts to saying that g∉ aE(h)a^-1 for any a∈ G; or equivalently, E(g)≠ aE(h)a^-1 for any a∈ G (see <cit.>). A non-elementary group with a contracting element actually contains infinitely many pairwise independent contracting elements.
Let 𝔸 be a contracting system with bounded intersection property. The following notion of an admissible path allows to construct a quasi-geodesic by concatenating geodesics via 𝔸.
[Admissible Path] Given L,τ≥0, a path γ is called (L,τ)-admissible in X, if γ is a concatenation of geodesics p_0q_1p_1⋯ q_np_n (n∈ℕ), where the two endpoints of each p_i lie in some A_i∈𝔸, and the following Long Local and Bounded Projection properties hold:
(LL) Each p_i for 1≤ i< n has length bigger than L, and p_0,p_n could be trivial;
(BP) For each A_i, we have max{diam(π_A_i(q_i)),diam(π_A_i(q_i+1))}≤τ, where q_0:=γ_- and q_n+1:=γ_+ by convention.
The collection {A_i: 1≤ i≤ n} is referred to as the contracting subsets associated to the admissible path.
The paths q_i could be allowed to be trivial, so (BP) condition is automatically satisfied. It will be useful to note that admissible paths could be concatenated as follows: Let p_0q_1p_1⋯ q_np_n and p_0'q_1'p_1'⋯ q_n'p_n' be (L,τ)-admissible. If p_n=p_0', then the concatenation (p_0q_1p_1⋯ q_np_n)· (q_1'p_1'⋯ q_n'p_n') has a natural (L,τ)-admissible structure.
The basic fact is that a “long" admissible path is a quasi-geodesic.
<cit.>
Let C be the contraction constant of 𝔸. For any τ>0, there are constants L=L(C,τ)>0,Λ=Λ(C,τ)>0 such that any (L,τ)-admissible path is a (Λ,Λ)-quasi-geodesic.
Let F be a set of three pairwise independent contracting elements.
Denote by 𝔸={gAx(f):g∈ G,f∈ F} the collection of their axes, i.e. C-contracting subsets for some C>0. Then 𝔸 is a contracting system with bounded intersection (see <cit.>). The following lemma <cit.> is useful to build admissible quasi-geodesics.
There exist τ_0>0 depending on 𝔸 and L=L(C,τ)>0 for any τ>τ_0 with the following property.
Choose any set F={f_1,f_2,f_3} with f_i∈ E(f_i) and |f_i|>L. For any two elements g_1,g_2∈ G, there exists f∈ F so that the path labelled by g_1fg_2 is an (L,τ)-admissible path, where the associated contracting set is given by g_1Ax(f).
The following result proved in <cit.> gives a procedure to construct contracting elements.
Suppose that a group G acts properly on (X,d) with a contracting element. Then there exist a set F⊆ G of three contracting elements and λ>0 with the following property.
For any g∈ G, there exists f∈ F such that the bi-infinite concatenated path ∪_n∈ℤ(gf)^n([o,gfo]) is a contracting λ-quasi-geodesic. In particular, gf is a contracting element.
Following <cit.>, a geodesic α⊆ X contains an (ϵ,f)-barrier t∈ G if the following holds
d(to,α)≤ϵ, d(tfo,α)≤ϵ.
Otherwise, α is called (ϵ,f)-barrier-free.
An element g∈ G is called (ϵ,M,f)-barrier-free if there exists a geodesic segment from a point in B(o,M) to a point in B(go, M) which is (ϵ,f)-barrier-free.
Let f be a contracting element with a C-contracting axis Ax(f). There exist ϵ=ϵ(C,f), τ=τ(f) with the following property. If diam (π_Ax(f)(α))> τ for a geodesic segment α, then α contains an (ϵ,f)-barrier.
Let x∈π_Ax(f)(α_-) and y∈π_Ax(f)(α_+) so that d(x,y)=diam (π_Ax(f)(α)). As Ax(f) is C-contracting, we have that if d(x,y)> C, then d(x, α), d(y, α)≤ 2C. Recall that a geodesic with
endpoints in a contracting quasi-geodesic is contracting (see Lemma <ref>). The geodesic [x,y] is contracting. Note that ∪_n∈ℤ f^n[o,fo] is a quasi-geodesic, which is contained in a fixed neighborhood of Ax(f) by the Morse property of contracting sets. Thus, if d(x,y)≥τ, where τ is chosen large enough relative to d(o,fo), there exists a constant C_0 such that the C_0-neighborhood of [x,y] contains a segment labeled by f. That is to say, [x,y] contains a (C_0,f)-barrier. Choose u,v∈α so that d(x,u), d(y,v)≤ 2C. By the Morse property again, [x,y] is contained in a C_1-neighborhood of [u,v] for some C_1=C_1(C). Thus, α contains an (ϵ,f)-barrier, where ϵ:=C_0+C_1. The lemma is proved.
Assume that H is a Morse subgroup of infinite index. Then for any ϵ≥0, there exists a contracting element g∈ G so that for any element h∈ H, [o,ho] contains no (ϵ, g)-barrier.
By Lemma <ref>, for any g_0∈ G there exists f∈ F so that g:=g_0f is contracting. Fix any ϵ>0 and set ϵ_0=max{d(o,fo): f∈ F}+ϵ.
As H is of infinite index in G, it holds that G∖ S· H· S is infinite for any finite set S. See the proof of <cit.> for details.
Now, assume that Ho is η-Morse for some function η.
By the proper action, the set S:={g∈ G: d(o,go)≤ϵ_0+η(1)} is finite. Choose any element g_0∈ G∖ S· H· S. We claim that for any element h∈ H, [o,ho] contains
no (ϵ_0, g_0)-barrier. Indeed, if not, there exists t∈ G such that d(to, [o,ho]),d(tg_0o, [o,ho])≤ϵ_0. As [o, ho]⊆ N_η(1)(Ho), we have to, tg_0o∈ N_ϵ_0(Ho) and then s_1=t^-1h_1, s_2=h_2^-1(tg_0)∈ S for some h_1, h_2∈ H. This implies g_0=s_1(h_1^-1h_2)s_2∈ S· H· S: we get a contradiction. Our claim thus follows.
We now prove that [o,ho] contains no (ϵ, g)-barrier. Suppose to the contrary that there exists t∈ G such that d(to, [o,ho]), d(tgo, [o,ho])≤ϵ, where g=g_0f. As f∈ F we have d(to, [o,ho]), d(tg_0o, [o,ho])≤ϵ_0: [o,ho] contains (ϵ_0, g_0)-barrier. This contradicts the above choice of g_0. The proof is complete.
As a corollary, we obtain the following key estimate.
Let H, K be Morse subgroups of infinite index. Then there exist a contracting element g∈ G and τ>0 so that for any h∈ H∪ K, the shortest projection to Ax(g) is uniformly bounded:
diam(π_Ax(g)([o,ho]))≤τ
and ⟨ g⟩∩ (H∪ K)={1}.
As G∖ S· (H∪ K)· S is infinite for any finite set S, the proof of Lemma <ref> gives the same contracting element for H and K. The uniform bounded projection follows by Lemma <ref>. This also implies ⟨ g⟩∩ (H∪ K)={1}. Otherwise, if some power of g is contained in H (or K), then E(g)o has infinite intersection with Ho (or Ko). This contradicts uniform bounded projection as above.
§.§ Counting double cosets
Recall that a group G acts properly on a proper metric space (X,d). A subset A⊆ G is called exponentially generic if the proportion of A in the ball B_G(r) tends to 1 exponentially quick: there exists ε∈ (0,1) so that
∀ r>0: |♯(A∩ B_G(r))/♯ B_G(r)-1|≤ε^r.
On the other hand, a subset B⊆ G is called exponentially negligible if its complement is exponentially generic. By <cit.>, the infinite index Morse subgroups H,K in Theorem <ref> are exponentially negligible.
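For a concrete instance, let G=F_2=⟨ a,b⟩ act on its Cayley tree with o the identity vertex. Then ♯ B_G(r)=2· 3^r-1 (for integer r) while ♯(⟨ a⟩∩ B_G(r))=2r+1, so the proportion of ⟨ a⟩ in B_G(r) is at most (2r+1)/3^r→ 0 exponentially fast; the infinite index Morse subgroup ⟨ a⟩ is thus exponentially negligible, as predicted.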
Let H,K<G be any two subgroups. Consider the following surjective map
Π: G⟶𝔻(H,K):={HgK: g∈ G}
g⟼ HgK.
Then the image under Π of B_G(r) is exactly B_H,K(r).
We first observe that the image of an exponentially generic set of G under Π is also exponentially generic in {HgK:g∈ G}.
Suppose that G acts properly on (X,d) and H,K<G are subgroups so that gr_H,K(r)≥δgr_G(r) for all r≥ 1 and some δ>0.
If a subset A⊆ G is exponentially generic in G, then the image Π(A) is exponentially generic in {HgK:g∈ G}.
Let Ã:=Π^-1(Π(A))=∪_a∈ AHaK be the collection of double cosets supported by A and Ã^c=G∖Ã be its complement in G. Since A⊆ G is exponentially generic, so is Ã; hence there exist ε∈(0,1) and c>0 so that
♯(Ã^c∩ B_G(r))≤ cε^r ·♯ B_G(r)
By definition, B_H,K(r) consists of double cosets HgK with |g|≤ r, so B_H,K(r)=Π(B_G(r)),
therefore
♯ (Π(Ã^c)∩ B_H,K(r))=♯Π(Ã^c∩ B_G(r))≤♯(Ã^c∩ B_G(r)).
By assumption, ♯ B_H,K(r)≥δ♯ B_G(r). Consequently,
♯(Π(Ã^c)∩ B_H,K(r))/♯ B_H,K(r)≤cε^r ·♯ B_G(r)/δ♯ B_G(r)≤c/δε^r.
The result follows by noticing that Π(A)=Π(Ã) and Π(G)∖Π(Ã)=Π(Ã^c).
We now consider the case where H is a normal subgroup of G. If G is equipped with a proper left invariant pseudo-metric d, then it induces a left invariant proper pseudo-metric d̅ on the quotient Γ:=G/H defined as follows
d̅(gH,g'H):=inf{d(1, g^-1hg'): h∈ H}
The following lemma is elementary, whose proof is straightforward by unravelling definitions.
The double coset growth gr_H,H(r) of H is the same as the growth function gr_Γ(r) for d̅.
§.§ Statistically convex-compact actions
Given constants 0≤ M_1≤ M_2, let 𝒪_M_1,M_2 be the set of element g∈ G such that there exists some geodesic γ between N_M_2(o) and N_M_2(go) with the property that the interior of γ lies outside N_M_1(Go).
[SCC Action]
If there exist positive constants M_1,M_2>0 such that ω(𝒪_M_1,M_2)<ω(G)<∞, then the proper action of G on Y is called statistically convex-cocompact (SCC).
The idea to define the set 𝒪_M_1,M_2 is to look at the action of the fundamental group of a finite volume Hadamard manifold on its universal cover. It is then easy to see that for appropriate constants M_1, M_2>0, the set 𝒪_M_1,M_2 coincides with the union of cusp subgroups up to a finite Hausdorff distance. The assumption in SCC actions was called a parabolic gap condition by Dal'bo, Otal and Peigné in <cit.>.
Given ϵ,M>0 and any f∈ G, let 𝒱_ϵ,M,f be the collection of all (ϵ,M,f)-barrier-free elements of G.
Let 𝒱_ϵ,M,f^θ,L be the set of all elements g ∈ G with the following properties:
* there exists a set 𝕂_0 of disjoint connected subintervals α⊆[o,g o] with endpoints α_-,α_+∈ N_M(Go) so that every α is (ϵ,f)-barrier-free.
* If 𝕂={α∈𝕂_0:ℓ(α)≥ L} consists of those intervals in 𝕂_0 with length at least L, then
∑_α∈𝕂ℓ(α)≥θ d(o,go)
The following results will be key in next sections.
<cit.>, <cit.>
Assume that a non-elementary group G admits a SCC action on a proper geodesic space (Y,d) with a contracting element. Let M_0 be the constant in the definition of SCC action. Then
* G has purely exponential growth.
* For any M>M_0, there exists ϵ=ϵ(M)>0 such that 𝒱_ϵ,M,f is exponentially negligible for any f∈ G.
* For any 0<θ≤ 1 there exists L=L(θ)>0, so that the set 𝒱_ϵ,M,f^θ,L is exponentially negligible for any f∈ G.
In other words, the complement subset
𝒲_ϵ,M,f^θ,L:=G∖𝒱_ϵ,M,f^θ,L
is exponentially generic.
§.§ Convergence boundary
Let X be compactified by the boundary ∂ X with the convergence property stated as in Definition <ref>.
Let o∈ X be a fixed basepoint. For a subgroup H<G, let Λ Ho denote the set of accumulation points of Ho in the boundary ∂ X. The limit set [Λ Ho] of a subgroup H is defined as the [·]-locus of Λ Ho, which by definition is the union of the [·]-classes of Λ Ho. The set [Λ Ho] is independent of the choice of basepoint o∈ X, as an exiting sequence of segments with uniformly bounded length, which are uniformly contracting, sub-converges to the same [·]-class by Assumption (B).
Let f be a contracting element so by definition, γ=∪_n∈ℤf^n([o,fo]) is a contracting quasi-geodesic. By Assumption (A), we denote [f^+]:=[γ^+] the [·]-class of the limit point for the positive half-ray of γ (i.e. where the indices are over n>0). Similarly, [f^-]:=[γ^-]. It is independent of the choice of basepoint by Lemma <ref>.
A boundary point ξ∈∂ X is called non-pinched if for any two sequences x_n→ [ξ] and y_n→ [ξ], the sequence of geodesics [x_n, y_n] is exiting, i.e.: misses any fixed compact set for all n≫ 0. A contracting element f is called non-pinched if their fixed points [f^±] are both non-pinched. Consequently, [f^-]∩ [f^+]=∅. It is proved in <cit.> that two non-pinched elements have either the disjoint or the same fixed points.
The following result is essentially proved in <cit.>. For the convenience of the reader, we briefly describe the main argument, which is useful to understand the next results.
Let f be a non-pinched contracting element with fixed points f^-, f^+ and with a C-contracting quasi-axis γ for C≥ 0. Then there exist a constant D>0 and a set-valued map π_γ: ∂ X∖ [f^±]→γ with the following properties for any ξη∈∂ X∖ [f^±]:
* π_γ(ξ) has diameter at most D.
* π_γ(f^nξ) is contained in a D-neighborhood of f^nπ_γ(ξ) for any n∈ℤ.
* Assume that d(π_γ(ξ), π_γ(η))≥ 4D. Let x_n∈ X→ξ and y_n∈ X→η. Then [x_n,y_n] intersects the corresponding D-neighborhood of π_γ(ξ) and π_γ(η).
Observe that for any ξ∈∂ X∖ [f^±] and x_n→ξ, the set {π_γ(x_n): n≥ 1} is bounded: indeed, if not, Assumption (A) implies that (a subsequence of) x_n tends to [f^±], contradicting ξ∉ [f^±]. Take a compact neighborhood K of ξ with K⊆ X∪∂ X∖ [f^±]. Similarly, we can prove π(K):={π_γ(x): x∈ K∩ X} is bounded. It is clear that π(K_1)⊂π(K_2) for any K_1⊆ K_2. We define π_γ(ξ) to be the intersection of π(K) where K is taken over the compact neighborhood of ξ in X∪∂ X. Equivalently, it is the countable intersection of a compact neighborhood basis K_n of ξ. It is clear that π_γ(ξ) is non-empty, as we can take a sequence x_n∈ K_n→ξ. It remains to show that π_γ(ξ) has uniform diameter independent of ξ.
Let L be a fundamental domain for the action of ⟨ f⟩ on γ. Denote by K the set of points x∈ X so that π_γ(x)∩ L≠∅. Let ∂ K=K̄∖ X, where K̄ is the topological closure of K. By a similar argument as above, ∂ K is disjoint from [f^±], so is a compact subset in ∂ X∖ [f^±].
By construction, as ⟨ f⟩ acts co-compactly on γ, {f^n∂ K: n∈ℤ} is a uniformly locally finite cover of ∂ X∖ [f^±]. Thus, any ξ∈∂ X∖ [f^±] is contained in the union of a uniformly bounded number of members, which is compact. This implies that π_γ(ξ) has uniform diameter independent of ξ.
We also need the following result which was proven in <cit.> for the case H=G. However, the proof does not require the element g to be in H.
<cit.>
Fix a contracting element g∈ G and a basepoint o∈ X. If H is any subgroup of G, then [Λ Ho]=[Hξ] for any ξ∈ [g^±]∩Λ Ho.
Assume that H<G is a subgroup with [Λ H]⊊ [Λ G]. Then for any o∈ X, there exists a contracting element f in G so that [f^+] lies outside Λ Ho. If G contains a non-pinched contracting element, then f can be chosen non-pinched.
Suppose to the contrary that all (non-pinched) contracting elements in G have their fixed points in [Λ H]. Let us fix a (non-pinched) contracting element g_0∈ G and choose ξ∈ [g_0^+]∩Λ Ho∅. Note that gξ for each g∈ G are the fixed points of (non-pinched) contracting elements gg_0g^-1, so Gξ⊆Λ Ho. It thus follows by Lemma <ref> that [Λ Go]⊆ [Λ Ho]. This is a contradiction, so there exists f∈ G with [f^+] outside Λ Ho.
The above notion of the limit set can be further characterized if H contains a non-pinched contracting element. Inspired by the work <cit.>, let Λ H be the closure of the [·]-classes of fixed points of all non-pinched contracting elements in H. By <cit.>, these two notions of limit sets coincide up to taking [·]-closure: [Λ Ho]=[Λ H]. Moreover, it has the following desired property.
Assume that a non-elementary subgroup H contains a non-pinched contracting element. If Λ is an H-invariant closed subset in ∂ X, then Λ H⊆ [Λ]. In particular, if the partition [·] is trivial, then the limit set Λ H=[Λ H] is the minimal H-invariant closed subset in ∂ X.
We only need to show that Λ H is minimal: if Λ is any H-invariant closed subset, then [Λ H]⊆ [Λ]. Let f be a non-pinched contracting element in H. By assumption, H is non-elementary, so the conjugates of f in H are all non-pinched contracting elements with disjoint fixed points. Hence, Λ contains at least three [·]-classes of points, otherwise this contradicts the North-South dynamics of two non-pinched f_1, f_2 with disjoint fixed points. Choosing a point p∈Λ different from [f^±], we obtain f^np→ [f^+], hence f^+∈ [Λ] as Λ is a closed subset. This holds for every non-pinched f, thus [Λ H]⊆ [Λ] follows.
The following result refines the statement of <cit.> with a similar proof.
Let h, k∈ G be two non-pinched contracting elements so that [h^+]∩ [k^-]=∅. Let K be a closed subset in ∂ X so that K∩ ([h^+]∪ [k^-])=∅. Then for any n≫0, f_n:=h^nk^n are contracting elements in G so that [f_n^±]∩ K=∅, and f_n^-→ [k^-], and f_n^+→ [h^+].
By <cit.>, h and k have disjoint fixed points, so the axes Ax(h)=E(h)o, Ax(k)=E(k)o have τ-bounded projection for some τ>0. Set L:=d(o,h^no). Following the proof of <cit.>, consider the (L, τ)-admissible path
γ:=∪_i∈ℤ(h^nk^n)^i([o,h^no]· h^n[o,k^no])
where the associated contracting sets are given by {(h^nk^n)^ih^nAx(k): i∈ℤ}. For n≫ 0, γ is a contracting quasi-geodesic, so f_n:=h^nk^n is contracting. By Assumption (A), the contracting quasi-geodesic rays γ^+=∪_i∈ℕ(h^nk^n)^i([o,h^no]· h^n[o,k^no]) and γ^-=∪_i∈ℕ (h^nk^n)^-i([k^-no,o]· k^-n[o,h^-no]) tend to [f_n^+] and [f_n^-] respectively, which are closed [·]-classes.
It remains to show that [f_n^+]∩ K=∅. To this end, choose an open neighborhood U of [h^+] so that U∩ K=∅. We are going to prove [f_n^+]⊂ U for n≫ 0. The proof for the case [f_n^-]∩ K=∅ is similar by considering γ^- and its projection to Ax(k).
Observe that [f_n^±]∩ [h^±]=∅. Indeed, assume to the contrary that ξ∈ [f_n^+]∩ [h^-] for definiteness. Note that γ is a contracting quasi-geodesic with a common intersection [o,h^no] with Ax(h). So for any point z∈γ^+, we see that the shortest projection of z to Ax(h) is uniformly close to h^no. Similarly, any z∈γ^- projects into a uniform neighborhood of o. If x_m∈γ→ [ξ] and y_m∈Ax(h)→ [ξ], the contracting property implies that [x_m,y_m] intersects a uniform neighbourhood of [o,h^no] and thus a ball of bounded radius centered at o. This is a contradiction as [h^-] is non-pinched by assumption. Hence, the claim is proved.
As n→∞, f_n^+ tends to [h^+]. By Lemma <ref>, for sufficiently large n, [f_n^+] is contained in U. This shows
[f_n^+]∩ K=∅. The lemma is proved.
We now arrive to the main conclusion of the above discussion, which will be used in the proof of Theorem <ref>.
Assume that a non-elementary subgroup H<G contains a non-pinched contracting element so that [Λ H] is a proper subset of [Λ G]. Then for any o∈ X, there exist infinitely many contracting elements in G whose fixed points are pairwise distinct and outside Λ Ho.
Let f∈ G be a non-pinched contracting element provided by Corollary <ref> so that [f^+]∩Λ Ho=∅ and hence [f^+]∩ [Λ H]=∅. As H is an infinite group preserving [Λ H], we see that h[f^+] is outside Λ Ho for any h∈ H.
Let us choose two non-pinched contracting elements h, k∈ G
with h^+,k^-∉ [Λ H] and [h^+]∩ [k^-]=∅. By Lemma <ref>, f_n:=h^nk^n is a sequence of contracting elements with f_n^+→ [h^+] and f_n^-→ [k^-], and [f_n^±] lies outside Λ Ho for all n≫ 0. This completes the proof.
Finally, we compare Morse subgroups and subgroups of second type with respect to the horofunction boundary.
By <cit.>, the horofunction boundary is a convergence boundary with finite difference partition, where all boundary points are non-pinched. This implies that every contracting element is non-pinched on horofunction boundary.
Suppose that ∂ X is the horofunction boundary of X. Let H be a Morse subgroup of infinite index. Then [Λ H] is a proper subset of [Λ G].
By Lemma <ref>, there exists a non-pinched contracting element g∈ G such that the projection of Ho to Ax(g) is a bounded set, denoted by K. We claim that [Λ Ho] is disjoint from [g^±]. If not, let p∈Λ Ho∩ [g^±], so h_no→ p for some sequence h_n∈ H and g_no→ [p] for some sequence g_n∈⟨ g⟩. As [p] is non-pinched, [h_no, g_no] is exiting: d(o,[h_no,g_no])→∞. On the other hand, the C-contracting property of Ax(g) shows that [h_no, g_no] intersects the C-neighborhood of K. This is a contradiction, so [Λ H]⊊ [Λ G] follows.
§ DOUBLE COSET GROWTH FOR MORSE SUBGROUPS
This section is devoted to the proof of Theorem <ref>.
Suppose that H and K are Morse subgroups in G of infinite index. Let g∈ G be a contracting element and τ>0 given by Corollary <ref> such that
∀ h∈ H∪ K: diam(π_Ax(g)([o,ho]))≤τ
As G contains infinitely many pairwise independent contracting elements, let us fix a set F of three pairwise independent contracting elements which are all independent from g. Form the C-contracting system 𝔸={hAx(f): f∈ F∪{g}, h∈ G} for some C>0, which consists of all translated axes of F∪{g} under G. Let L,τ, Λ=Λ(C,τ) be as in Lemma <ref> for this F.
For a sequence of elements {g_i}_1≤ i≤ n in G, we define the path labelled by g_1g_2⋯ g_n as the concatenation of geodesic segments
[o,g_1o]· g_1[o,g_2o]·⋯ (g_1⋯ g_n-1)[o,g_no].
There exists M>0 such that the following holds.
For any t∈ G, there exist a, b∈ F such that the path
labelled by the word s:=g^M· 1· a· t· b· 1· g^M is (L,τ)-admissible, where the contracting subsets are provided by the appropriately translated axes of g, a and b.
By Lemma <ref>, for the pairs (g^M, t) and (t, g^M), there exist a, b∈ F such that g^M· a· t and t· b · g^M both label (L, τ)-admissible paths where L=d(o, g^Mo). By Definition <ref> of admissible paths, the concatenation of these two paths is still an (L, τ)-admissible path. The proof is complete.
Set r_0:=2M |g|+2max{|f|: f∈ F} and m:=♯ B_G(r-r_0)=gr_G(r-r_0). Let us list the elements
B_G(r-r_0)={t_1,t_2,⋯,t_m}.
By Lemma <ref>, for 1≤ i≤ m, the path labelled by the word
s_i:=g^Ma_it_ib_ig^M
for some a_i,b_i∈ F is an (L,τ)-admissible path. Thus,
||s_i|-|t_i||≤ r_0 for each s_i.
Recall that 𝔻(H,K)={HgK: g∈ G} is the collection of double cosets about H and K.
The map
B_G(r-r_0)→𝔻(H,K)
t_i↦ Hs_iK
is injective.
If Hs_iK=Hs_jK for i≠ j, then there exist h∈ H,k∈ K so that s_j=hs_ik. As s_i≠ s_j, at least one of h, k is nontrivial. For definiteness, assume that h≠ 1; the other case k≠ 1 is symmetric. Then the following word
W:=g^-Mb_j^-1t_j^-1a_j^-1· g^-Mhg^M· a_it_ib_i· g^Mk
represents the trivial element 1=s_j^-1hs_ik.
The goal is to prove that the path labelled by W is an (L,τ)-admissible path.
By Lemma <ref>, the paths labeled by s_j^-1, s_i and thus their subpaths are already (L,τ)-admissible paths. By Remark <ref>, we are left to check that g^-Mhg^M and g^Mk label (L,τ)-admissible paths, so their concatenation gives the desired (L,τ)-admissible path.
By Corollary <ref>, we have
max{diam(π_Ax(g)[o,ho]),diam(π_Ax(g)[o,h^-1o])}≤τ.
As H∩ E(g)={1}, we have hE(g)≠ E(g), and hence the word g^-Mhg^M labels an (L,τ)-admissible path. Similarly, we know that g^Mk labels an (L,τ)-admissible path. Hence, the above word W labels an (L,τ)-admissible path, which is thus a Λ-quasi-geodesic by Lemma <ref> for some Λ>1. By construction, this quasi-geodesic has length bounded below by 4|g^M|. On the other hand, the two endpoints are the same as W represents the trivial element. As a result,
4|g^M|≤Λ
contradicting the choice of M. Therefore, the map Π is injective.
Thus, the first statement of the theorem follows. If G has purely exponential growth, then ♯ B_G(r) ≍ω^r for any r>0, and the “moreover” statement follows.
§ DOUBLE COSET GROWTH FOR SUBGROUPS OF SECOND TYPE
The goal of this section is to prove Theorem <ref>. We proceed as in the proof of Theorem <ref>, where Corollary <ref> is replaced by the following lemma.
Suppose that H is a subgroup of G, and g is a contracting element with [g^-],[g^+] outside Λ H o for some o∈ X.
Then there exists a constant D>0 so that for any h∈ H, we have
diam(π_(g)([o,ho]))≤ D.
In particular, the intersection H∩ E(g) is finite.
Recall that Ax(g)=E(g)o is a C-contracting quasi-axis of g for some C>0.
Suppose to the contrary that there exists a sequence of h_i∈ H so that
lim_i→+∞diam(π_Ax(g)([o,h_io]))=+∞.
Denote γ_i:=[o,h_io]. Let y_i be the exit point of γ_i from N_C(Ax(g)): d(y_i,Ax(g))=C and d(y,Ax(g))>C for any y∈(y_i,h_io]_γ_i. The C-contracting property thus implies
diam(π_Ax(g)([y_i,h_io]))≤ C
so
d(y_i,π_Ax(g)(h_io))≤ d(y_i,Ax(g))+diam(π_Ax(g)([y_i,h_io]))≤2C.
On the one hand, as
d(o,y_i)=diam(γ_i∩ N_C(Ax(g)))→+∞, Assumption (A) implies that h_io tends to [g^+]. On the other hand, the accumulation points of h_io are contained in Λ Ho. This contradicts [g^+]∩Λ Ho=∅. The proof is then complete.
Since [Λ H],[Λ K] are proper subsets of [Λ G], there exists a contracting element g with g^-,g^+∉ [Λ H]∪ [Λ K] by Corollary <ref>.
By Lemma <ref> replacing Corollary <ref>, we again have the bounded projection bound as (<ref>):
∀ h∈ H∪ K: diam(π_Ax(g)([o,ho]))≤τ
The rest of the proof proceeds exactly as the proof of Theorem <ref>, which does not involve the Morseness of H,K but only the above bounded projection property.
The goal is to prove that the natural epimorphism H⋆ gKg^-1→Γ where Γ is generated by H, gKg^-1 is injective. To that end, consider an alternating word W=h_1 gk_1g^-1⋯ h_n gk_ng^-1, where h_i∈ H∖ 1, k_i∈ K∖ 1. By (<ref>) and setting L=d(o,go), the word W labels an (L, τ)-admissible path γ, which is thus a Λ-quasi-geodesic by Lemma <ref> for some Λ>1 independent of L. By choosing L large enough via high power of g, we see that the two endpoints of γ are distinct. Hence, W is non-trivial, completing the proof of the injectivity.
§ GENERIC 3-MANIFOLDS ARE HYPERBOLIC: THEOREM <REF>
Suppose that the mapping class group Mod(Σ_g) acts on the Teichmüller space (𝒯(Σ_g),d_𝒯).
Let H<Mod(Σ_g) be the handlebody subgroup, so by <cit.>,
Λ H≠ΛMod(Σ_g) as described in Subsection <ref>. Hence gr_H,H(r)≥δgr_G(r) for any r≥ 1 by Corollary <ref>.
§.§ Shadowing maps
Let 𝒞(Σ_g) be the curve graph of Σ_g where the vertex set consists of (isotopy classes of) essential simple closed curves and two vertices are adjacent if they can be represented as disjoint essential simple closed curves.
In proving the hyperbolicity of curve graph, Masur-Minsky <cit.> built a Lipschitz map up to bounded error (called coarsely Lipschitz)
ϕ_1:(𝒯,d_𝒯)⟶ (𝒞 (Σ_g), d_𝒞)
which sends a marked hyperbolic metric to one of the systoles of Σ_g (i.e. the shortest simple closed curves). The crucial property of this map is that any Teichmüller geodesic is sent to an unparameterized quasi-geodesic in the curve graph. It is known that a pseudo-Anosov element f acts by translation on a contracting Teichmüller geodesic γ in 𝒯(Σ_g) by <cit.>, and acts loxodromically on the curve graph by <cit.>. Thus, the image of γ under ϕ_1 is a reparametrized quasi-geodesic γ_1.
Let 𝒟(V) be the collection of essential simple closed curves on ∂ V=Σ_g which bound discs in the handlebody V. By Masur-Minsky <cit.>, the disk set 𝒟(V) is a quasi-convex subset in 𝒞(Σ_g), so {g·𝒟(V): g∈Mod(Σ_g)} forms a collection of uniformly quasi-convex subsets. The electrified disk complex (𝒟(Σ_g), d_𝒟) is obtained from 𝒞(Σ_g) by adding an edge between any two vertices in each Mod(Σ_g)-translate of 𝒟(V).
Endowed with length metric, the identification of vertex sets defines a coarsely Lipschitz map
ϕ_2: (𝒞(Σ_g), d_𝒞)
⟶(𝒟(Σ_g),d_𝒟)
By <cit.>, the electrified disk complex 𝒟(Σ_g) is a hyperbolic space. Furthermore, by <cit.>, a geodesic between two points x,y in 𝒞(Σ_g) is contained in a uniform finite neighborhood of a geodesic between x,y in 𝒟(Σ_g). In other words, ϕ_2 also sends geodesics to un-parametrized quasi-geodesics. Note that ϕ_2 identifies the vertex set of two complexes but 𝒟(Σ_g) contains more edges.
In summary, the composition map Φ:=ϕ_2∘ϕ_1: (𝒯,d_𝒯)⟶ (𝒟(Σ_g),d_𝒟) is
a coarsely Lipschitz map, sending a Teichmüller geodesic to an unparameterized λ-quasi-geodesic in the electrified disk complex. See more details in <cit.>.
Since f acts as a loxodromic element on both complexes, it makes definite progress in both of them. Let 𝕏={gAx(f): g∈Mod(Σ_g)}, where Ax(f) is the Teichmüller axis.
Let f be a pseudo-Anosov element that acts loxodromically on 𝒟(Σ_g). Then there exist L=L(f) and κ=κ(f)>0 with the following property. Let γ be any geodesic segment with endpoints in Go, and 𝕏(γ) :={X∈𝕏: γ∩ N_C(X)>L}. Then
d_𝒟(γ_-,γ_+)> κ∑_X∈𝕏(γ)ℓ(N_C(X)∩γ)
where ℓ denotes the length of paths in the Teichmüller metric.
Let γ_1 be the image of Ax(f) by ϕ_1. This is a (parametrized) quasi-geodesic in 𝒞(Σ_g) on which f acts by translation. To be precise, there exist λ, c such that for any x, y∈Ax(f), we have
λ^-1 d_𝒞(ϕ_1(x),ϕ_1(y))
-c≤ d_𝒯 (x,y) ≤λ d_𝒞(ϕ_1(x),ϕ_1(y))
+c
Note that the constants λ, c are uniform independent of f.
Similarly, let γ_2 be the image of γ_1 under ϕ_2 which is a (parametrized) quasi-geodesic.
Since 𝕏 has bounded intersection, the images under Φ=ϕ_1∘ϕ_2 of any two distinct Teichmüller axes in 𝕏 have bounded intersection as well, while the image of each Teichmüller axis is a parametrized quasi-geodesic. If L is chosen big enough, the image of N_C(X)∩γ is long enough compared with the bounded overlap. Hence, the desired lower bound follows in a straightforward way.
By <cit.>, if a pseudo-Anosov element f∈Mod(Σ_g) has neither its stable nor unstable laminations in the closure of 𝒟(V), then f acts loxodromically on 𝒟(Σ_g). Let us now fix such a pseudo-Anosov element f.
§.§ Linear growth of Heegaard distance: Proposition <ref>
This subsection is devoted to the proof of Proposition <ref>, the key ingredient of Theorem <ref>.
To that end, by Lemma <ref>, we only need to check those double cosets Hφ H with a representative φ∈𝒲_ϵ,M,f^θ,L for some fixed constants ϵ,M,0<θ≤1/2,L=L(θ), where f is chosen below.
The set of elements φ∈Mod(Σ_g) so that d_𝒟(Φ(o),φΦ(o))>κ d(o,φ o) is exponentially generic.
As the set of barrier-free elements is growth tight by Proposition <ref>.1, we can assume that [o,φ o] contains (r,f)-barriers. Let 𝔹 be the set of all maximal subsegments that are contained in N_r(X_i) for some X_i∈𝕏. Choose a pseudo-Anosov element f satisfying Lemma <ref> large enough so that
d_𝒯(o,fo)>2L+4D+8r
where D is the bounded intersection constant of 𝕏. As each β∈𝔹 is an (ϵ, f)-barrier, we have ℓ(β)>d_𝒯(o,fo)-2r. By the D-bounded intersection, any two distinct segments β_1,β_2 in 𝔹 have an overlap at most D≤ 1/4 min{ℓ(β_1),ℓ(β_2)}. Thus
∑_β∈𝔹ℓ(β) ≤ 4·ℓ(∪_β∈𝔹β)
where ℓ(·) denotes the length in Teichmüller metric.
Let 𝕂_0 be the set of components in the complement [o,φ o]∖∪𝔹 to the union of 𝔹. By construction, each segment α in 𝕂_0 is (r,f)-barrier-free. Let 𝕂={α∈𝕂_0: ℓ(α)>L}. If
∑_α∈𝕂ℓ(α) ≥θ d_𝒯(o,φ o)
then such elements ϕ are contained in 𝒱_ϵ,M,f^θ,L, which is a growth tight subset by Proposition <ref>.2. Up to ignoring this subset, we can assume that ∑_α∈𝕂ℓ(α) < θ d_𝒯(o,φ o), so
∑_β∈𝔹ℓ(β) +∑_α∈𝕂_0 ∖𝕂ℓ(α) ≥ (1-θ) d_𝒯(o,φ o)
We need to analyze the contribution of 𝕂_0∖𝕂 which consists of (r,f)-barrier-free segments α of length at most L. Each such α with length ≤ L must be adjacent to an (ϵ, f)-barrier β∈𝔹 with length ≥ 2L. Thus,
∑_α∈𝕂_0 ∖𝕂ℓ(α) ≤1/2∑_β∈𝔹ℓ(β)
From the above estimates, we obtain that ℓ(∪_β∈𝔹β) ≥ (1-θ)d_𝒯(o,φ o)/6. The conclusion then follows from Lemma <ref>.
By definition, the Hempel distance d_H(V_1∪_φ V_2) of a Heegaard splitting V_1∪_φ V_2 is bounded below by d_𝒟(Φ(o), φ(Φ(o))) in 𝒞_𝒟(Σ_g).
Consequently, the set
ℋ={Hφ H: d_H(V_1∪_φ V_2)≥κ d_𝒯(o,Hφ Ho)}
is exponentially generic in {Hφ H:φ∈Mod(Σ_g)}.
§.§ Completion of the proof of Theorem <ref>
By work of Hempel <cit.> and Perelman's proof of Thurston's geometrization conjecture, the 3-manifold M_φ is hyperbolic if the Heegaard distance d_H(V_1∪_φ V_2)≥ 3. The main result of <cit.> says that, if d_H(V_1∪_φ V_2)>2g, any two genus-g Heegaard splittings must be isotopic, and the Heegaard genus of M_φ is equal to g.
Thus, if M_φ and M_φ' are homeomorphic, then Hφ H=Hφ'H and the Heegaard genus of M_φ is g.
As a result, the map
{Hφ H:φ∈Mod(Σ_g)} ⟶{M_φ :φ∈Mod(Σ_g)}
Hφ H ⟼ M_φ
is injective on the exponentially generic collection ℋ of double cosets, with the image being contained in the following
Γ_g={M_φ :M_φ is hyperbolic with Heegaard genus g}
Hence, we have proved that Γ_g is an exponentially generic subset of Δ_g, endowed with the geometric complexity.
|
http://arxiv.org/abs/2307.07488v1 | 20230714171931 | Useful Circuit Analogies to Model THz Field Effect Transistors | [
"Adam Gleichman",
"Kindred Griffis",
"Sergey V. Baryshev"
] | physics.app-ph | [
"physics.app-ph",
"physics.plasm-ph"
] |
Department of Electrical and Computer Engineering, Michigan State University, 428 S. Shaw Ln., East
Lansing, MI 48824, USA
Department of Electrical and Computer Engineering, Michigan State University, 428 S. Shaw Ln., East
Lansing, MI 48824, USA
[email protected]
Department of Electrical and Computer Engineering, Michigan State University, 428 S. Shaw Ln., East
Lansing, MI 48824, USA
Department of Chemical Engineering and Materials Science, Michigan State University, 428 S. Shaw Ln., East
Lansing, MI 48824, USA
The electron fluid model in plasmonic field effect transistor (FET) operation is related to the behavior of a radio-frequency (RF) cavity. This new understanding led to finding the relationships between physical device parameters and equivalent circuit components in traditional parallel resistor, inductor, and capacitor (RLC) and transmission models for cavity structures. Verification of these models is performed using PSpice to simulate the frequency dependent voltage output and compare with analytical equations for the drain potential as a function of frequency.
Useful Circuit Analogies to Model THz Field Effect Transistors
Sergey V. Baryshev
August 12, 2023
==============================================================
§ INTRODUCTION
Moore's law predicted that the transistor count on an integrated circuit would double every two years at constant area, power consumption, and cost<cit.><cit.>.
This progression toward greater availability of transistors in integrated circuits was fostered by advances such as lithography.
The resulting abundance of cheaper transistors enabled rapid growth in automated control systems, data processing, and portable communication systems, at costs suitable for broad consumer use <cit.>.
Moore's law is expected to end at some point in the future, with current projections around 2025 <cit.>,
although when Moore first made his prediction it was thought to hold for only 20 years<cit.><cit.>.
Many advancements delayed the slowing of transistor development, but ultimately there are limits set by material properties and the physics of energy transport <cit.>.
This situation raises the question: "what is the next technological advancement for computing devices?"
Microwave devices could be the next advancement in the race for better transistors, using the physics of the device itself to achieve better performance.
Dyakonov and Shur proposed that field-effect transistors (FETs) can have a new operation mode that utilizes a relationship between radiation at the gate and steady current across the channel that creates Langmuir waves in the channel <cit.>.
They describe an instability that arises when a short-channel FET operates with a large number of electrons travelling in the channel, which leads to many electron–electron collisions<cit.>.
The system is modelled as a 2D electron gas, and the instability exists for electrons that are slower than their own saturation velocity<cit.><cit.>.
The high concentration of electrons in the channel leads to many electron–electron collisions, and fluid choking occurs at the boundary where the channel meets the drain <cit.><cit.>.
The electrons accelerate across the channel in subsonic flow until their velocity reaches the speed of sound at the boundary of the drain <cit.>.
This constant electron velocity at the drain means that, to an outside observer, the drain carries a constant DC current <cit.>.
The high concentration of electrons keeps their group velocity below the saturation velocity, but the phase velocity of the Langmuir waves can greatly exceed the electron drift velocity, so the response is not limited by the electron transit time <cit.>.
Using the phase velocity of the Langmuir waves instead of the group velocity of the electrons leads to terahertz-level radiation.
This output terahertz frequency at the gate can be used in place of the traditional switching speed of FETs.
This instability relationship depends on the volume of the channel structure<cit.>.
The DC bias potential controls the depth of the channel, which in turn sets the excitation frequency of the channel that creates current across the channel <cit.><cit.>.
Creating models of devices operating in the plasmonic regime is difficult.
Previous models<cit.> were exceptionally complex or depended on empirical data <cit.>. In Ref. [liu_compact_2019] a so-called MOSFET segmentation concept was introduced. Because each segment was on a nm to sub-nm length scale, the model had to be solved inside the EKV (Enz, Krummenacher, Vittoz) framework, making the segmentation concept dependent on dozens of free parameters (previously optimized only for classical operation of short-channel MOSFETs) and hence difficult to apply in practice. Conceptually simple and easy-to-interpret THz plasma FET models, if created, could play a vital role in enabling the development of architectures and ICs for future microelectronics.
In this paper, we show that a plasmonic FET, behaving as a resonating device due to standing waves that arise from the boundary conditions at the source and drain contacts, as illustrated in Figure <ref>, can be treated as a classical radio-frequency (RF) cavity. A small-signal parallel RLC circuit model capturing the first resonant mode and a transmission line model capturing the higher order modes are introduced and shown to be in excellent agreement with the analytical model of Si THz FET operation. Finally, simple symbolic PSpice codes are developed and presented in the Appendices at the end of this paper. All PSpice parameters are physical and can be calculated directly from the geometry and material properties of the channel and gate.
§ THEORIES AND EQUATIONS
The signal between the gate and the source is a combination of an AC signal and DC bias potential shown in Figure <ref> <cit.>.
In Figure <ref>, a voltage controlled current source (VCCS) with the gate's resistance controls the gain of the device at the drain. The VCCS needs a transconductance value in PSpice, given by Gain = g_m · R_g.
To find the transconductance, we start from the equation for the current at the drain,
i_D = K(V_AC + V_DC - V_T)^2.
which is then expanded in terms of the DC potential between the gate and source, V_DC, and the threshold potential, V_T, as
i_D = K(V_DC - V_T)^2 + 2KV_AC[V_DC - V_T ] + KV^2_AC.
The FET amplifies the AC signal through the standing-wave mechanism in the device <cit.><cit.>. The DC bias matters only through the depth of the depletion channel, which means that we can neglect the purely DC term in Equation (2) <cit.>.
The transconductance is defined in terms of the gain constant, K, which allows the system to be solved in terms of circuit components instead of an arbitrary gain:
g_m = 2K(V_DC - V_T).
In the FET of <cit.>, the drain is disconnected from the source; for simplicity we treat the source as grounded, as demonstrated in Figure <ref>.
For practical purposes there will usually be some DC potential at the source, but it is also applied to the gate, which means that the potential difference between the gate and source is constant.
Since an open circuit is placed at the drain, the drain current equation in (2) needs to be adjusted because all DC terms are now 0.
So
i_D = KV_AC^2 = g_m/2(V_DC - V_T)V_AC^2
Multiplying Equation (4) by the resistance of the device channel allows direct evaluation of the potential difference between the drain and the source.
Using the root mean square of the AC potential from the source,
Δ V = V_AC^2/4(V_DC - V_T)g_m R
The VCCS with a parallel resistor matches the gain of the FET detector.
The gain is in terms of the input AC source amplitude, which allows PSpice to correctly model the system in Equation (5).
Assuming the entire channel is gated, the total capacitance in the channel is a series connection between the capacitance due to the gate insulator and the capacitance in the channel.
First, the capacitance from the insulation at the gate is given by <cit.>
C_i = LWϵ_0ϵ_I/t_ox.
The capacitance due to the depletion region in the channel of the FET depends on the depletion depth, which must first be solved for <cit.>
d_d = √(2ϵ_0ϵ_S ψ_S/qN_b).
ψ_S is the surface potential applied to the channel defined as <cit.>
ψ_S = 2V_THlnN_b/n_i.
Here ϵ_S is the relative permittivity of the substrate, q is the charge of the electron (in coulombs, C), and N_b is the concentration of donor electrons in the silicon channel; n_i is the intrinsic carrier concentration of the channel. Both concentrations are defined per unit volume, usually given in cm^-3. Equations (8) and (13) require the thermal potential of the transistor, V_TH, which is given by
V_TH = kT/q.
Here T is the temperature in kelvin, k is the Boltzmann constant, and q is the charge of an electron.
The depletion channel capacitance is therefore <cit.>
C_d = LWϵ_0ϵ_S/d_d.
The total capacitance, C_tot, of the channel is then the series combination of the capacitance of the depletion region, C_d, with the capacitance of the gate insulator, C_i,<cit.>
C_tot = C_iC_d/C_i + C_d.
Calculation of the Drude inductance is performed by solving for the electron sheet density in terms of the ideality constant of the FET, η <cit.>
η = 1 + C_d / C_i.
The expression for the initial sheet electron density is given by <cit.>
n_0 = η V_TH C_OX/2q.
Where the capacitance of the oxide per unit area is C_OX = ϵ_I ϵ_0/t_ox.
The concentration of electrons in the channel per unit area once the thermal effects are accounted for is <cit.>
n_s = n_0 ln [1 + 0.5exp(V_DC - V_T/η V_TH)].
This concentration of electrons per unit area in the channel will be used to solve for the Drude inductance, L_drude, because the concentration affects the number of electron–electron collisions present in the system. The Drude inductance equation is as follows <cit.>
L_drude = L · m_eff· m_0/q^2α^2 n_s W.
From here, the resistance is calculated in order to model the leakage between the gate and the drain, using the quality factor relation R = Q/(ω_0 C_tot) between the fundamental mode, the total capacitance of the channel, and the quality factor. This ensures that the quality factor of the new models still matches the results in <cit.>, because the bandwidth of the system depends on the quality factor through Q = BW^-1.
The transconductance for the model is given by <cit.>
g_m = (W/L) μ_n C_OX (V_DC - V_T).
§ MODEL VALIDATION
To validate that the lumped and transmission line models agree with the fluid plasmonic model, we considered a silicon channel (ϵ_S=11.9 from <cit.>) with a 3D donor concentration of N_b = 10 × 10^17 cm^-3 and an intrinsic concentration of silicon of n_i = 10^10 cm^-3 from <cit.>, with a silicon oxide insulator (ϵ_I=3.9 from <cit.>) that has a thickness of t_ox=4.315.
The mobility of the substrate is μ = 0.1 m^2/(V·s) and the effective mass used is 0.19 (from <cit.>).
Dimensions of the device are a length of L=25 and a width of W=5 μm.
A DC potential of V_DC = 0.6 V is applied between the gate and source, with a threshold potential of V_T = 0.28 V, and the applied AC signal amplitude is V_AC=100.
All calculations were made with the assumption of room temperature operation at T = 300 K.
The RLC solution for this system was a transconductance of g_m = 12.7, L_drude = 8.352 pH, C_tot = 9.864659 × 10^-17 F, and a resistance of 1800 Ω.
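The chain from physical parameters to circuit components in Equations (6)–(16) can be scripted directly. The following Python sketch is only an illustration of that workflow rather than part of the original PSpice/MATLAB toolchain; since some units are omitted above, the channel length, oxide thickness, and the prefactor α in Equation (15) are assumptions chosen here for demonstration, so the outputs are not expected to match the quoted values exactly.

import numpy as np

# Physical constants (SI units)
q, k, eps0, m0 = 1.602e-19, 1.381e-23, 8.854e-12, 9.109e-31

# Device and material parameters from the text; L and t_ox units are assumed (nm)
eps_S, eps_I = 11.9, 3.9                  # relative permittivities of Si and SiO2
N_b, n_i = 1e18 * 1e6, 1e10 * 1e6         # donor and intrinsic concentrations, m^-3
L, W, t_ox = 25e-9, 5e-6, 4.315e-9        # channel length, width, oxide thickness
m_eff, T = 0.19, 300.0
V_DC, V_T = 0.6, 0.28

V_TH = k * T / q                          # thermal potential, Eq. (9)
C_i = L * W * eps0 * eps_I / t_ox         # gate insulator capacitance, Eq. (6)
psi_S = 2 * V_TH * np.log(N_b / n_i)      # surface potential, Eq. (8)
d_d = np.sqrt(2 * eps0 * eps_S * psi_S / (q * N_b))   # depletion depth, Eq. (7)
C_d = L * W * eps0 * eps_S / d_d          # depletion capacitance, Eq. (10)
C_tot = C_i * C_d / (C_i + C_d)           # series combination, Eq. (11)

ideality = 1 + C_d / C_i                  # ideality factor, Eq. (12)
C_OX = eps0 * eps_I / t_ox                # oxide capacitance per unit area
n_0 = ideality * V_TH * C_OX / (2 * q)    # Eq. (13)
n_s = n_0 * np.log(1 + 0.5 * np.exp((V_DC - V_T) / (ideality * V_TH)))  # Eq. (14)

alpha = 1.0                               # prefactor in Eq. (15); assumed value
L_drude = L * m_eff * m0 / (q**2 * alpha**2 * n_s * W)  # Eq. (15)

f0 = 1 / (2 * np.pi * np.sqrt(L_drude * C_tot))   # fundamental resonance of the RLC tank
V_AC = 0.01
V_in = V_AC**2 / (4 * (V_DC - V_T))       # drive amplitude used in the PSpice decks (7.8125e-5 V)
print(C_tot, L_drude, f0, V_in)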
The corresponding PSpice file for the RLC simulation is included in appendix 1.
The parallel RLC model is simulated in PSpice, which exported the data of the drain-source potential (Δ V) as a function of frequency (with respect to V_AC) into a comma-separated values format for MATLAB to import.
MATLAB is used to compare the PSpice models with the fluid model as shown in Figures <ref>, <ref>, and <ref>.
The parallel RLC model is successful in replicating the fundamental resonant mode, but fails to create the higher order modes from the fluid model.
The lack of the higher order modes motivates the use of a lossless transmission line component in PSpice to generate them; Figure <ref> shows the circuit model used (the file is in Appendix 2).
A lossless half-wave transmission line is shown to replicate the fundamental mode together with the higher order modes <cit.>.
An open-circuited half-wave transmission line is ordinarily equivalent to a passive parallel RLC model; however, this transmission line model requires the same resistor connected as a load so that the gain from the transconductance is still accounted for in the model <cit.>.
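The ideal-line parameters listed in Appendix 2 are consistent with taking Z_0 = √(L/C) and f_0 = 1/(2π√(LC)) from the lumped values. The short Python sketch below is an illustration rather than part of the original workflow, and it assumes the frequency in Appendix 2 is expressed in THz.

import math

L_drude = 8.352e-12        # H, from the RLC solution
C_tot = 9.86465905084e-17  # F, from the RLC solution

Z0 = math.sqrt(L_drude / C_tot)                       # characteristic impedance of the line
f0 = 1 / (2 * math.pi * math.sqrt(L_drude * C_tot))   # center frequency

print(f"Z0 = {Z0:.3f} ohm")        # ~290.97 ohm, matching the T-line element in Appendix 2
print(f"f0 = {f0 / 1e12:.4f} THz") # ~5.5448 THz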
Name | Symbol | Value | Units
3D Concentration of Donors | N_b | 10×10^17 | cm^-3
Intrinsic Concentration of Si | n_i | 1×10^10 | cm^-3
Channel Length | L | 25 |
Channel Width | W | 5 | μm
Insulator Thickness | t_ox | 4.315 |
Mobility of the Channel | μ | 0.1 | m^2/(V·s)
Threshold Potential | V_T | 0.28 | V
DC Bias Potential | V_DC | 0.6 | V
Effective Mass of Si | m_eff | 0.19 | –
Relative Permittivity of Si | ϵ_S | 11.9 | –
Relative Permittivity of SiO2 | ϵ_I | 3.9 | –
Temperature | T | 300 | K
In Figure <ref>, the result of the transmission line model (file in Appendix 2) agrees well with the fundamental mode, showing a consistent quality factor, while also producing the higher order modes that are absent from the parallel RLC model in Figures <ref> and <ref>. This transmission line model increasingly shifts the center frequency of the higher order modes, which also depends on the transconductance of the VCCS.
§ CONCLUSION
The results of this paper show that cavity-inspired circuit models are promising for small-signal, ultra-high-frequency design. Our results illustrate that the cavity behavior in plasmonic FET operation leads to simple and effective circuit models for small-signal evaluation, with only five parameters solved from physical dimensions.
§ APPENDIX 1: PARALLEL RLC .CIR FILE
PARALLEL RLC
* V_in = V^2_AC / (4 * (V_DC - V_T) ), V_AC = 0.01 V, V_DC - V_T = 0.32 V
* Element identifiers (VIN, G1, L1, R1, C1) follow standard SPICE prefixes; they were omitted in this listing.
VIN 1 0 AC 7.8125e-5
G1 3 0 1 0 12.7
L1 3 0 8.352e-12
R1 3 0 1800
C1 3 0 9.86465905084e-17
.AC LIN 5000 1T 30T ; sweep assumed linear over 1-30 THz (units omitted in the source)
.END
author's note: Some of the gain scaling was multiplied to the AC input potential instead of the transconductance term because solving a number and adjusting the input voltage was easier in PSPICE than trying to put transconductance as a multiple of the AC voltage divided by the DC potential. This is also done in the transmission line file. The corresponding figure to this model layout is Figure <ref>.
§ APPENDIX 2: TRANSMISSION LINE .CIR FILE
TRANSMISSION LINE MODEL
* V_in = V^2_AC / (4 * (V_DC - V_T) ), V_AC = 0.01 V, V_DC - V_T = 0.32 V
* Element identifiers (VIN, G1, T1, RL) follow standard SPICE prefixes; they were omitted in this listing.
VIN 1 0 AC 7.8125e-5
G1 2 0 1 0 0.012923082392042
* This is the transmission line element. Impedance and center frequency are solved from lumped RLC components.
T1 2 0 4 0 Z0=290.974 F=5.544774T ; parameter names and THz unit assumed
RL 4 0 1800
.AC LIN 5000 1T 30T ; sweep assumed linear over 1-30 THz (units omitted in the source)
.END
author's note: Some of the gain scaling was multiplied to the AC input potential instead of the transconductance term because solving a number and adjusting the input voltage was easier in PSPICE than trying to put transconductance as a multiple of the AC voltage divided by the DC potential. This is also done in the parallel RLC line file. The corresponding figure to this model layout is Figure <ref>.
|
http://arxiv.org/abs/2307.04526v2 | 20230710124959 | Self Expanding Neural Networks | [
"Rupert Mitchell",
"Martin Mundt",
"Kristian Kersting"
] | cs.LG | [
"cs.LG",
"I.2.6"
] |
Self Expanding Neural Networks
Rupert Mitchell, Martin Mundt, Kristian Kersting
================================================
The results of
training a neural network are heavily dependent on the architecture chosen;
and even a modification of only the size of the network,
however small,
typically involves restarting
the training process.
In contrast to this,
we begin training with a small architecture,
only increase its capacity as necessary
for the problem,
and avoid
interfering with
previous optimization
while doing so.
We thereby introduce
a natural gradient based approach
which intuitively expands both the width and depth of a neural network
when this is likely to substantially reduce the hypothetical converged training loss.
We prove an upper bound on the “rate” at which neurons are added,
and a computationally cheap lower bound on the expansion score.
We illustrate the benefits of such Self-Expanding Neural Networks in both classification and regression problems,
including those where the appropriate architecture size is substantially uncertain a priori.
§ INTRODUCTION
Correctly tailoring a model's capacity to an arbitrary task is extremely challenging, especially when the latter is not yet well studied.
This challenge can be side stepped by choosing an architecture which is so large that a poor solution is nevertheless unlikely to occur <cit.>, e.g. due to the double-descent phenomenon. However, since it is hard to predict what size would be large enough this will often in practice entail using a massively overparameterized network <cit.> <cit.> <cit.>.
Surely it is possible to detect that the existing capacity of the network is insufficient and add more neurons when and where they are needed?
In fact, biological neural networks are grown by adding new neurons to the existing network through the process of neurogenesis.
The popular review <cit.> discusses the relatively recent discovery that this process is still active in the adult mammalian brain
<cit.>,
and <cit.> <cit.> identify it as a key ability underpinning lifelong learning.
Thus inspired,
we propose an analogous process for adding both neurons and layers to an artificial neural network during training,
based on a local notion of “sufficient capacity” derived from first principles in close relation to the natural gradient <cit.> <cit.>.
Any method for artificial neurogenesis
must answer three questions to avoid the problem of locally insufficient capacity <cit.>.
It must determine when the current capacity is insufficient and that neuron(s) must therefore be added.
It must identify where these neurons should be introduced.
Finally, it must choose what initialization is appropriate for these neurons.
These questions, if they are addressed at all in the literature, are normally addressed piecemeal or in ad-hoc ways. For example, very few methods address the question of what <cit.> <cit.>.
When is answered either by assuming predetermined schedules <cit.> <cit.>,
or by waiting for the training loss to converge <cit.> <cit.>,
neither of which are informative about where.
“Whenever you parry, hit, spring, ..., you must cut the enemy in the same movement.”
[
Miyamoto Musashi, The Book of Five Rings (circa 1645)
]
Our metaphorical enemy is not a loss which is momentarily poor, or even one which is converging to a poor value:
it is a deficiency in our parameterization such that the optimizer cannot make progress.
We argue that by inspecting the degrees of freedom of the optimizer in function space,
one may not only strike faster in answer to when, but answer where and what in the same stroke.
From a mathematical perspective, these degrees of freedom available to the optimizer
are given by the image of the parameter space under the Jacobian,
and the gradient of the loss in function space will not in general lie in this subspace.
It is however possible to project this derivative onto that subspace,
and the natural gradient, F^-1 g,
is exactly the change in parameters which changes the function according to this projection.
In order to measure the size of that projection for a given parameterization,
we introduce the natural expansion score η = g^T F^-1 g, where g is the gradient of the loss with respect to the parameters and F is the Fisher information matrix.
Specifically, the capacity of a neural network is locally insufficient when this score is small for the current parameterization. We therefore add neurons when this substantially increases η, where they will maximally increase η, and choose what initialization to use for the new parameters according to how it increases η. To summarize, our contributions are:
* We introduce the natural expansion score which measures the increase in rate of loss reduction under natural gradient descent when width or depth is added to a neural network.
* We show how such additions may be made during training
without altering the function represented by the network.
Our neurogenesis inspired Self-Expanding Neural Networks (SENN) thus avoid interfering with previous optimization or requiring restarts of training.
* We prove that the number of neurons added simultaneously in SENN is bounded. We further introduce a computationally efficient approximation as a provable lower bound to increases in natural expansion score resulting from additions.
* We demonstrate SENN's effectiveness for regression and classification.
In the remainder of this paper, we proceed as follows:
In section <ref> we summarize existing growth methods,
in section <ref> we then describe SENN,
and in section <ref> we illustrate its operation in practice.
§ RELATED METHODS FOR GROWING NEURAL NETWORKS
The problem of adding nodes to neural networks during training has been under consideration for over 30 years (e.g. Dynamic Node Creation (DyNC) <cit.>),
but remains substantially unsolved.
There does not seem to exist a unified answer to when, where, and what,
as we summarize in table <ref>.
Most methods cannot add depth and sideline at least one of these questions.
Inspired by neurogenesis like SENN, <cit.> examine the case of representational learning with stacked autoencoders,
where they exploit local reconstruction error to determine when and where to add neurons.
Due to their more general setting,
DyNC,
Progressive NNs (PrNNs) <cit.> and Dynamically Expandable NNs (DENNs) <cit.>
use simple training loss convergence or even task boundaries to answer when, but must then fall back on ad-hoc preset decisions for where.
(However, DENNs use subsequent pruning to mitigate the excess capacity introduced by the preset.)
All four methods freeze old neurons or continue training from their present values,
but randomly initialize new neurons in answer to what.
While ActiveNAS <cit.> can add both width and depth,
it does so by completely restarting training with a fresh initialization of the whole network after every modification.
It then waits for convergence,
and uses preset answers to where, similar to the previous methods.
The final cluster of three methods all aim to improve on random initialization as an answer to what.
Splitting Steepest Descent (SSD) <cit.> and Firefly <cit.> make small changes to the existing function and answer where by optimizing the consequent loss reduction.
The former answers when by waiting for convergence and examining the loss, whereas the latter simply adds more capacity every N epochs.
Gradmax <cit.> is the closest to SENN in spirit,
but is based on vanilla rather than natural gradient descent. More importantly, potential extensions of the method to the when and where questions are mentioned briefly and their investigation deferred to future work.
All three of these latter methods are only able to avoid redundancy of added neurons with existing neurons to the extent that the network is already converged. Of these three, only GradMax completely avoids changing the overall function.
In contrast, SENN provides a monolithic answer to all three questions via the natural expansion score.
§ SELF-EXPANDING NEURAL NETWORKS
To provide a cohesive answer to when, where, and what with Self-Expanding Neural Networks,
we start with the definition of the natural expansion score as the foundation:
The natural expansion score η = g^T F^-1 g is given by the inner product of the natural gradient F^-1 g with the gradient g.
With this definition
we will describe how we add capacity without interfering with the existing optimized parameters
in section <ref>.
We then in section <ref>
give an intuitive account of what our score η measures,
and why we use this to decide when to add capacity.
Section <ref> gives a more mathematically precise account of the meaning of η,
and what this says about which initializations should be used for new capacity.
Section <ref> extends the argument of <ref> to deciding where new capacity should be added and whether it should be depth or width,
allowing us to put the ingredients of SENN together and summarize this combination.
Finally, sections <ref> and <ref> cover the practical questions of convergence guarantees and computational efficiency respectively.
§.§ How to add: expanding without changing the overall function
In order to explain how to add without changing the overall function,
we will consider the illustration in figure <ref>.
This shows a perceptron with two hidden layers, each with three neurons.
The number of neurons in a hidden layer may be increased by introducing a new copy of the activation function σ_p
and connecting it to the neurons of the preceding layer with some linear transform W_p.
As shown on the left of the figure, we connect the new neuron to the subsequent layer (in this case the output layer)
with a linear transform initialized to zero.
In doing so, we guarantee that we will not perturb the function specified by the existing parameters.
Although W_p will initially receive zeroed gradients since the output transform is zero,
this latter transform will immediately receive non-zero gradients and thereby become non-zero.
The new neuron may thus be used in future optimization.
In addition to width expansion, we now consider inserting an entirely new layer,
as shown on the right of figure <ref>.
In essence, a particular linear transform, W_2 in the figure, is replaced with a single layer perceptron.
To this end, we assume our nonlinearity σ_p to be parameterised, and there to exist a choice of those parameters such that σ_p is the identity.
If we require the initial linear transform W_p of the inserted perceptron to be invertible (but otherwise arbitrary),
then we may choose the output linear transform of the perceptron to be the matrix product W_2 W_p^-1.
With these choices made, the inserted perceptron is equivalent to the linear transform W_2 it replaces,
and the overall parameterized function once again remains unchanged.
We thus have the first ingredient of SENN:
SENN Ingredient 1: How to add more capacity without changing the overall function.
We add proposed neurons p to layer i
by concatenation along the ith hidden dimension:
(0 ⊎ W_i+1) ∘ (σ_p ⊎ σ_i) ∘ (W_p ⊎ W_i) = W_i+1 ∘ σ_i ∘ W_i,
and initialize the output weights of p to zero.
We insert a new layer q by replacing some linear transform W_i
with the composition (W_i W_q^-1) ∘ (σ_q = id) ∘ W_q,
where W_q is invertible and σ_q is initialized to be the identity.
We must therefore choose a suitable parameterized activation function.
Rational activation functions satisfy the above conditions and were shown to obtain good real world performance <cit.>.
We use the simplified parameterization
σ_w(x) = α x + (β + γ x)/(1+x^2),
where w = {α, β, γ} are the three parameters of σ,
and setting w = { 1, 0, 0 } results in the identity function, as required.
Since this parameter count is small, we do not share the activation function weights within our layers.
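As a minimal sketch of these two operations (not the authors' implementation), the following Python snippet adds a width unit with zeroed outgoing weights and inserts an identity-initialized layer into a toy multilayer perceptron, checking in both cases that the overall function is unchanged.

import numpy as np

def sigma(x, w):
    # Rational activation sigma_w(x) = a*x + (b + c*x)/(1 + x^2); w = (1, 0, 0) is the identity.
    a, b, c = w
    return a * x + (b + c * x) / (1 + x ** 2)

def forward(x, Ws, acts):
    # Plain MLP without biases: alternate linear maps and elementwise rational activations.
    h = x
    for W, w in zip(Ws[:-1], acts):
        h = sigma(h @ W.T, w)
    return h @ Ws[-1].T

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
W1, W2 = rng.normal(size=(5, 3)), rng.normal(size=(2, 5))
Ws, acts = [W1, W2], [np.array([1.0, 0.3, -0.2])]
y = forward(x, Ws, acts)

# Width: append one neuron to the hidden layer, with zeroed outgoing weights.
W1_new = np.vstack([W1, rng.normal(size=(1, 3))])   # arbitrary incoming weights W_p
W2_new = np.hstack([W2, np.zeros((2, 1))])          # zero output weights for the new neuron
assert np.allclose(forward(x, [W1_new, W2_new], acts), y)

# Depth: replace W2 by (W2 Wq^-1) o identity-activation o Wq, with Wq invertible.
Wq = rng.normal(size=(5, 5)) + 5 * np.eye(5)        # invertible initial transform
Ws_deep = [W1, Wq, W2 @ np.linalg.inv(Wq)]
acts_deep = [acts[0], np.array([1.0, 0.0, 0.0])]    # new activation starts as the identity
assert np.allclose(forward(x, Ws_deep, acts_deep), y)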
§.§ When to add: deciding whether more capacity is useful
Having decided how to add,
perhaps the most natural way to evaluate the utility of making
some change to the parameterization is to ask what immediate effect this has on the total loss.
However, we cannot do this as we have assumed the overall function to remain unaltered.
We must therefore consider additional information such as the gradients of the function.
Specifically, one can favor adding neurons which maximally increase the Euclidean norm of the gradient ||g||_2.
As found in <cit.> this norm functions well for selecting which neurons to add when the network is close to convergence since
it is a direct measure of the rate at which gradient descent will decrease the loss.
Unfortunately, comparing the gradient norms ||g||_2^2 and ||g'||_2^2 for the current parameterization and some new expanded parameterization Θ'
is insufficient to determine whether or not more capacity is needed in the first place.
This is primarily because it does not account for redundancy in the parameterization:
if there is some neuron a such that the gradients of the linear weights in the next layer “listening” to it have some large norm ||g_a||_2,
then we could introduce an exact copy of this neuron a' for which the corresponding norm would also be ||g_a'||_2 = ||g_a||_2.
Since the squared Euclidean norm is additive across parameters,
we could unboundedly increase ||g||_2^2 just by adding very many copies of this one neuron a.
[
More generally, the same problem would occur when considering a new neuron c whose activations were some linear combination of those of some existing neurons a and b.
]
In SENN, we avoid this problem with the following simple notion of redundancy.
We are using our parameters to express a point in function space.
At some point in optimization we are therefore also using them to express a small change in function space.
There is some direction that our optimizer “wants” to move in (i.e. the direction in function space which most quickly reduces the loss).
We can define new parameters as being useful in a way which is non-redundant with the old parameters to the extent that they allow the optimizer to better express the direction in function space it “wants” to move in.
Our natural expansion score η = g^T F^-1 g captures this combined sense of usefulness and non-redundancy in a way which will be made more mathematically precise in the next section.
This description of its function is sufficient, however, to justify our answer to when:
SENN Ingredient 2: When to add more capacity.
A new neuron or layer will be helpful and non-redundant if it provides a fractional increase in η = g^T F^-1 g greater than some threshold τ.
When we find a potential new neuron or layer for which this is true, we add it.
We defer specific choices for τ to section <ref>,
at which point we may draw on the derivation of η.
§.§ What to add: determining the initial value of new neurons
The reader may at this point be expecting us to tackle the question of where additional capacity is most useful,
but this would put the cart before the horse.
Additional capacity is useful to the extent that it can be initialized in a way which is useful,
which we now consider.
To simplify mathematical notation in this section,
we consider the output to be concatenated over the entire training dataset.
While the gradient of the loss with respect to the output, ∇_y L, tells us how the loss changes for arbitrary changes in the output y,
the only changes in y we can actually achieve with some parameterization Θ are given by the Jacobian product J δθ for some small parameter change δθ ∈ Θ.
Let P_Θ be the orthogonal projection onto this space of directions in output space.
The vector P_Θ ∇_y L is then the portion of ∇_y L which lies in the space of achievable output changes,
and its squared norm ||P_Θ ∇_y L||_2^2 is a scalar measure of how large this portion is.
The vector P_Θ ∇_y L is the image J δθ under the Jacobian
of some tangent vector δθ in the parameter space.
By the definition of orthogonal projection, δθ minimizes ||J δθ - ∇_y L||_2,
but if there are redundant directions in Θ then there may exist multiple such δθ.
There is however a unique δθ_* which minimizes ||δθ||_2 among those which minimise ||J δθ - ∇_y L||_2.
The Moore-Penrose inverse, J^+, of J is the unique matrix such that δθ_* = J^+ ∇_y L for arbitrary ∇_y L.
However, J is a map from parameter space to total output space, which depends on dataset size N.
This dependency can be avoided by working with maps from the parameter space to itself,
such as the following
average over the dataset, F = (1/N) J^T J,
known as the Fisher information matrix.
The natural gradient is then given by (F + ϵ I)^-1 g,
where g = (1/N) J^T ∇_y L is the gradient of the loss with respect to the parameters averaged over the training set,
and the addition of a small multiple ϵ of the identity guarantees that the inverse exists.
In the limit of small ϵ this is exactly our δθ_*.[
In fact, an alternative definition of the Moore-Penrose inverse is:
J^+ := lim_ϵ→ 0 (J^T J + ϵ I)^-1 J^T
]
We are now able to rewrite the squared norm ||P_Θ ∇_y L||_2^2 in the familiar form of definition <ref>:
||P_Θ ∇_y L||_2^2 =
δθ_*^T J^T J δθ_* =
g^T F^-1 J^T J F^-1 g =
N g^T F^-1 g =
N η .
Here, the factor of the dataset size N appears because the average gradient and our η are normalized according to the training set size.
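This identity can be checked numerically. The following Python sketch (an illustration, not the paper's code) builds F and g from a per-example Jacobian, computes η with a small damping ϵ, and verifies that Nη equals the squared norm of the projected output gradient obtained via the pseudo-inverse.

import numpy as np

rng = np.random.default_rng(1)
N, P = 12, 5                      # concatenated output dimension and parameter count
J = rng.normal(size=(N, P))       # Jacobian of the concatenated outputs w.r.t. the parameters
dL_dy = rng.normal(size=N)        # gradient of the loss w.r.t. the concatenated outputs

F = J.T @ J / N                   # Fisher with the Euclidean output metric
g = J.T @ dL_dy / N               # average parameter gradient
eps = 1e-10
eta = g @ np.linalg.solve(F + eps * np.eye(P), g)   # natural expansion score

# P_Theta dL/dy: projection of the output gradient onto the image of J
proj = J @ np.linalg.pinv(J) @ dL_dy
assert np.allclose(N * eta, proj @ proj, rtol=1e-4)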
With this formula, we have now derived η from first principles
and may use it to choose between specific initializations, yielding our third SENN ingredient:
SENN Ingredient 3: What Initialization to Use.
If θ' ∈ Θ' is an initialization of an expanded parameterization Θ' such that the overall function remains unchanged (see section <ref>), then the best such initialization θ'_* is given by
θ'_* = argmax_θ' (η').
When we add new neurons or layers, we choose what initialization to use by this method.
§.§ Where to add: completing the algorithm
Much as the Euclidean norm ||g||_2 measures the rate of loss reduction according to vanilla gradient descent,
our η measures the rate of loss reduction according to natural gradient descent.
This gives a uniform way of comparing the effect of new capacity no matter where it is added in the network or whether it takes the form of new neurons in an existing layer or a new layer.
In particular, one may compare the η values of the best initializations (see section <ref>) for each such variety of addition.
[
In general one can also adjust for the “size” of each addition in some relevant sense.
We found it sufficient to just penalize adding entire new layers versus single new neurons by some constant factor.
]
SENN Ingredient 4: Where to Add.
A choice of whether to add width or depth, and where in the network the new neuron/layer will be added,
specifies a particular extension of the current parameter space Θ'.
We make those choices which correspond to the extension Θ'_* = argmax_Θ' max_θ' (η') for which the best initialization is possible.
Our newfound knowledge of η as a rate of loss reduction in hand,
we return to the question of specifying the expansion threshold τ,
which we deferred from section <ref> in our previous answer to when.
An increase from the current natural expansion score η_c to a new score η_p due to some proposed expansion p corresponds to an increase in the rate of loss reduction by natural gradient descent.
We define this increase to be “sufficient” when it corresponds to a relative increase η_p / η_c > τ
in loss reduction rate greater than the expansion threshold τ.
For example, with the intuitive choice τ=2,
each addition must at least double the rate of loss reduction.
Following the well known intuition that a network does not practically converge without setting the learning rate to zero,
it is generally considered to have converged once changes in loss become sufficiently small.
In analogy to monitoring plateaus in loss,
we further require the increase in loss reduction resulting from new capacity to surpass an absolute stopping criterion α. While we answer when, where, and what cohesively with η during training,
we thus concur with all prior works on terminating training.
Overall, we may now summarize all ingredients of SENN on the basis of the natural expansion score:
SENN: Summary.
When we add width or depth we do so without changing the overall function.
We add new capacity when this produces a relative increase in score η_p / η_c > τ
larger than the expansion threshold τ.
We add new capacity where it would most increase η,
and choose what initialization to use in order to maximize η.
We ensure the addition process terminates by additionally comparing each Δη
to the absolute stopping criterion α,
and not adding capacity when η_p - η_c ≤α.
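Schematically, one expansion check under these rules might look as follows in Python; the proposal objects, their optimized scores, and the apply method are hypothetical placeholders rather than the authors' interface.

def maybe_expand(network, score_current, proposals, tau=2.0, alpha=1e-3):
    """One SENN expansion check (schematic sketch).

    network:       the model being trained (only used to apply an accepted proposal)
    score_current: eta_c, the natural expansion score of the current parameterization
    proposals:     candidate additions; each is assumed to carry an optimized score eta_p
                   (maximized over its initialization) and an apply(network) method that
                   adds it with zeroed output weights or an identity-initialized layer
    tau:           expansion threshold on the relative increase eta_p / eta_c
    alpha:         absolute stopping criterion on eta_p - eta_c
    """
    best = max(proposals, key=lambda p: p.score)        # where/what: best optimized eta_p
    relative_gain = best.score / score_current
    absolute_gain = best.score - score_current
    if relative_gain > tau and absolute_gain > alpha:   # when: sufficient and not yet converged
        best.apply(network)                             # how: function-preserving insertion
        return True
    return False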
§.§ Bounds on convergence of expansion
Consider repeatedly running our addition algorithm for a network with initial expansion score η_0.
The expansion threshold τ guarantees that η_i > τη_i-1 after the i-th addition.
Since η = g^T F^-1 g
is the squared length ||P_Θ ∇_y L||_2^2 of the projected gradient in output space, divided by N,
it is non-negative and bounded above by η ≤ λ := ||∇_y L||_2^2 / N.
Since η_i grows exponentially with i
and is bounded above by λ
the maximum number of sequential additions i < N_s
increases logarithmically with λ.
Specifically, N_s < (lnλ - lnη_0)/lnτ.
This bound becomes large when η_0 is small,
but we also know that η_1 > α from the stopping criterion α.
The maximum number of additions N_s from repeatedly running the expansion algorithm is bounded:
N_s < 1 + (lnλ - lnα)/lnτ.
(Proof in supplementary material.)
For example, if τ = 2 and α / λ > 10^-3 then N_s < 1 + 3ln10/ln2 < 11.
Note that exponentially large ratios between α and λ produce only linearly large bounds on N_s.
We now consider the number of additions N_T made over the course of training with natural gradient descent.
Intuitively, λ is the total possible loss reduction and α is the minimum reduction which justifies expanding the network.
If every time we expand the network it only achieves this minimum reduction then we must expand a total of roughly N_T ≈λ / α times.
If the loss function has constant curvature equal to the Fisher F,
then the total loss reduction possible with the current parameters is given by 1/2η
and we have N_T < λ / α exactly.
More generally,
we expect that when F is an underestimate of the true curvature,
η will overestimate the usefulness of new neurons causing N_T to be larger,
and vice versa for an overestimate.
See supplementary for more in depth discussion.
§.§ Efficiently computing a lower bound on score increase
Recall that the natural expansion score η is given by the inner product of the gradient g with the natural gradient F^-1 g.
Since working with the natural gradient can be challenging due to the matrix inverse F^-1,
we will make use of established approximation techniques.
Specifically, when we need the natural gradient for the whole network we will use the iterative conjugate gradient method, as suggested for the Hessian in <cit.>,
performing Fisher-vector multiplication cheaply via auto-differentiation.
When we require the inverse Fisher F_l^-1 for the linear transform in some layer l considered in isolation,
we approximate F_l by the Kronecker product F_l ≈ A_l ⊗ G_l,
where A_l is the second moment of the activations at the input of the linear transform,
and G_l is given by the second moment of some distribution of gradients with respect to the output of the linear transform.
The relevant gradient distribution is determined by the choice of metric on the output space implicit in the exact definition of the Fisher one is using,
which for us is the Euclidean metric.
The advantage of this Kronecker factorization is that the approximate Fisher may be inverted by inverting A_l and G_l separately:
(A_l ⊗ G_l)^-1 = A_l^-1 ⊗ G_l^-1,
which is much cheaper.
If ∂W is the gradient of the loss with respect to the weight matrix W of the layer,
then the natural gradient is given by G_l^-1 ∂W A_l^-1 <cit.>.
The natural expansion score η is given by the inner product of the gradient with the natural gradient as vectors,
which in this matrix form becomes the elementwise inner product
η = ∑_i,j ∂W_ij (G_l^-1 ∂W A_l^-1)_ij,
which can also be expressed as a trace: η = tr[∂W^T G_l^-1 ∂W A_l^-1].
The trace formula for η is reminiscent of the definition of
the Pearson correlation coefficient r^2 = ⟨xy⟩^2 / (⟨xx⟩⟨yy⟩).
The gradient for W is given by the expectation ∂W = ⟨e a^T⟩,
where a is the input activation vector,
e is the derivative of the loss with respect to the outputs,
and the expectation ⟨·⟩ is over the dataset.
Let the residual gradient
e_r = e - ⟨e a^T⟩ A_l^-1 a
be the part of the gradient not predicted by the current activations a.
Then if a_p is the activation vector of a set of proposed neurons,
and S_p = ⟨a_p a_p^T⟩ is their second moment,
then the “correlation coefficient” of the new activations with the residual gradients is a lower bound Δη' on the improvement Δη in natural expansion score
(proof in appendix via block LDU decomposition of joint activation covariance):
Δη' := tr[S_p^-1 ⟨a_p e_r^T⟩ G_l^-1 ⟨e_r a_p^T⟩]
is a lower bound Δη' ≤Δη = η_p - η_c on the improvement in natural expansion score due to some proposed addition of neurons p to a layer l.
Intuitively, Δη' is the fraction of variance in residual gradients “explained” by the output of our new neuron(s).
This result holds for adding an arbitrary number of new neurons to an existing layer.
If a layer was inserted while retaining residual connections around it,
then the same result would hold if we treated the activations of the new layer as “new neurons” in the old layer to calculate Δη'.
Because our activation function can represent the identity,
we will automatically add these connections if in fact they are necessary,
so we in fact use this same method for evaluating our actual layer insertions.
The bound Δη' can be computed for an arbitrary proposal p of additional neurons
using only those intermediate activations and gradients which it would be necessary to cache
in order to calculate the gradient and (Kronecker factored approximate) natural gradient
via backpropagation.
Therefore, if we have an outer optimizer which computes the gradient and the natural gradient,
then we may optimize arbitrarily many proposals p for arbitrarily many steps with an inner optimizer
without incurring any additional costs related to the evaluation of the existing network.
The costs of this inner optimizer instead scale with the size of the (very small) networks whose addition to the existing network is being considered.
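A sketch of this evaluation for a single proposal, using only cached per-example activations a and layer-output gradients e, is given below in Python; it is an illustration of the formula for Δη' above rather than the authors' implementation.

import numpy as np

def delta_eta_lower_bound(a, e, a_p, damping=1e-8):
    """Lower bound on the score increase from proposed neurons with activations a_p.

    a   : (N, d)  cached activations entering layer l's linear transform
    e   : (N, m)  cached per-example gradients w.r.t. that transform's outputs
    a_p : (N, p)  activations the proposed neurons would produce on the same batch
    """
    N, d = a.shape
    A = a.T @ a / N + damping * np.eye(d)            # second moment of existing activations
    G = e.T @ e / N + damping * np.eye(e.shape[1])   # second moment of output gradients
    # residual gradients: part of e not linearly predictable from the existing activations
    coef = np.linalg.solve(A, a.T @ e / N)
    e_r = e - a @ coef
    S_p = a_p.T @ a_p / N + damping * np.eye(a_p.shape[1])  # second moment of proposed activations
    C = a_p.T @ e_r / N                              # cross moment of a_p with residual gradients
    return np.trace(np.linalg.solve(S_p, C) @ np.linalg.solve(G, C.T))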
§ EXPERIMENTS
We now apply Self-Expanding Neural Networks to regression and classification, to illustrate the behavior of the natural expansion score and demonstrate SENN's efficacy.
§.§ Width Addition in Least-Squares Regression
We first show that the evolution over training of the possible improvements Δη' in natural expansion score due to potential width expansions is meaningful.
In order to do so we consider the application of a single layer SENN to a one dimensional least squares regression task as shown in figure <ref>,
i.e. SENN with depth addition deliberately disabled.
The reason to have only one hidden layer is that this is effectively least squares regression with basis functions given by the neurons of that layer.
We can therefore plot the normalized score increase Δη' / η_c of the best neuron for each basis function location and length scale.
Where Δη' / η_c > 1 there exists an acceptable proposal.
Accepted/rejected proposed neurons are shown on this landscape in red/black at key points in training.
We see in the leftmost figure that the best such proposal is accepted because it achieves a large improvement in η,
and it corresponds to a basis function location close to datapoints with visibly large prediction error which we have been unable to reduce using the existing neurons.
The next figure to the right shows the same landscape after the new neuron is introduced,
and it can be seen that the Δη' / η_c values for neurons with similar locations to it have been dramatically reduced
since they would be redundant.
The second figure from the right shows the result of optimizing the new expanded parameters until the point at which the next neuron would be added.
It can be seen that the prediction errors in the region of the previously introduced neuron are now practically invisible,
and that the next neuron is to be introduced in a different region in which errors remain.
The rightmost figure shows the function approximation at the conclusion of training,
and it can be seen that the prediction errors are negligible and proposals with large relative increase in η are not to be found in the region considered.
The reader may note that there are some possible new neurons with small length scales which would surpass the expansion threshold which we do not find;
we could deliberately try optimizing initializations at this lengthscale to find these,
but this would likely result in overfitting.
Overall, SENN thus identifies regions of locally insufficient capacity in our parameterization,
targets these regions precisely with new added neurons,
and uses this selectively added capacity to achieve a good final fit.
§.§ Layer Addition in Classification
We now highlight SENN's depth expansion in the context of
classification.
Specifically, we consider two-dimensional inputs from the half-moons dataset <cit.>.
In figure <ref> we plot Δη' / η_c for the best layer addition proposals as a function of overall optimizer steps.
Visualizations of the learned decision boundary at initialization and just before layer additions are shown.
We can observe that Δη' / η_c increases approximately monotonically during three phases,
punctuated by large drops when layers are added.
In the initial phase the network has zero hidden layers (i.e. is linear),
and the simplicity of the decision boundary at the end of this phase reflects this.
Since the datapoints are not linearly separable,
the large Δη' / η_c value correctly indicates that the introduction of a hidden layer is necessary in order to further reduce loss.
The visible increase in decision boundary complexity and accuracy over the course of the second phase confirms this.
The beginning of the third phase marks the introduction of a second hidden layer and we wait until Δη' / η_c rises again,
indicating an exhaustion of this new capacity, before reexamining the decision boundary.
The increase in boundary complexity is less visible this time, but close inspection reveals that the boundary has become narrower and more rounded.
Conclusively, we have intentionally constructed a scenario where depth addition is necessary for a good fit to lie in the space of solutions,
and seen that SENN inserts new layers when this is necessary for global expressivity.
§.§ Dynamic Selection of Appropriate Architecture Size in Image Classification
Finally, we examine the ability of self-expanding neural networks to choose an appropriate size when classifying MNIST <cit.> images.
The leftmost plots of figure <ref> show SENN's total hidden size and validation accuracy during training on the full dataset as a function of total batches seen.
This use of mini-batching is not strictly necessary for MNIST but we use it to better reflect the realities of training modern neural networks.
Our SENN is initialized with a single hidden layer of size 10, and promptly adds a second hidden layer, also of size 10.
All five seeds considered then proceed to consistently add width to these layers at a moderate rate until a total hidden size of around 40 is reached,
at which point far fewer productive extensions of the network are found and addition slows dramatically.
It can be seen that this results in respectable validation performance (>97%) by the end of training with very modest hidden neuron counts (50-60).
It is of particular note that our method produces strong anytime performance:
we are able to continually expand size, and even insert layers, during training without any attendant drops in validation accuracy.
Indeed, our method exhibits mostly monotonic improvement up to stochasticity from batching,
a property not shared by methods which rely on reinitializing a new network, e.g. <cit.>. This makes SENN a perfect fit to prospective applications in e.g. active or continual learning, in the spirit of our original neurogenesis inspiration.
Having verified sensible performance of SENN on the full MNIST dataset,
we now examine the way in which they adapt their final converged size to the amount of information in the dataset.
To this end, we take class-balanced subsets of MNIST of varying sizes and train SENNs to convergence.
To maximize clarity in our examination of this relationship, we restrict the SENN to width addition.
The converged hidden sizes are shown together with the standard error across five seeds in the rightmost plots of figure <ref>.
The first of these shows log width against linear subset size for ease of comparison to the leftmost panel. It can be seen that the final width tails off rapidly with subset size.
The rightmost plot shows instead linear width against logarithmic subset size,
in which we can now distinguish three regimes.
For the smallest subsets, the initial hidden size of 10 is sufficient.
For subsets between 10% and 60% of the standard training set,
the final hidden size increases logarithmically,
but past that point further increases in subset size do not similarly increase the final network size.
We posit that this is due to substantial redundancy within the MNIST training set,
leaving further capacity growth unnecessary. Thus, SENN not only provides desirable anytime performance, but also tailors its size suitably to the available data.
§ CONCLUSION
We have introduced the natural expansion score η and shown how it may be used to cohesively answer the three key questions of growing neural networks: when and where to add capacity, and whether it should be depth or width.
We have demonstrated its ability to capture redundancy of new neurons with old and thereby make sensible expansion decisions
across time and tasks.
While we have focused on providing a thorough mathematical grounding of the natural expansion score in this work,
we acknowledge that the multilayer perceptrons on which it was demonstrated
differ in scale and complexity from many of the architectures in active use for deep learning in the modern big data regime.
Dually, however, prospects for further development are promising, as our theoretical results regarding η apply for arbitrary expansions of parameterized models,
and our method of expansion would extend naturally to, for example, convolutional neural networks or normalizing flows
where layers may be initialized invertibly.
This work was supported by the project “safeFBDC - Financial Big Data Cluster” (FKZ: 01MK21002K), funded by the German Federal Ministry for Economics Affairs and Energy as part of the GAIA-x initiative, and the Hessian research priority programme LOEWE within the project “WhiteBox”. It benefited from the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK; projects “The Third Wave of AI” and “The Adaptive Mind”).
§ PROOFS
§.§ Theorem 1: Bounded rate of addition
In this section we prove theorem 1 of the main body.
We will assume the Fisher matrix F ≻ 0 to be positive definite, with the following straightforward consequence.
The natural expansion score is non-negative: η = g^T F^-1 g ≥ 0.
If F ≻ 0, then F^-1 ≻ 0,
and v^T F^-1 v ≥ 0 for all v.
Considering the effect of the expansion threshold τ we obtain the following bound:
Let η have initial value η_0 and be bounded above by λ > η.
If threshold τ guarantees that η_i > τη_i-1 for the i-th addition,
then the maximum number of successive additions N_s is bounded by
N_s < (ln λ - ln η_0) / ln τ.
Due to the threshold τ, η grows at least exponentially: η_i > τ^i η_0.
But η is bounded: λ≥η_i > τ^i η_0.
Since ln is monotonic, we may take logarithms:
lnλ > i lnτ + lnη_0.
and rearrange to get i < (ln λ - ln η_0) / ln τ for all additions i.
This is true for every i-th addition which is accepted, and so in particular also true for the last, N_s-th, addition.
Considering also the effect of the stopping criterion α we obtain theorem 1:
If the stopping criterion α guarantees that η_i - η_i-1 > α for every accepted addition,
then the maximum number of successive additions N_s is either 0, or bounded by
N_s < 1 + (ln λ - ln α) / ln τ.
Either N_s = 0, or there is a first addition with natural expansion score η_1 for which
η_1 - η_0 > α.
From lemma <ref> we then have η_1 > α.
We may then substitute α into lemma <ref> in place of η_0 to obtain
a bound on further additions, yielding
N_s < 1 + (ln λ - ln α) / ln τ.
This theorem is important because it guarantees that SENN will add a limited number of neurons or layers before continuing training.
Intuitively, this is because it rapidly becomes the case that any new neuron is either not relevant to rapidly decreasing the loss, or is redundant with some already extant neuron.
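As a quick numerical illustration of this bound, the snippet below plugs in the α and τ values later reported for the whole-dataset image-classification runs together with a hypothetical upper bound λ on η (λ is an assumption, not a measured quantity):

import math

# lam is an assumed upper bound on eta; alpha and tau follow the
# whole-dataset image-classification settings from the hyperparameters appendix.
lam, alpha, tau = 10.0, 0.25, 1.007
n_s_bound = 1 + (math.log(lam) - math.log(alpha)) / math.log(tau)
print(f"N_s < {n_s_bound:.1f} successive additions")  # roughly 530 for these values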
§.§ Theorem 2: Lower bound on increase in natural expansion score
We now prove theorem 2 of the main body, concerning a lower bound on the increase in natural expansion score η due to the addition of new proposed neuron(s) to a layer.
Let the joint activations a = [ a_c; a_p ] of the current and proposed neurons have second moment
E[a a^T] = A = [ A_c A_cp; A_pc A_p ].
We will assume the Fisher matrix for the layer to which neurons are to be added to factorize as F = A ⊗ G, where G ≻ 0 is positive definite.
We first derive a convenient form of a known result discussed in, for example, <cit.>,
related to the joint covariance of multivariate Gaussian distributions.
Let S_p = A_p - A_pc A_c^-1 A_cp be the Schur complement of A_c in A.
Let also v = [ v_c; v_p ] be an arbitrary vector,
and r be the linear operator defined by r(v) = v_p - A_pc A_c^-1 v_c,
i.e. the residual part of v_p not predicted by v_c.
Then, v^T A^-1 v = v_c^T A_c^-1 v_c +
r(v)^T S_p^-1 r(v).
The following may be obtained by performing a block LDU decomposition:
A =
[ A_c A_cp; A_pc A_p ] =
[ I_c 0; A_pc A_c^-1 I_p ][ A_c 0; 0 S_p ][ I_c A_c^-1 A_cp; 0 I_p ]
which we may then use to decompose A^-1:
A^-1 =
[ A_c A_cp; A_pc A_p ]^-1 =
[ I_c -A_c^-1 A_cp; 0 I_p ][ A_c^-1 0; 0 S_p^-1 ][ I_c 0; -A_pc A_c^-1 I_p ]
The desired result then follows by substitution into v^T A^-1 v:
v^T A^-1 v =
[ v_c^T v_p^T ][ I_c -A_c^-1 A_cp; 0 I_p ][ A_c^-1 0; 0 S_p^-1 ][ I_c 0; -A_pc A_c^-1 I_p ][ v_c; v_p ]
=
v_c^T A_c^-1 v_c +
(v_p - A_pc A_c^-1 v_c)^T S_p^-1 (v_p - A_pc A_c^-1 v_c)
Recall from section 3.6 that η may be expressed as a trace:
η = tr[ G^-1 E[g a^T] A^-1 E[a g^T] ]
where g is the derivative of the loss with respect to the outputs (i.e. layer pre-activations) of the linear transform.
We can use lemma <ref> to write the increase in natural expansion score Δη as
Δη = tr[ G^-1 E[g a^T] A^-1 E[a g^T] ]
- tr[ G^-1 E[g a_c^T] A_c^-1 E[a_c g^T] ]
= tr[ G^-1 E[g r(a)^T] S_p^-1 E[r(a) g^T] ]
where we can take r inside the expectations by linearity.
It is computationally convenient for us to be able to have an expression in terms of residual gradients instead of residual activations, so we note the following:
E[g r(a)^T] = E[g_r a_p^T]
where g_r = g - E[g a_c^T] A_c^-1 a_c is the residual gradient.
E[g r(a)^T] = E[g (a_p - A_pc A_c^-1 a_c)^T]
= E[g a_p^T] - E[g a_c^T] A_c^-1 E[a_c a_p^T]
= E[(g - E[g a_c^T] A_c^-1 a_c) a_p^T]
= E[g_r a_p^T]
Finally, we establish the following relationship between S_p^-1 and A_p^-1:
S_p^-1 - A_p^-1 = (A_p - A_pc A_c^-1 A_cp)^-1 - A_p^-1 ≽ 0.
The matrix inverse S_p^-1 can be expanded as the following power series
S_p^-1 = (A_p - A_pc A_c^-1 A_cp)^-1 =
∑_n=0^∞ A_p^-1 (A_pc A_c^-1 A_cp A_p^-1)^n
We observe that this is a sum of positive semi-definite matrices, and truncate the series at n=0 and rearrange:
S_p^-1 - A_p^-1 = ∑_n=1^∞ A_p^-1 (A_pc A_c^-1 A_cp A_p^-1)^n ≽ 0
We may now prove theorem 2 from section <ref>.
Δη' is a lower bound on the increase in natural expansion score Δη due to the addition of some proposed neurons p:
Δη ≥ Δη' = tr[ G^-1 E[g_r a_p^T] A_p^-1 E[a_p g_r^T] ]
Substituting lemma <ref> into corollary <ref> we have
Δη = tr[ G^-1 E[g_r a_p^T] S_p^-1 E[a_p g_r^T] ].
The difference between Δη and Δη' is given by
Δη - Δη' = tr[ G^-1 E[g_r a_p^T] (S_p^-1 - A_p^-1) E[a_p g_r^T] ].
This is the squared norm of E[a_p g_r^T] as a vector according to the Kronecker product
G^-1 ⊗ (S_p^-1 - A_p^-1).
The first factor is positive semi-definite by assumption, the second by lemma <ref>,
and the Kronecker product of positive semi-definite matrices is positive semi-definite.
Therefore Δη - Δη' ≥ 0 and so Δη≥Δη'.
The significance of this lower bound on Δη is that g_r and G^-1 may be computed once,
and then used to optimize very many proposals with different activations a_p.
That is, performing N steps of gradient descent to optimize proposed neurons p scales linearly in the evaluation cost of a_p and A_p^-1.
These linear costs are unaffected by the number of neurons currently in the layer being added to,
and unaffected by the total number of layers in the network.
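To make these quantities concrete, the following is a minimal numpy sketch (not the authors' implementation) of how Δη' could be estimated from a minibatch; the array layouts, the damping term eps, and all names are illustrative assumptions.

import numpy as np

def delta_eta_prime(a_c, a_p, g, eps=1e-6):
    # Estimate the lower bound tr[G^-1 E[g_r a_p^T] A_p^-1 E[a_p g_r^T]] from samples.
    # a_c: (n, d_c) current activations, a_p: (n, d_p) proposed activations,
    # g: (n, d_out) pre-activation gradients.
    n = a_c.shape[0]
    A_c = a_c.T @ a_c / n + eps * np.eye(a_c.shape[1])   # E[a_c a_c^T]
    A_p = a_p.T @ a_p / n + eps * np.eye(a_p.shape[1])   # E[a_p a_p^T]
    G = g.T @ g / n + eps * np.eye(g.shape[1])           # E[g g^T]
    C = g.T @ a_c / n                                    # E[g a_c^T]
    g_r = g - a_c @ np.linalg.solve(A_c, C.T)            # residual gradients g_r
    M = a_p.T @ g_r / n                                  # E[a_p g_r^T]
    return float(np.trace(np.linalg.solve(G, M.T) @ np.linalg.solve(A_p, M)))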
§ THE CONSEQUENCES OF NON-FISHER CURVATURE FOR TOTAL NEURONS ADDED
In section 3.5 we discussed the total number of neurons added during training, and in particular the extent to which we could provide bounds on this.
As noted there, in the case where the Fisher is constant over training and exactly equal to the hessian,
the dynamics of training are very simple.
The loss L has its global minimum at the point reached by a step of exactly -F^-1 g,
and it can be seen by integration that the reduction in loss due to such a step is exactly Δ L = 1/2 g^T F^-1 g = 1/2 η.
The stopping criterion α corresponds to the requirement that parameter expansions should enable a further reduction in loss of at least 1/2α.
Since η≤λ is bounded by λ, the maximum possible reduction in loss is Δ L_max = 1/2λ.
If we pessimistically assume that every parameter expansion enables the minimal loss reduction of only 1/2α,
then the total number of added neurons N_T is still bounded by N_T < λ/α.
The case where the true hessian of the loss is some constant multiple of the Fisher, H = κ F, which is itself constant,
is almost as simple.
The parameters evolve along the same trajectory, only they move a factor of κ faster than they would if H = F.
This also results in a rescaling η = κ η_B of natural expansion scores relative to the baseline value η_B in the case where F was accurate.
While this has no effect on the behaviour of the expansion threshold τ,
the inflated η values mean that the effective value of α is reduced by a factor of κ
and so the total number of added neurons N_T is now only bounded by N_T < κλ/α.
We will now try to describe the effect of more general failures of F to represent the true curvature H.
Local expansion behaviour, i.e. without further parameter optimization, is bounded by lemma <ref> of appendix <ref>.
Assuming the baseline case of H = F, we may substitute λ = 2Δ L_max.
If we assume small step sizes, the rate of loss reduction, -L̇ = η, is given by the natural expansion score by definition,
regardless of H.
If at all times t during training the rate of reduction of the expansion score is lower than in the baseline scenario, -η̇(t) < -η̇_B(t),
then η will at all times be greater than expected.
Since the rate of loss reduction is given by η, L will decrease faster than expected and the remaining maximum possible loss reduction Δ L_max will be at all times less than expected.
It can be seen from lemma <ref> that discrepancies in these directions relative to baseline will result in fewer additions being made.
We now only need to establish conditions under which the actual rate of reduction in η is lower than the expected rate.
The rate of change during optimization (indicated by overdot) of the various components of η can be described as follows:
θ̇ = -F^-1 g
ġ = H θ̇ = -H F^-1 g
d/dt (F^-1) = -F^-1 Ḟ F^-1
η̇ = ġ^T F^-1 g + g^T F^-1 ġ + g^T (d/dt F^-1) g
= -g^T F^-1 ( 2 H + Ḟ ) F^-1 g
Since in the base case H_B = F and Ḟ_B = 0, we have that
if H + 1/2 Ḟ ≼ F
then -η̇ ≤ -η̇_B.
Putting the above results together, we have that if at all times during training H + 1/2 Ḟ ≼ F,
then the bound on total additions N_T < λ/α should hold.
Incorporating the previous result regarding H = κ F,
it also appears that if at all times H + 1/2 Ḟ ≼ κ F, then N_T < κλ/α.
Assuming F positive definite and the loss surface smooth (i.e. H and Ḟ finite),
then there will exist some finite κ for which the condition holds and so N_T will be bounded.
§ HYPERPARAMETERS AND IMPLEMENTATION DETAILS
All experiments were run on a single Nvidia A100 or V100 GPU, using no longer than one day each.
Our implementation uses the JAX <cit.> autodifferentiation and Flax <cit.> neural network libraries.
The full source code used to run the experiments is provided in the supplementary material, and will be made publicly available on publication of this work.
In all experiments we optimize our parameters via natural gradient descent with a learning rate of 0.1 and Tikhonov damping of magnitude 0.1.
In the image classification experiments we use batches of size 1024 and a weight decay of rate 0.001.
We initialize our dense layers with the default initialization of Flax (LeCun Normal) <cit.>,
and use a unit normal initialization for the parameters of our rational functions.
For the visualization experiments we use τ=2, for the image classification experiments we use τ=1.007 and τ=1.03
for the whole dataset and variable subset experiments respectively.
Larger thresholds τ result in longer training times but more conservative network sizes and higher accuracy of η estimates, due to the Fisher being a closer approximation to the curvature near convergence on the existing parameters.
Any extra costs are negligible for the visualization experiments, so we use the intuitive value of 2,
but we choose τ values for the image classification experiments in light of this natural trade-off.
We use α=0.0025 for all experiments apart from the whole dataset image classification, for which we use α=0.25.
Here the latter choice compensates for larger noise in Δη' introduced by use of a validation batch, as will be discussed shortly.
We adjust the expansion score increases for layer additions by a constant factor of 2 in the visualization experiments and 60 in the image classification experiments.
These values are selected to be within an order of magnitude of the actual layer sizes expected in classification of a toy dataset versus images, and so of the number of new neurons a new layer represents.
We calculate the natural gradient via the conjugate gradient method with a maximum iteration count of 100 when optimizing the existing parameters.
When optimizing the initializations of proposed neurons or layers we use the Kronecker factored approximation of the Fisher matrix for the relevant layer
based on derivatives of the predictions of the network as in <cit.>.
We compute Δη' based on this and normalize it with respect to the output gradient magnitudes of the particular task.
When comparing Δη' / η_c to τ we use the η_c value computed for the layer in question.
When considering adding layers, we ensure new layers are invertible by adding a regularization term of 0.01 (ln det W)^2 when optimizing the initialization of their linear transform W,
and by setting the minimal singular values of W to be at least 0.001 times its average singular value before adding the layer to the network.
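The singular-value adjustment mentioned above can be sketched in a few lines of numpy; the function name and return convention are illustrative rather than taken from the released code.

import numpy as np

def clamp_singular_values(W, rel_floor=0.001):
    # Raise the smallest singular values of W to at least rel_floor times the
    # average singular value, so the new layer's transform is safely invertible.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.maximum(s, rel_floor * s.mean())
    return U @ np.diag(s) @ Vt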
In our visualization experiments we do not use batching, so we consider adding depth and width every 30 steps,
and add at most one layer per 90 steps.
In the image classification experiments we use batching and so consider adding width and depth every 10 epochs,
adding at most one layer each time.
We use the same scheme for initializing proposed new neurons or layers as for initializing the starting network.
In our whole dataset image classification experiment we then optimize proposal initializations to maximize Δη' via 300 steps of vanilla gradient descent
on a fixed batch of 1024 images.
We consider 10000 neuron proposals and 100 layer proposals per location, and use a learning rate of 0.3,
reducing this by a factor of 3 as necessary to maintain monotonic improvement in Δη' for each proposal.
We take the best proposal on this batch of size 1024 for each depth and width addition location,
and reevaluate its Δη' on a fixed validation batch of size 1024 when deciding whether and where to add.
The variable degree of overfitting of the best proposal results in some noise in Δη' at each location which we compensate for by choosing a relatively large α.
For our other experiments we optimize proposal initializations using 3000 steps of the Metropolis Adjusted Langevin Algorithm (MALA) <cit.>,
using a unit gaussian prior on initializations during these steps.
We use a temperature T of 10 and an initial step size of 0.3, and adjust by a factor of 3 every 10 steps if necessary to maintain an acceptance rate of around 0.6.
We consider 100 width proposals and 100 layer proposals for each location,
and obtain 100 final MALA samples i for each location width could be added and each location depth could be added.
We then construct a categorical distribution over each set of 100 samples via softmax(1/T Δη_i'),
and use the corresponding expectation of Δη' when deciding when and where to add capacity and whether it should be depth or width.
We draw initializations for new capacity from this categorical distribution,
except in the initial least squares regression experiment, where we use argmax_i Δη_i' over the 100 samples i to make figure 2 more intuitive.
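For illustration, a small numpy sketch of this selection step is given below; it assumes the categorical probabilities are proportional to exp(Δη'_i / T), which is our reading of the description above, and the function name is hypothetical.

import numpy as np

def choose_initialization(delta_eta_primes, temperature=10.0, seed=0):
    # Sample one MALA proposal with probability proportional to exp(dEta'_i / T)
    # and also return the expectation of dEta' under that distribution.
    scores = np.asarray(delta_eta_primes) / temperature
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    idx = int(np.random.default_rng(seed).choice(len(probs), p=probs))
    return idx, float(probs @ np.asarray(delta_eta_primes))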
|
http://arxiv.org/abs/2307.05861v1 | 20230712010811 | DeepMapping: The Case for Learned Data Mapping for Compression and Efficient Query Processing | [
"Lixi Zhou",
"K. Selçuk Candan",
"Jia Zou"
] | cs.DB | [
"cs.DB"
] |
Arizona State University
Tempe
USA
[email protected]
Arizona State University
Tempe
USA
[email protected]
Arizona State University
Tempe
USA
[email protected]
Storing tabular data in a way that balances storage and query efficiencies is a long standing research question in the database community.
While there are several lossless compression techniques in the literature,
in this work we argue and show that a novel Deep Learned Data Mapping (or DeepMapping) abstraction, which relies on the impressive memorization capabilities of deep neural networks, can provide better storage cost, better latency, and better run-time memory footprint, all at the same time.
Our proposed DeepMapping abstraction transforms a data set into multiple key-value mappings and constructs a multi-tasking neural network model that outputs the corresponding values for a given input key.
In order to deal with the memorization errors, DeepMapping couples the learned neural network with a light-weight auxiliary data structure capable of correcting errors. The auxiliary structure further enables DeepMapping to efficiently deal with insertions, deletions, and updates, without having to re-train the mapping.
Since the shape of the network has a significant impact on the overall size of the DeepMapping structure, we further propose a multi-task hybrid architecture search strategy to identify DeepMapping architectures that strike a desirable balance among memorization capacity, size, and efficiency.
Extensive experiments with synthetic and benchmark datasets, including TPC-H and TPC-DS,
demonstrated that the proposed DeepMapping approach can significantly reduce the latency of the key-based queries, while simultaneously improving both offline and run-time storage requirements against several cutting-edge competitors.
DeepMapping: The Case for Learned Data Mapping for Compression and Efficient Query Processing
Jia Zou
August 12, 2023
=============================================================================================
§ INTRODUCTION
Real-time computations are increasingly pushed to edge servers that have limited computational and storage capabilities compared to cloud environments. In-memory computing is also popular on edge platforms to ensure real-time response. As a result, it is critical to balance storage (e.g., disk size, memory footprint) and computational costs (e.g., query execution latency) on these platforms. However, existing solutions are not sufficiently effective in integrating compression and indexing techniques to achieve low storage costs and low query latency at the same time: (de)compression operations are usually computationally intensive and indexing techniques impose additional overheads for storing the indexing structure.
In this work, we argue for a novel data abstraction, called Deep Learned Data Mapping (or DeepMapping), which leverages deep neural networks to seamlessly integrate the compression and indexing capabilities.
This is illustrated in Figure <ref> with a sample scenario, where a tabular dataset is represented as two mappings from the key to the attributes and , respectively. In DeepMapping, these two mappings are stored as one multi-tasking neural network which takes as input feature and outputs and as labels.
As we show in the paper, DeepMapping achieves better storage cost and runtime latency at the same time, with
one-time, fully automated model search and training. Most importantly, DeepMapping achieves zero accuracy loss and provides flexibility in query/scan/update/insertion/deletion using a novel auxiliary structure.
The proposed DeepMapping abstraction can potentially be applied to a broad class of database problems that rely on the key-value data, such as hash maps that are used in hash-based join and aggregation, and key-value stores, a critical component of modern data management and processing platforms[
While DeepMapping can also support more complex data operations such as join and aggregation, we leave the investigation of complex query processing as future work.].
§.§ Opportunities in Learned Data Mapping
As outlined above, this work explores the possibility of leveraging the impressive memorization capabilities of neural networks <cit.> for key-value based maps for compression and efficient query processing.
In developing DeepMapping, we are motivated by several opportunities brought by deep neural network models:
∙ Compressibility for size reduction.
Deep learning models usually have significantly smaller sizes compared to its training datasets. For example, the common crawl data set has 2.6 billion of tuples and is 220 Terabytes in size <cit.>, yet the embedding model trained on this dataset, using the language-agnostic BERT sentence embedding model, is just 1.63 Gigabytes in size <cit.>.
This provides opportunities for significant gains in storage complexities.
Naturally, however, compressibility is a function of the statistical properties of the data, such as the underlying key-value correlations (Figure <ref>).
In this work, we investigate how these affect the resulting model size.
∙
Hardware acceleration opportunities. In general, (batched) inference computations of a neural network can be accelerated using hardware such as Graphics Processing Units (GPU) processors, particularly for large-scale models and large batches.
This provides opportunities for significantly improving the query throughput.
While there exist related works exploiting these opportunities for efficient indexing, approximate query processing, and compression (See Section <ref>), DeepMapping are facing unique challenges.
§.§ Challenges
∙
Accuracy. Although the universal approximation theorem <cit.> states that given a continuous function defined on a certain domain, even a neural network with a single hidden layer can approximate this continuous function to arbitrary precision, it is difficult to derive a tight bound for the accuracy of the learned data mapping <cit.>. Moreover, many database applications in production do not accept approximate query results <cit.>. So the first challenge is
to provide 100% accurate query results.
∙ Updates. Although simple key-based look-up queries can be implemented as inference operations over neural models, it is not straightforward to implement insert, delete, and update operations. Particularly,
relying on incremental model (un)learning for the above operations
can result in "catastrophic forgetting" issues <cit.>. Moreover, retraining the model upon every batch of modification is also infeasible due to the non-trivial training overheads. So the second challenge is
to implement modification queries.
∙ Model Search. The memorization performance of the DeepMapping depends on the underlying neural network model. Obviously, it would be difficult and laborious for developers to manually search for architectures to replace tabular data. Therefore, the third critical challenge is
to automate and optimize the model search process.
These challenges are formalized as desiderata in Section <ref>.
§.§ Our Contributions: DeepMapping
To address these challenges, we introduce a novel DeepMaping approach, outlined in Figure <ref> (and detailed in Section <ref>).
(1) A Novel Hybrid Data Representation (Sec. <ref>)
As outlined above,
a relatively simple neural network can often memorize a large portion of the data, but achieving the last-mile accuracy requires a large and complicated model <cit.>, which can be counter-productive for our purposes.
Moreover, when querying non-existing data, a neural network may be incapable of telling whether this record exists or not.
To tackle these challenges, instead of trying to increase the model size to achieve the last-mile accuracy, we propose to couple a relatively simple and imperfect neural network with a light-weight auxiliary structure that manages the mis-classified data
and maintains an existence indexing:
∙
A Compact, Multi-Task Neural Network Model trained to capture the correlations between the key (i.e. input features)
and the values (i.e. labels) of a given
key-value data set.
To memorize the data with multiple attributes, we propose to train a multi-task neural network, where each output layer outputs the value for its corresponding attribute.
The query answering process is implemented through batch inference, where the model takes a (batch of) query key(s) as input, and outputs the predicted value(s).
∙
An Auxiliary Accuracy Assurance Structure compresses the part of the mappings that are mis-classified by the model, as well as a compact snapshot of the keys, to ensure the query accuracy.
While the neural network memorizes most of the data, the auxiliary data structure memorizes a small amount of mis-classified data to achieve 100% overall accuracy on data query;
an additional bit vector is used to record the existence of the data, which can also be leveraged to implement insertion, deletion, and updates.
(2)
Multi-Task Hybrid Architecture Search (MHAS)
(Sec. <ref>)
In order to meet the compression and latency performance desiderata, to be formalized in Section <ref>,
the hybrid architecture needs to meet a set of goals: (a)
The architecture should maximize the sharing of model layers/parameters across the inference tasks corresponding to different mappings. (b) The
model should specialize well for tables with attributes that have heterogeneous types and different distributions.
To help identify an effective hybrid architecture, we propose a directed acyclic graph (DAG) abstraction of the search space and a learning based multi-task search strategy that adaptively tunes the number of shared layers and private layers to balance the sharing of parameters across inference tasks, while simultaneously
supporting specialization for each task.
(3) Workflows for Insertions, Deletions, and Updates (Sec. <ref>)
To address the challenge of supporting data modifications, we propose a lazy-update process which re-purposes the light-weight auxiliary structure outlined above by materializing the modification operations in this structure, if the model cannot capture those modifications. The system triggers retraining of the neural network model only when the size of the auxiliary structure exceeds a threshold.
Other Data Operations. While, in this paper, we focus on mapping and data updates, we note that the proposed data structure can also support more complex operations.
For example, range scans can be implemented efficiently by running a batch inference over the samples in a given key range, and the existence index can then be used to prune the inference results. We leave the investigation of more complex data operations and query processing as future work.
We implemented the proposed approach and assessed its effectiveness in various contexts. We have conducted extensive experiments with synthetic and benchmark datasets, including TPC-H and TPC-DS.
The evaluation results demonstrated that the proposed DeepMapping approach provides a better balance among memorization capacity, size, and efficiency against several cutting-edge competitors, such as Z-Standard <cit.>, Run-Length-Encoding (RLE) <cit.>, LZO <cit.>, Domain-Guided Prefix Suppression (DGPS) <cit.>, Byte Dictionary <cit.>, and Delta Compression (DComp) <cit.>. It
significantly reduces the latency of the key-value mapping queries, while simultaneously improving both offline and run-time storage requirements.
§ RELATED WORKS
Learned Indexing.
Driven by the desirable properties of the neural networks outlined in Section <ref>, in recent years, several deep learning based techniques, commonly known as learned index structures<cit.>, have been proposed to improve the computational efficiency of indexing structures.
These techniques apply machine learning to capture the correlation between the keys and the positions of the queried values in underlying storage. There also exist works <cit.> that use a machine learning model to replace the hash function for hash indexing. While these techniques achieved better storage and query efficiency compared to existing indexing techniques,
the problem we target in this paper is different from learned indexes in several critical ways:
* Generally speaking, learned indexing predicts positions in a (sorted) array, which is commonly posed as a regression task. We, however, are aiming to learn a direct key-value mapping – moreover, the values can be discrete or categorical.
* For learned indexing, if the queried value is not found in the predicted position, the search can continue, as usual, on the (sorted) array until the value is found or determined as non-existing. This is not possible for learning a direct key-value mapping.
* Learned indexing only compresses the indexing structure, but will not compress the data. This is a key difference with our proposed technique, which combines both lossless data compression and indexing and strikes an even better trade-off among overall storage efficiency and retrieval efficiency.
Learned Approximate Query Processing (AQP).
Recent works <cit.> applied learning-based techniques to improve AQP or AQP with differential privacy <cit.>. Their solution and theoretical bounds only work for range aggregation queries. Different from their works, we focus on lossless query processing by leveraging a novel auxiliary structure that manages mis-classified data. While this work focuses on key-based look-up queries, it is easily extensible to more complicated queries. For example, a scan operation can be performed as a batch inference based on the existence index. Key-based join and aggregation queries can also use our hybrid structure as a hash table.
Compression.
Abundant works <cit.> focus on the lossless compression of high-dimensional data using a deep learning approach. The idea is to choose a statistical model that closely captures the underlying distribution of the input data and develop a scalable compression/encoding algorithm that exploits the statistical model. These works do not maintain the key-value mappings and thus cannot accelerate the queries on top of the data. Different from these works, our work focuses on balancing the data compression ratio and data retrieval efficiency.
§ PROBLEM AND DESIDERATA
We first formulate the problem and desiderata as follows:
* Single-Relation, Single-Key Mapping. In a relation, a key may consist of multiple attributes.
Let R(K_1, …, K_l, V_1, …, V_m) be a relation where K=(K_1, …, K_l) represents a key that consists of l attributes and V_1 through V_m are m value attributes. The goal is to identify a mapping data structure, d_μ() which, given a key k=(k_1, …, k_l) ∈ domain( K), and a target attribute V_i ∈{V_1, …, V_m}, it
returns
d_μ(k, V_i) =
π_V_i(σ_K_1=k_1 ∧…∧ K_l=k_l(R)). In this work, due to space limitations, we focus on this problem, while our approach is extensible to the following two problems.
* Single-Relation, Multiple-Key Mapping.
Note that in practice a relation may have multiple candidate keys in addition to the primary key, and it may contain one or more foreign keys with each foreign key linking to the primary key of another table. We further represent the collection of keys as 𝒦=( 𝒦^1, …, 𝒦^𝓈). Therefore, we more generally seek a multiple-mapping data structure, d_μ() which takes a key k^i=(k_1, …, k_l) ∈ domain( K^i∈𝒦)
along with a target value attribute V_j ∈{V_1, …, V_m},
it returns d_μ(k^i, V_j) =
π_V_j(σ_K^i_1=k_1 ∧…∧ K^i_l=k_l(R)).
* Multiple-Relation, Multiple-Key Mapping.
Furthermore, in many contexts, such as databases with star schemas, the same attribute can serve as key for one relation (e.g., the fact table) and serve as foreign key for other relations (e.g., the dimension tables). Let us consider ℛ = {R_1, …, R_r} be a set of relations. In this case, we seek a multiple-mapping data structure, d_μ(), which takes a key k^i=(k_1, …, k_l) ∈ domain( K^i∈𝒦), a target value attribute V_j ∈{V_1, …, V_m},
along with a target relation R_u ∈ℛ,
and it returns d_μ(k^i, V_j, R_u) =
π_V_j(σ_K^i_1=k_1 ∧…∧ K^i_l=k_l(R_u)).
For any of these three alternative problem scenarios, our key desiderata from the mapping data structure, d_μ(), are as follows:
* Desideratum #1 - Accuracy: d_μ() is accurate – i.e., it does not miss any data and it does not return any spurious results.
* Desideratum #2 -Compressibility: The offline disk storage space and runtime memory footprint of d_μ() structure are small.
* Desideratum #3 -Low latency: The data structure d_μ() is efficient and, thus, provides low data retrieval latency.
* Desideratum #4 - Updateability: d_μ() is updateable with insertions of new key-value rows and deletions of some existing keys from the database. Moreover, the data structure also allows changing the value of an existing key.
§ DEEPMAPPING ARCHITECTURE
In DeepMapping, we encode the mapping from each key to each column as an inference task leveraging a neural network model that predicts, given a key, the corresponding column as label.
§.§ Shared Multi-Task Network
In this paper, we are targeting key-value mappings where the key and values are discrete, such as integers, strings, categorical values. Without loss of generality, we consider a sequence of fully connected layers as the underlying neural network architecture, where the strings or categorical data
are encoded as integers using one-hot encoding before training and inference.
In order to achieve high compression rates, some of these layers can be shared across multiple inference tasks within a relation and across relations that have foreign key reference relationships. At the same time,
other layers may be private to each individual inference task to improve the accuracy of that particular inference task.
Considering the example as illustrated in Figure <ref>,
we train a multi-layer fully connected neural network with two output layers, one for the Order_Type column and the other for the Order_Status column, respectively.
The first few layers of the neural network, which capture the structures common to both inference tasks, are shared; the latter layers of the neural network, which capture the output attribute specific structures, on the other hand are private to specialize the network for each individual output attribute.
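For concreteness, a minimal PyTorch sketch of such a shared/private multi-task network is shown below; the layer widths, names, and depth are illustrative and are not the architecture produced by the model search described later.

import torch.nn as nn

class SharedMultiTaskNet(nn.Module):
    # Shared trunk abstracts the one-hot encoded key; one private head per
    # value column (here Order_Type and Order_Status), as in the example above.
    def __init__(self, key_dim, n_order_type, n_order_status, hidden=128):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(key_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.order_type_head = nn.Linear(hidden, n_order_type)      # private layers
        self.order_status_head = nn.Linear(hidden, n_order_status)  # private layers

    def forward(self, one_hot_key):
        z = self.shared(one_hot_key)
        return self.order_type_head(z), self.order_status_head(z)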
§.§ Ensuring 100% Accuracy (Desideratum #1)
The first and the most important, of our four desiderata, listed in Section <ref>, is accuracy: i.e., DeepMapping should not miss any data and should not return spurious results.
Why not simply try to overfit the data? As we discussed earlier, in theory (by the universal approximation theorem <cit.>) a sufficiently large network should be able to perfectly memorize and given data.
In most machine learning applications, this is referred to as overfitting and is in fact undesirable:
* A model that is overfitting is often unnecessarily large (and hence expensive to train and infer). In order to achieve the last-mile accuracy, e.g., to improve the accuracy from 90% to 100%, the required neural network model size will significantly increase, often by multiple times <cit.>.
* Models that are overfitting often do not generalize to unseen data, which makes them ineffective in generalizing to the prediction tasks for insertions.
Therefore, seeking an arbitrarily large model to achieve 100% accuracy is not appropriate, considering the desiderata include updateability, compressibility and low latency.
Can we prevent models from hallucinating when a key is missing?
In addition, when querying the non-existing data, it is challenging for the neural networks to accurately tell whether the tuple exists or not.
This is because the inference task will predict an output even when the data is not seen in the input – this makes neural network based solutions useful for generalizing, but in our context, any such output will be spurious and will need to be avoided, as a hallucination.
Solution: Lightweight auxiliary Accuracy Assurance Structures.
To address these issues, we propose to use a novel hybrid data representation, consisting of (1) a compact neural network model that memorizes the
data (denoted as M), (2) an auxiliary accuracy assurance table (denoted as T_aux) that compresses any misclassified data, and (3) an existence bit vector (denoted as V_exist), whose range corresponds to the key range – intuitively, each bit marks the existence of the corresponding key.
§.§.§ Encoding of the Auxiliary Structures
The auxiliary accuracy assurance table, T_aux, is filled by running all the keys in the input key-value mapping through the trained model and checking whether the inferred result is matching the correct output values – if not (i.e., if the key-value mapping is mis-learned) the corresponding key-value pair is stored in the auxiliary table.
The mis-learned key-value pairs are sorted by the key, the domain of keys is equally partitioned into p partitions, and each mis-learned key-value pair is stored in the corresponding partition in sorted order. To further reduce the storage overhead, we apply Z-standard compression<cit.> on each partition before they are stored.
In this work, we focus on keys that consist of integers and use a single dynamic bit array to serve as an existence index for the keys, denoted as V_exist, where each bit marks the existence of the corresponding key. It is also compressed
using Z-standard <cit.>. In addition, the decoding map (denoted as f_decode), which converts predicted labels from integer codes (resulting from one-hot encoding) to their original format, is also part of the auxiliary structure.
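As a simplified illustration of this construction, the sketch below builds T_aux and V_exist for a single value column with integer keys and integer-encoded values; predict stands for the trained model's point prediction, and the number of partitions and the int64 payload layout are illustrative assumptions (the system itself uses 1MB partitions).

import numpy as np
import zstandard as zstd
from bitarray import bitarray

def build_auxiliary(keys, values, predict, num_partitions=8):
    # Build the compressed mis-classification table T_aux and existence index V_exist.
    v_exist = bitarray(int(keys.max()) + 1)
    v_exist.setall(0)
    for k in keys:
        v_exist[int(k)] = 1

    wrong = predict(keys) != values                    # pairs the model mis-learned
    mis_keys, mis_vals = keys[wrong], values[wrong]
    order = np.argsort(mis_keys)
    mis_keys, mis_vals = mis_keys[order], mis_vals[order]

    bounds = np.linspace(0, int(keys.max()) + 1, num_partitions + 1)
    cctx = zstd.ZstdCompressor()
    t_aux = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):        # equal partitions of the key domain
        sel = (mis_keys >= lo) & (mis_keys < hi)
        payload = np.stack([mis_keys[sel], mis_vals[sel]]).astype(np.int64).tobytes()
        t_aux.append(cctx.compress(payload))
    return t_aux, bounds, cctx.compress(v_exist.tobytes())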
§.§.§ Lookup Process
The look-up process using the proposed hybrid data representation
is illustrated in Algorithm <ref> - the key aspects of the process are outlined below:
Inference. DeepMapping allows for simultaneous querying of a batch of encoded keys, leveraging parallel inference
using the GPU.
Existence check.
While the GPU is busy with the inference, we simultaneously check, using the CPU, the existence of each query key in the V_exist data structure to eliminate any spurious results.
Validation.
For any key that passes the existence check, we check using CPU the auxiliary table, T_aux, to see whether this key was mis-learned by the model.
To look up the query key in the auxiliary table, we first locate its partition, bring it to the main memory, and decompress it; we then apply a binary search to look up the value within the partition. If the query key is located in the auxiliary table, it means that the key-value pair was mis-learned by the model and we return the value from the auxiliary table.
Otherwise, the model's output is returned as the result.
Note that we provide two different strategies to handle the decompression of the partitions (line 6 in Algorithm <ref>) to facilitate the validation process:
* Memory-optimized validation strategy. In memory constrained devices, we free up the space of the last decompressed partition before bringing the next partition of the auxiliary table to memory in order to keep the run-time memory usage low. In this case, query keys in a batch are sorted before validation so that each partition is decompressed only once for each query batch.
* Latency-optimized validation strategy.
The latency-efficient strategy caches all the decompressed partitions of the auxiliary table. These cached partitions can be used for future queries and this would eliminate the need to read the partitions from secondary storage and decompress them. This reduces the lookup times, but requires more memory for caching.
Note that, while their primary benefit is to ensure 100% accuracy without necessitating a prohibitively large model, as we see in Section <ref>, these auxiliary structures also enable insert, delete, and update operations on the data.
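Putting the three steps together, the sketch below mirrors the look-up flow for a single value column under the latency-optimized strategy; it assumes the partition layout produced by the build sketch in the previous subsection, and all helper names are illustrative.

import numpy as np
import zstandard as zstd

def decode_partition(raw):
    # Hypothetical inverse of the partition encoding: two int64 rows (keys, values).
    arr = np.frombuffer(raw, dtype=np.int64).reshape(2, -1)
    return dict(zip(arr[0].tolist(), arr[1].tolist()))

def lookup(query_keys, predict, v_exist, t_aux, bounds, cache):
    preds = predict(query_keys)                        # batched (GPU) inference
    dctx = zstd.ZstdDecompressor()
    out = {}
    for k, p in zip(query_keys, preds):
        if not v_exist[int(k)]:                        # existence check on the CPU
            out[int(k)] = None                         # non-existing key: no spurious result
            continue
        pid = min(int(np.searchsorted(bounds, k, side="right")) - 1, len(t_aux) - 1)
        if pid not in cache:                           # decompress each partition once, then cache it
            cache[pid] = decode_partition(dctx.decompress(t_aux[pid]))
        out[int(k)] = cache[pid].get(int(k), p)        # mis-learned pairs override the model
    return out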
§.§ Multi-Task Hybrid Architecture Search (MHAS) (Desiderata #2, 3)
A critical part of the data registration process is to select a neural network architecture to fully memorize the data while achieving the desiderata listed in Section <ref>.
Let us be given a dataset R that consists of n tuples, where each tuple is represented as <x, y>. Here, x represents a collection of one or more attributes that serve as the key so that each key uniquely and minimally identifies a tuple. y represents a collection of m attributes that are disjoint with x and serve as the values. Our goal is to identify a
hybrid data representation, M̂ = ⟨ M, T_aux, V_exist, f_decode⟩, consisting of a neural network model, M along with the auxiliary structures, T_aux, V_exist, and f_decode, satisfying our desiderata.
As aforementioned in Sec. <ref>, we consider a multi-layer fully connected neural network to memorize a significant portion of the data in the given relation(s).
As depicted in Figure <ref>, some of layers (which help abstract the key) are shared across multiple data columns, while some others may be private to each output attribute.
Therefore, the problem needs to be posed as a multi-task model search problem.
Neural architecture search is an active research area <cit.>.
A key distinction of our work is that, instead of searching for a neural network, we are searching for a hybrid data structure, which as a whole achieves the key desiderata.
To develop our multi-task hybrid architecture search (MHAS) strategy, we build on the efficient neural architecture search (ENAS) strategy <cit.>, which
supports parameter sharing across sampled model architectures.
While the original motivation of this parameter sharing is to improve the efficiency of NAS by forcing all sampled child models to share weights to eschew training each model from scratch, in this paper,
we argue that this approach can also
help
reduce the model search overhead
in multi-task learning by
encouraging parameter sharing across multiple tasks.
Note that previous works on transfer and multi-task learning has shown that parameters learned for a model on a given task can be used for other tasks with little to no modifications <cit.>. This observation also underlies the current success and wide-spread use of large pre-trained models, such as large language models <cit.>, across diverse tasks.
In order to achieve our desideratum of identifying a small network well-tailored to our data, the MHAS search strategy must distinguish shareable and non-shareable layers across the different tasks.
At the surface, similar to ENAS, our MHAS strategy consists of two components:
a search space, which defines how the components of the underlying neural network can be connected to each other to enumerate new neural architectures, and
a controller algorithm which seeks a structure for the target model within this search space.
The MHAS search algorithm, however, differs from ENAS in several ways: (a) MHAS is extended with the ability to
search for multi-task models, with shareable and non-shareable layers.
(b)
To balance the compressibility brought by shared layers and the accuracy brought by private layers, MHAS searches within a search space in which the numbers of shared hidden layers and of private hidden layers for each task are not fixed.
(c) Since our goal is not to search for a neural network, but a hybrid structure, the search is governed by an objective function that captures our desiderata.
We next
formalize the search space used by our algorithm in Section <ref> and we then present
the controller algorithm
in Section <ref>.
§.§.§ MHAS Multi-Task Search Space
We represent the search space as a two-level tree, where the root corresponds to the shared layers and each leave represents the private layers for a target column.
Figure <ref> illustrates a high-level view of the search space of a model designed to infer a relation with three value columns. In this example, the search space consists of a tree with four nodes: one for the shared layers (capturing the characteristics of the key) and three nodes for the private layers, each corresponding to a target value column.
As we discussed earlier, in this work, as the underlying neural architecture, we consider fully-connected layers[
As we also validate through experiments, tabular data can be represented well using fully connected layers <cit.>.
] (with different number of neurons).
Each node of the tree is represented by a direct-acyclic graph (DAG). A DAG contains a node representing the input layer, a node representing the output layer, and multiple nodes representing candidate hidden layers.
As illustrated in Figure <ref>, the DAG represents a search space that contains up to two hidden layers.
Edges of the DAG represent all possible data flows among these layers (with each edge corresponding to a specific model parameter tensor that connects the two layers).
Given this, the model search process can be posed as enumerating
subgraphs of the DAGs by activating and deactivating network edges – each activated directed edge represents a connection between two neural network layers. Deactivated edges represent connections that do not exist in that particular network. In Figure <ref>, red color is used to highlight activated edges: in the example, the subgraph represented with the red edges corresponds to a sampled model with a single hidden layer between the input and output layers.
§.§.§ Multi-Task Model Search Controller
The DeepMapping multi-task model search algorithm is outlined in Algorithm <ref>.
Since the goal is to learn a sequence of nodes (and train the model parameters at each edge), as in ENAS <cit.>, the DeepMapping controller is constructed as a long short-term memory (LSTM) neural network architecture.
The LSTM architecture samples decisions via softmax classifiers in an autoregressive fashion to derive a sequence of nodes in the DAG.
The
algorithm runs over N_t iterations – at each iteration, the algorithm
alternately
trains the controller parameter θ in a controller training iteration or the weights of the sampled neural architecture M through a model training iteration:
* During a controller training iteration, the weights, W, of the candidate layers in the search space are fixed. For each batch of the data, the controller samples a model from the search space and updates the controller parameter, θ, through the loss function, ℒ(M̂, D) (D represents the dataset to be compressed):
(size(M) + size(T_aux) + size(V_exist) + size(f_decode)) / size(D) (a small numeric illustration of this ratio is given at the end of this subsection).
The controller samples a model by taking a DAG node as input and picking the next DAG node among the ones that connect to it; the selected node serves as the input for the next iteration of the process. This process repeats until the output node is selected or the maximum number of steps have been used.
* During a model training iteration, DeepMapping trains the weights of the sampled neural architecture in m_epochs by fixing the controller parameters (θ). We use standard cross entropy <cit.> as the loss function to update model weights M. Since the sampled neural architecture is a subgraph of the search space, and these layers may be sampled again in future iterations, each model training iteration may contribute to improving the accuracy and convergence rate of future model training iterations by sharing parameters across iterations.
Note that each training iteration is run for size(D)/size(D_batch) training steps.
Since the memorization task may in practice need a larger number of iterations to stabilize to a desired level,
usually we choose N_m > N_c, where N_m is the number of model training iterations and N_c is the number of
controller training iterations.
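As a small numeric illustration of the controller objective, the snippet below evaluates the size ratio for made-up component sizes; the byte counts are purely illustrative and are not measurements from our experiments.

def compression_ratio(model_b, t_aux_b, v_exist_b, f_decode_b, data_b):
    # Controller loss: hybrid-structure footprint relative to the uncompressed data.
    return (model_b + t_aux_b + v_exist_b + f_decode_b) / data_b

# hypothetical sizes in bytes, for illustration only
print(compression_ratio(2_000_000, 5_000_000, 120_000, 30_000, 25_000_000))  # 0.286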
§.§ DeepMapping Updates (Desideratum #4)
Neural networks are notoriously challenging to be updated:
(a) incremental training tends to reduce the accuracy for existing data;
(b) it is difficult to tell the neural network to forget something that is already learned; and
(c) retraining incurs significant latency and thus it is often not an acceptable solution to handle updates to the data.
Therefore, to support the data modification operations insert, delete, and update, we piggy-back on the auxiliary structure that we have described in Section <ref>:
Insertions.
Given a collection of key-value pairs (D_insert) to be inserted into the DeepMapping's hybrid data structure, we first check whether the model (M) can generalize to them by running inference over the keys for the target columns. Only those key-value pairs that are incorrectly inferred need to be inserted into the auxiliary table (T_aux). The bit vector, V_exist is updated correspondingly. The insertion process is formalized in Algorithm <ref>.
Deletions. The deletion process is relatively straightforward as it can be implemented simply by marking the corresponding bit as 0 in the bit vector, V_exist. This is illustrated in Algorithm <ref>.
Updates. We treat updated key-value pairs (D_update) as mis-memorized data and insert the new values into the auxiliary table (T_aux).
Since the keys already exist (otherwise, the process would be an insertion), we do not need to update the existence index. The process is illustrated in Algorithm <ref>.
DeepMapping re-trains the model and reconstructs the auxiliary structures on the underlying data to optimize the compression ratio and query efficiency only when the auxiliary table becomes too large to satisfy the desiderata listed in Section <ref>.
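The three workflows can be summarized by the following simplified, in-memory sketch (single value column, no partition compression); the class and method names are illustrative, not the system's API.

from bitarray import bitarray

class HybridMapping:
    # Simplified view of DeepMapping's modification workflows: the model keeps
    # whatever it already predicts correctly; everything else goes to T_aux.
    def __init__(self, predict, key_domain_size):
        self.predict = predict            # trained model's point prediction for a key
        self.t_aux = {}                   # mis-learned or modified key-value pairs
        self.v_exist = bitarray(key_domain_size)
        self.v_exist.setall(0)

    def insert(self, key, value):
        if self.predict(key) != value:    # only materialize what the model cannot capture
            self.t_aux[key] = value
        self.v_exist[key] = 1

    def delete(self, key):
        self.v_exist[key] = 0             # logical delete via the existence bit

    def update(self, key, value):
        self.t_aux[key] = value           # treat as mis-memorized data

    def lookup(self, key):
        if not self.v_exist[key]:
            return None
        return self.t_aux.get(key, self.predict(key))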
§ EVALUATION
We implemented the DeepMapping hybrid structure, model search, and look-up and modification query execution^<ref>.
In this section, we evaluate the effectiveness of DeepMapping using
TPC-H <cit.>, TPC-DS <cit.>, with different scale factors (1 and 10), as well as a dataset synthesized from TPC-H and TPC-DS.
Since we are targeting data where the key consists of one or more integers and the values are categorical,
we do not consider tables and columns that do not satisfy these conditions.
Experiments are run under a Ubuntu 20.04 machine which is configured with 24 CPU Cores, 125 GB Memory, and 1 Nvidia-A10 GPU with 24 GB Memory.
§.§ DeepMapping Parameters
In the experiments, we use the following default configurations:
(a) partition size for the auxiliary accuracy assurance table: 1MB;
(b) total number of iterations (N_t) for the MHAS model search: 2000;
(c) number of model training iterations (N_m): 2000;
(d) number of controller training iterations (N_c): 40; and
(e) latency-optimized validation strategy for decompressing partitions.
We consider an MHAS search space consisting of DAGs with up to two shared hidden layers and two private hidden layers for each task.
As in <cit.>, the controller is constructed as a long short-term memory (LSTM) neural network,
with 64 hidden units.
During MHAS, the controller parameters, θ, are initialized from 𝒩(0, 0.05^2) and trained with the Adam optimizer at a learning rate of 0.00035. Each model training iteration uses a learning rate of 0.001, decayed by a factor of 0.999.
In each iteration, the model has been trained for 5 epochs
with batch size of 16384; the controller is trained for 1 epoch every 50 iterations
with batch size of 2048.
§.§ Competitors
We compare DeepMapping with lossless compression mechanisms, Domain-Guided Prefix Suppression (DGPS) <cit.>, Byte Dictionary <cit.>, Z-Standard <cit.>, Run-Length-Encoding (RLE) <cit.>, LZO <cit.>, and Delta Compression (DComp) <cit.>.
DeepMapping has been implemented in Python with the latest numpy, bitarray, tensorflow, pytorch, zstd libraries. For LZO and Z-Standard, we used public software <cit.>. Others were implemented by our team using the software platform as DeepMapping.
To be consistent, we set the partition size to be compressed to 1MB for all algorithms.
§.§ Query Generation and Evaluation Criteria
The evaluation criteria considered in this Section are selected based on the Desiderata listed in Section <ref>.
∙ Latency and Storage
We consider the size of the key-value map on the disk, including the neural network model size and the size of the lightweight auxiliary table, and the run-time memory requirements. We also consider end-to-end latency for data querying.
Latency and run-time memory usage are measured by considering 5 queries, each looking up B randomly selected keys.
∙ Data Manipulation
(1) Updates and Deletes: For these cases, we randomly select X% of the data for revision.
(2) Inserts: For insertions, we put aside Y% of the data, such that (Y× 100)/(100-Y) = X; after the remaining (100-Y)% of the data is stored in DeepMapping, the data that were put aside are then inserted into DeepMapping (this corresponds to an X% data insertion).
∙ Neural Architecture Search
We also ran a set of experiments to establish the effectiveness of the proposed MHAS strategy.
§.§ Evaluation Results
§.§.§ High-level Overview of Storage vs. Latency Trade-Off
Before presenting a detailed analysis of DeepMapping, we first provide an overview of the offline storage and end-to-end latency trade-offs.
First, in order to understand the effectiveness of DeepMapping under different configurations, we create synthesized datasets by sampling
single/multiple column tables with low/high key correlations,
of different sizes (100M, 1GB, 10GB) from the TPC-H and TPC-DS benchmarks. For each configuration, we measure the compressed key-value map size and end-end latency of various queries in DeepMapping and compare them against the uncompressed baseline and an alternative baseline, Z-Standard (which, as we later see in the detailed experiments, is the best performing competitor among all the competitors considered).
As we see in
Figure <ref>,
* DeepMapping provides highest benefits for large data sets, especially when there is relatively high correlation between the key and values, and
* while DeepMapping is slower than uncompressed data for large data with low key-value correlation,
in all considered cases, DeepMapping provides a very high compression ratio, easily outperforming Z-standard in terms of storage-latency trade-off.
In Figures <ref> through <ref>,
we provide a more detailed comparisons of the offline storage and end-to-end latency trade-offs for various baseline/competitor algorithms for TPC-H and TPC-DS benchmark data and for different data scales and query workloads. These results demonstrate that the DeepMapping algorithm provides the best trade-off among all competitors for all scenarios – this is true except for an outlier case (TPC-DS item data with scale factor 1), where the data is very small (only 0.19MB); in this case, while it provides the best overall latency, DeepMapping is not able to provide a competitive compression rate as we see Figure <ref>(a).
In the figures, uncompressed data is always at the point (1.0, 1.0). Note that, generally speaking, configurations closer to (0.0, 0.0) are more desirable in terms of compression ratios and latencies. The dashed arc indicates the points in the space with the same L2 distance to (0.0, 0.0) as the DeepMapping algorithm – hence configurations outside of this arc have a less desirable compression/latency trade-off than DeepMapping; the → indicates cases where the access latency is more than 3× slower than accessing the uncompressed data.
§.§.§ Offline and Online Storage
Table <ref> provides a summary of the offline storage requirements of different competitors for the TPC-H benchmark, with scale factors 1 and 10.
As we see in the table, DeepMapping generally provides the best overall compression, especially for the larger scenario (scale 10).
As aforementioned, among the baselines,
Z-Standard is the best competitor, especially
for smaller datasets, such as customer and part tables, with scale factor 1.
In Table <ref>, we see that DeepMapping consistently provides the best or second best compression, especially for benchmark data with larger scale factors.
Note that in the TPC-DS benchmark, there are more columns with large cardinalities (larger than 20). A consequence of this is that the memorization is somewhat harder and consequently, TPC-DS is generally harder to compress than the TPC-H data set.
In contrast, the catalog_page (CP) and customer_demographics (CD) columns have very strong correlations with the key column and
this makes these particular maps relatively easy to compress.
Overall, these tables show that
the proposed DeepMapping approach can provide significant data size reductions,
with an average value of 70.8% on the considered data sets compared to the uncompressed key-map data size.
Figures <ref> and <ref> provide a detailed breakdown of the DeepMapping storage mechanism for the TPC-H and TPC-DS data sets for different scale factors. As we see in these charts, in most of the scenarios the bulk of the storage is taken by the auxiliary table; this justifies our MHAS search algorithm, which considers not only the neural network size, but the size of the entire hybrid architecture.
[5]Size of decoding map is included in auxiliary table.
§.§.§ Latency
In Tables <ref> and <ref>, we consider the end-to-end query latency for TPC-H and TPC-DS benchmarks, for different scale sizes and for different numbers of queries.
As we see in these two tables, DeepMapping consistently outperforms the competitors in terms of query latency, especially when the data size and the number of queries to be processed get larger.
Interestingly, we see that, for several of the competitors, the need to decompress data significantly damages the lookup performance compared to the uncompressed baseline.
In contrast, DeepMapping consistently outperforms the uncompressed key-value map, providing up to 3× query speedup against the uncompressed baseline.
§.§.§ Latency- vs. Memory- Optimized Validation
Note that DeepMapping needs to decompress the necessary partitions of the auxiliary quality assurance table; however, in most cases this does not cause significant overhead, as illustrated in Figure <ref>.
Moreover, in the default latency-optimized validation approach,
as the number of queries increases, the relative overhead of the decompression process becomes significantly smaller.
As we see in Table <ref> memory-optimized validation strategy can significantly reduce peak memory consumption, but as we see in Table <ref>, this comes with additional latency due to reduced partition re-use, rendering the latency of DeepMapping second to the Uncompressed strategy. Nevertheless, the overall latency is still competitive and is the best among all compression baselines even under memory-optimized validation.
§.§.§ Data Modifications (Inserts/Updates/Deletes)
In order to investigate the performance of DeepMapping for data manipulation operations,
we reuse the synthetic dataset that
samples key-value mappings with low/high correlation of different sizes (100MB, 1GB, 10GB) from the TPC-H and TPC-DS benchmarks. We then compare the storage and latency ratios against uncompressed data and Z-Standard, which is the best baseline competitor based on the results presented so far. We implement modifications as follows:
(a) For inserts, we use 90% of the data as the base table and add the remaining 10% data to the original dataset;
(b) For updates, we randomly update 10% in the base table, with the 10% data put aside for this purpose.
(c) For deletes, we use 100% of the data as the base table and randomly delete 10% data.
As illustrated in Figures <ref> through <ref>, for data modification operations, DeepMapping provides the best storage vs. latency trade-off and easily outperforms the closest competitor Z-standard. While uncompressed data performs faster in certain configurations with low correlation, DeepMapping provides better compression even in those cases and therefore is the preferred option in resource constrained settings.
§.§.§ MHAS - Multi-Task Hybrid Architecture Search
In this section, we further evaluate the effectiveness of our proposed MHAS algorithm. We use the TPC-H data set, with scale factor=1, with the configuration described in Sec. <ref>.
In Figure <ref>, we plot the
compression ratio of the sampled models against the training process of the controller.
As we see here,
at the very beginning of the search stage, there is a "flat" region where the compression ratio is not yet decreasing; this is because, at the early stages, the sampled models are not yet capable of memorizing the data.
In fact, at this stage, the size of the data structure may be larger than the original data since a lot of the memorization work is left to the auxiliary table.
As the controller training proceeds, however, the sampled models are quickly getting better at memorizing the data and the compression ratio improves significantly.
Finally, Figure <ref> illustrates how the compression ratio vs. latency trade-off evolves during the search. In the figure, each dot corresponds to the performance of a sampled architecture and different colors correspond to the different stages of the search. As we see in this figure, initially, samples may cover a large range indicating that the model search has not stabilized; as the search progresses, however, the samples start clustering in an increasingly shrinking region of the trade-off space, illustrating the effectiveness of the MHAS strategy proposed in this paper.
§ CONCLUSIONS AND FUTURE WORKS
In this paper, we presented DeepMapping, which utilizes a hybrid architecture. It uses a lightweight auxiliary data structure to augment neural network models for better exploiting the memorization, compressibility, and GPU acceleration opportunities brought by deep learning without any accuracy loss.
It also includes a novel multi-task hybrid architecture search strategy, MHAS. While existing AutoML strategies pursue high accuracy or small size of the model, MHAS targets minimizing the overall compression ratio of the entire hybrid architecture. MHAS automatically balances the numbers of shared and private layers (for each attribute) in the search space.
Our experimental results, reported in the previous sections, show that DeepMapping strikes significantly better trade-offs among query latency and storage efficiency for categorical and integer data, compared to state-of-art lossless compression techniques.
The proposed DeepMapping abstraction can readily facilitate various database problems that rely on the key-value data, such as hash maps that are used in hash-based join and aggregation, and key-value stores, in resource constrained settings, such as edge computing and in-memory computing.
Future work will include exploring the approach for scenarios that involve continuous numerical attributes
and more complex
multi-table queries.
* neural networks can be utilized to compress the data, and the correlation between the key columns and value columns can be leveraged to further compress the data during the training process, which is also called the memorization process.
* when querying data using our approaches, the neural network can be seen as serving both data storage and indexing without additional storage overhead.
* our approaches achieve the best trade-off between compression ratio and end-to-end latency on most of the tables we evaluated from TPC-H and TPC-DS, with scale factors 1 and 10.
* by studying how data manipulation affects the compression ratio and the end-to-end query performance, we compared our approaches against Z-Standard on three types of representative tables (single-column-low-correlation, multiple-column-low-correlation, multiple-column-high-correlation) with different scales (100MB, 1GB, 10GB); our approaches significantly outperformed Z-Standard in both compression ratio and query performance.
* we also observed that there is a turning point: as the size increases, the decompression of the auxiliary data structure can become a bottleneck for the end-to-end latency. How to automatically partition the data to improve the performance is left as future work.
* another bottleneck: a table with multiple large-cardinality columns makes the memorization process harder and reduces the ability to further compress the data.
* limitation: the approach is not able to compress data with continuous values or text.
|
http://arxiv.org/abs/2307.04108v1 | 20230709063120 | Asynchronous Proportional Response Dynamics in Markets with Adversarial Scheduling | [
"Yoav Kolumbus",
"Menahem Levy",
"Noam Nisan"
] | cs.GT | [
"cs.GT",
"cs.MA",
"econ.TH",
"math.DS"
] |
Model-Based End-to-End Learning for Multi-Target Integrated Sensing and Communication
José Miguel Mateos-Ramos, Student Member, IEEE,
Christian Häger, Member, IEEE,
Musa Furkan Keskin, Member, IEEE,
Luc Le Magoarou, Member, IEEE,
Henk Wymeersch, Senior Member, IEEE
This work was supported, in part, by a grant from the Chalmers AI Research Center Consortium (CHAIR), by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), the Swedish Foundation for Strategic Research (SSF) (grant FUS21-0004, SAICOM), Hexa-X-II, part of the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101095759., and Swedish Research Council (VR grant 2022-03007). The work of C. Häger was also supported by the Swedish Research Council under grant no. 2020-04718.
José Miguel Mateos-Ramos, Christian Häger, Musa Furkan Keskin and Henk Wymeersch are with the Department of Electrical Engineering, Chalmers University of Technology, Sweden (email: [email protected]; [email protected]; [email protected]; [email protected]).
Luc Le Magoarou is with INSA Rennes, CNRS, IETR - UMR 6164, F-35000, Rennes, France (email: [email protected]).
Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023
We study Proportional Response Dynamics (PRD) in linear Fisher markets where participants act asynchronously. We model this scenario as a sequential process in which in every step, an adversary selects a subset of the players that will update their bids, subject to liveness constraints. We show that if every bidder individually uses the PRD update rule whenever they are included in the group of bidders selected by the adversary, then (in the generic case) the entire dynamic converges to a competitive equilibrium of the market. Our proof technique uncovers further properties of linear Fisher markets, such as the uniqueness of the equilibrium for generic parameters and the convergence of associated best-response dynamics and no-swap regret dynamics under certain conditions.
§ INTRODUCTION
A central notion in the study of markets is the equilibrium: a state of affairs where no single party wishes to unilaterally deviate from it. The main benefit of focusing on the notion of equilibria is in what it ignores: how the market can reach an equilibrium (if at all). This latter question is obviously of much interest as well, especially if you wish to consider computational aspects,[As we know that finding an equilibrium may be computationally intractable in general.] and a significant amount of research has been devoted to studying “market dynamics” and their possible convergence to an equilibrium. Almost all works that study market dynamics consider synchronous dynamics.
Synchronous Dynamics:
Every time step t, all participants update, simultaneously, their behavior based on the state
at time t-1.
Such synchronization is clearly difficult to achieve in real markets, and so one might naturally wonder to what extent full synchrony is needed, or whether convergence of market dynamics occurs even asynchronously. There are various possible levels of generality of asynchrony to consider. The simplest model considers a sequential scenario where at every time step t, an adversary chooses a single participant, and only this participant updates their behavior based on the state at time t-1. The adversary is limited to adhere to some liveness condition, such as scheduling every participant infinitely often or at least once every T steps. In the most general model <cit.>, the adversary may also delay messages, causing players to reply to dated information. In this paper, we focus on an intermediate level of allowed asynchrony, where updates may happen in an arbitrary asynchronous manner, but message delays are always smaller than the granularity of activation.
Activation Asynchrony:[In <cit.> this was termed “simultaneous.”] Every time step t, an arbitrary subset of participants is chosen by an adversary and all of these participants update their behavior based on the state at time t-1. The adversary must adhere to the liveness condition where for every participant some set that includes him must be chosen at least once every T consecutive steps.
The market dynamics that we study in this paper are proportional response dynamics (PRD) in linear Fisher markets, a model that has received much previous attention <cit.> and for which synchronous convergence to equilibrium is known.
While there are a few asynchronous convergence results known for other dynamics, specifically for tatonnement dynamics <cit.>, there are no such results known for proportional response dynamics, and achieving such results has been mentioned as an open problem in <cit.>.
Fisher Market with Linear Utilities:
There are n players and m goods. Each player i has a budget B_i and each good j has, w.l.o.g., a total quantity of 1. Buyer i's utility from getting an allocation
x_i=(x_i1,...,x_im)
is given by u_i(x_i) = ∑_j a_ij x_ij,
where the parameters a_ij≥ 0
are part of the definition of the market. A market equilibrium is an allocation
X = (x_ij) (where 0 ≤ x_ij≤ 1) and a pricing
p = (p_j) with the following properties.
(1) Market clearing: for every good j it holds that
∑_i x_ij = 1;
(2) Budget feasibility: for every player i it holds that ∑_j x_ij p_j ≤ B_i; and
(3) Utility maximization: for every player i and every alternative allocation
y = (y_1,...,y_m) with ∑_j y_j p_j ≤ B_i we have that u_i(x_i) ≥ u_i(y).
Proportional Response Dynamics:
At each time step t, each player i will make a bid b^t_ij≥ 0 for every good j, where ∑_j b^t_ij = B_i. In the first step, the bid is arbitrary. Once bids for time t are announced, we calculate p^t_j = ∑_i b^t_ij and allocate the items proportionally: x^t_ij = b^t_ij/p^t_j, providing each player i with utility u^t_i = ∑_j a_ijx^t_ij. At this point, player i updates his bids for the next step by bidding on each item proportionally to the utility he obtained from the item: b^t+1_ij = B_i · a_ijx^t_ij / u^t_i.
From the perspective of the player, proportional response updates can be thought of as a simple parameter-free online learning heuristic, with some similarity to regret-matching <cit.> in its proportional updates, but considering the utilities directly rather than the more sophisticated regret vector loss.
It is not difficult to see that a fixed point of this proportional response dynamic is indeed an equilibrium of the Fisher market. Significantly, it was shown by <cit.>
that this dynamic does converge,
in the synchronous model, to an equilibrium. As mentioned, the question of asynchronous convergence
was left open.
We provide the first analysis of proportional response dynamics in the asynchronous setting, giving a positive answer to this open question for our “intermediate” level of asynchrony.
For generic linear Fisher markets, proportional response dynamics with adversarial activation asynchrony, where each player is activated at least once every T steps, converge to the unique market equilibrium.
“Generic” means for all valuation matrices (a_ij) outside a set of measure zero, and the uniqueness of the market equilibrium is due to this genericity. We do not know whether the genericity condition is required for asynchronous convergence and we leave this as a minor open problem. We did not analyze the rate of convergence to equilibrium;
we leave such analysis as a second open problem.
Our main open problem, however, is the generalization to full asynchrony.
Open Problem: Does such convergence occur also in the full asynchronous model where the adversary may introduce arbitrary message delays?
Our techniques rely on considering an associated game
obtained by using “modified” utility functions for
each of the players: ũ_i(b)=∑_j b_ij ln(a_ij) + ∑_j p_j(1 - ln(p_j)). We show that a competitive market equilibrium (with the original utility
functions) corresponds to a Nash equilibrium in the associated game.[It is
worthwhile to emphasize, though, that a competitive market equilibrium is not
a Nash equilibrium in the original market since
the players are price takers rather than fully rational. See Section <ref>.]
These modified utility functions are an adaptation to an individual utility
of a
function Φ(b)=∑_ijb_ijln(a_ij) + ∑_jp_j(1 - ln(p_j))
that was proposed in <cit.> as an objective for a convex program for equilibrium computation.[Notice that Φ is not the sum of the ũ_i's, as
the second term appears only once.] This function was first linked with proportional
response dynamics in <cit.> where it was proven
that synchronous proportional
response dynamics act as mirror descent on this function.
The following three sets of bid profiles are identical: (1) the set of pure strategy Nash equilibria of the associated game; (2) the set of market equilibria of the Fisher market; and (3) the maximizing set of the potential function Φ.
The technical core of our proof is to show that not only does a synchronized
proportional response step by all the players increase the potential
function but, in
fact, every proportional response step by any subset of the players increases this potential function.
The point of view of market equilibria as Nash equilibria of the associated game offers several other advantages, e.g., suggesting several other dynamics that are related to proportional response that can be of interest. For example, we show that letting players best-respond in the game corresponds to the limit of a sequence of proportional response steps by a single player, but can be implemented as a single step of such a best-response in the game, which can be computed efficiently by the players and
may converge faster to the market equilibrium. Another possibility is using some (internal) regret minimization dynamics (for the game), which would also
converge to equilibrium in the generic case since, applying <cit.>,
it is the unique Correlated Equilibrium as well.
The structure of the rest of the paper is as follows. In Section <ref> we provide further formal details and notations that will be useful for our analysis. In Section <ref> we present the associated game and its relation to the competitive equilibria of the market. In Section <ref> we study best response dynamics in the associated game and their relation to PRD. In Section <ref> we show a key lemma regarding the potential function of the associated game under bid updates by subsets of the players, and then, in Section <ref> we show the uniqueness of the market equilibrium for generic markets and complete our proof of convergence for asynchronous PRD. In Section <ref> we provide simulation results that compare the convergence of proportional response dynamics with best response dynamics in the associated game in terms of the actual economic parameters in the market (namely, the social welfare and the convergence of the bid profiles). Finally, in Section <ref> we conclude and discuss limitations of our technique and open questions.
All proofs in this paper are deferred to the appendix.
§.§ Further Related Work
Proportional response dynamics (PRD) were originally studied in the context of bandwidth allocation in file-sharing systems, where it was proven to converge to equilibrium, albeit only for a restrictive setting <cit.>.
Since then, PRD has been studied in a variety of other contexts, including Fisher markets, linear exchange economies, and production markets. See <cit.> for further references.
In Fisher markets, synchronous PRD has been shown to converge to market equilibrium for Constant Elasticity of Substitution (CES) utilities in the substitutes regime <cit.>.
For the linear Fisher setting, synchronous PRD was explained as mirror descent <cit.> on a convex program, previously discovered while developing an algorithm to compute the market equilibrium <cit.>,
and later proven to be equivalent to the famous Eisenberg-Gale program <cit.>.
By advancing the approach of <cit.>, synchronous PRD with mild modifications was proven to converge to a market equilibrium for CES utilities in the complements regime as well <cit.>.
In linear exchange economies, synchronous PRD has been shown to converge to equilibrium in the space of utilities and allocations while prices do not converge and may cycle, whereas for a damped version of PRD, also the prices converge <cit.>.
In production markets, synchronous PRD has been shown to increase both growth and inequalities in the market <cit.>. PRD was also shown to converge with quasi-linear utilities in <cit.> and shown to stay close to market equilibrium for markets with parameters varying over time <cit.>.
All the above works consider simultaneous updates by all the players, and the question of analyzing asynchronous dynamics and whether they converge was raised by several authors as an open problem <cit.>.
Asynchronous dynamics in markets have been studied in several recent works. However, these works consider different models and dynamics from ours, and to our knowledge, our work presents the first analysis of asynchronous proportional response bidding dynamics. In <cit.>, it is shown that tatonnement dynamics under the activation asynchrony model converge to equilibrium, with results for settings both with and without carryover effects between time units. A later work, <cit.>, showed that tatonnement price dynamics converge to a market equilibrium under a model of sequential activation where in every step a single agent is activated, and where additionally, the information available to the activated seller about the current demand may be inaccurate within some interval that depends on the range of past demands. A different approach taken in <cit.> assumes that each seller has a set of rules affected by the other players' actions and governing its price updates; it is shown that the dynamics in which sellers update the prices based on such rules converge to a unique equilibrium of prices in the activation asynchrony model.
Classic results regarding the computation of competitive equilibria in markets mostly consider centralized computation and vary from combinatorial approaches using flow networks <cit.>, interior point <cit.>, and ellipsoid <cit.> methods, and many more <cit.>. Eisenberg and Gale devised a convex program which captures competitive equilibria of the Fisher model as its solution <cit.>. Notable also is the tatonnement process of price convergence in markets dated back to Walras <cit.> and studied extensively from Arrow <cit.> and in later works.
More broadly, in the game theoretic literature, our study is related to a long line of work on learning in games, starting from seminal works in the 1950s <cit.>, and continuing to be an active field of theoretical research <cit.>, also covering a wide range of classic economic settings including competition in markets <cit.>, bilateral trade <cit.>, and auctions <cit.>, as well as applications such as blockchain fee markets <cit.> and strategic queuing systems <cit.>. For a broad introduction to the field of learning in games, see <cit.>. The vast majority of this literature studies repeated games under the synchronous dynamics model. Notable examples of analyses of games with asynchronous dynamics are <cit.>, which study best response dynamics with sequential activation, and <cit.>, which explore best response dynamics in a full asynchrony setting which includes also information delays, and show that in a class of games called max-solvable, convergence of best response dynamics is guaranteed. Our analysis of best response dynamics in Section <ref> takes a different route, and does not conclude whether the associated game that we study is max-solvable or not; such an analysis seems to require new ideas.
Our work is also related to a large literature on asynchronous distributed algorithms.
We refer to a survey on this literature <cit.>.
The liveness constraint that we consider in the dynamics[Intuitively, if one allows some of the parameters in the dynamic not to update, these parameters become irrelevant, as they will remain frozen, and thus one cannot hope to see any convergence of the entire system.] is related to those, e.g., in <cit.>.
Recent works that are conceptually more closely related are <cit.>, which propose asynchronous distributed algorithms for computing Nash equilibria in network games. Notably, <cit.> propose an algorithm that converges to an equilibrium in a large class of games in asynchronous settings with information delays. Their approach, however, does not capture proportional response dynamics and does not apply to our case of linear Fisher markets.
§ MODEL AND PRELIMINARIES
The Fisher market: We consider the classic Fisher model of a networked market in which there is a set of buyers ℬ and a set of divisible goods 𝒢. We denote the number of buyers and number of goods as n = |ℬ|, m = |𝒢|, respectively, and index buyers with i and goods with j. Buyers are assigned budgets B_i∈ℝ^+ and have some value[
For ease of exposition, our proofs use w.l.o.g. a_ij > 0. This is because in all cases where a_ij = 0 might have any implication on the proof, such as through ln(a_ij), these expressions are multiplied by zero in our dynamics.]
a_ij≥ 0 for each good j.
Buyers' valuations are normalized such that ∑_j a_ij = 1.
It is convenient to write the budgets as a vector B=(B_i) and the valuations as a matrix A_n × m=(a_ij), such that A,B are the parameters defining the market.
We denote the allocation of goods to buyers as a matrix X = (x_ij) where x_ij≥ 0 is the (fractional) amount of good j that buyer i obtained.
We assume w.l.o.g. (by proper normalization) that there is a unit quantity of each good. The price of good j (which depends on the players' actions in the market, as explained below) is denoted by p_j≥ 0 and prices are listed as a vector p=(p_j). Buyers have a linear utility function u_i(x_i)=∑_j a_ijx_ij with the budget constraint ∑_j x_ij p_j ≤ B_i. We assume w.l.o.g. that the economy is normalized, i.e., ∑_i B_i = ∑_j p_j = 1.
Market equilibrium: The competitive equilibrium (or “market equilibrium”) is defined in terms of allocations and prices as follows.
(Market Equilibrium): A pair of allocations and prices (X^*,p^*) is said to be market equilibrium if the following properties hold:
* Market clearing: ∀ j, ∑_i x_ij^* = 1,
* Budget feasibility: ∀ i, ∑_j x_ij^* p_j^*≤ B_i,
* Utility maximization: ∀ i, x_i^* ∈max_x_i u_i(x_i).
In other words, under equilibrium prices all the goods are allocated, all budgets are used, and no player has an incentive to change their bids given that the prices remain fixed.
Notice that this notion of equilibrium is different from a Nash equilibrium of the game where the buyers select their bids strategically, since in the former case, players do not consider the direct effect of possible deviation in their bids on the prices. We discuss this further in Section <ref>.
For linear Fisher markets, it is well established that competitive equilibrium utilities u^* and prices p^* are unique, equilibrium allocations are known to form a convex set, and the following conditions are satisfied.
∀ i,j: a_ij/p^*_j ≤ u_i^*/B_i, and x_ij > 0 ⟹ a_ij/p^*_j = u_i^*/B_i.
This is a detailed characterization of the equilibrium allocation: every buyer gets a bundle of goods in which all goods maximize the value per unit of money. The quantity a_ij/p^*_j is informally known as “bang-per-buck” (ch. 5 & 6 in <cit.>), the marginal profit from adding a small investment in good j.
Market equilibrium bids are also known to maximize the Nash social welfare function (see <cit.>) NSW(X)=∏_i∈ℬ u_i(x_i)^B_i and to be Pareto efficient, i.e., no buyer can improve their utility without making any one else worse off (as stated in the first welfare theorem).
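For illustration, the following Python sketch (our own code, not from the paper; the function names are hypothetical) checks the equilibrium conditions stated above, namely market clearing, budget feasibility, and the bang-per-buck characterization, and evaluates the Nash social welfare.

import numpy as np

def is_market_equilibrium(X, p, A, B, tol=1e-6):
    """Check the equilibrium conditions plus the bang-per-buck characterization."""
    clearing = np.allclose(X.sum(axis=0), 1.0, atol=tol)        # sum_i x_ij = 1 for every good
    budgets = np.all(X @ p <= B + tol)                          # sum_j x_ij p_j <= B_i
    u = (A * X).sum(axis=1)                                     # u_i = sum_j a_ij x_ij
    bpb = A / p                                                 # bang-per-buck a_ij / p_j
    capped = np.all(bpb <= (u / B)[:, None] + tol)              # a_ij/p_j <= u_i/B_i for all i, j
    tight = np.all(np.abs(bpb - (u / B)[:, None])[X > tol] <= tol)  # equality wherever x_ij > 0
    return clearing and budgets and capped and tight

def nash_social_welfare(X, A, B):
    u = (A * X).sum(axis=1)
    return float(np.prod(u ** B))                               # prod_i u_i(x_i)^{B_i}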
The trading post mechanism and the market game (Shapley-Shubik):
First described in <cit.> and studied under different names <cit.>, the trading post mechanism is an allocation and pricing mechanism which attempts to capture how a price is modified by demand. Buyers place bids on goods, where buyer i places bid b_ij on good j. Then, the mechanism computes the good's price as the total amount spent on that good and allocates the good proportionally to the bids, i.e., for bids b:
p_j = ∑_i=1^n b_ij, x_ij = b_ij/p_j if b_ij > 0, and x_ij = 0 otherwise.
Note that the trading post mechanism guarantees market clearing for every bid profile b in which every good receives at least one positive bid. The feasible bid set of a buyer under the budget constraint is S_i={b_i∈ℝ^m | ∀ j: b_ij≥ 0 and ∑_j b_ij=B_i}, i.e., a scaled simplex. Denote S=∏_i∈ℬ S_i and S_-i=∏_k∈ℬ∖{i} S_k.
Considering the buyers as strategic, one can define the market game as G={ℬ,(S_i)_i∈ℬ,(u_i)_i∈ℬ} where the utility functions can be written explicitly as u_i(b)=u_i(x_i(b))=∑_j=1^ma_ijb_ij/p_j.
We sometimes use the notation u_i(b_i, b_-i), where b_i is the bid vector of player i and b_-i denotes the bids of the other players.
Potential function and Nash equilibrium: For completeness, we add the following definitions.
Potential function: A function Φ is an exact potential function<cit.> if ∀ i∈ℬ,∀ b_-i∈ S_-i and ∀ b_i,b_i'∈ S_i we have that Φ(b_i',b_-i) - Φ(b_i,b_-i) = u_i(b_i',b_-i) - u_i(b_i,b_-i), with u_i being i's utility function in the game.
Best response: b_i^* is a best response to b_-i if ∀ b_i∈ S_i u_i(b^*_i,b_-i)≥ u_i(b_i,b_-i). That is, no other response of i can yield a higher utility.
Nash equilibrium: b^* is Nash equilibrium if ∀ i b^*_i is a best response to b^*_-i (no player is incentivized to change their strategy).
Proportional response dynamics:
As explained in the introduction, the proportional response dynamic is specified by an initial bid profile b^0, with b_ij^0 > 0 whenever a_ij > 0, and the following update rule for every player that is activated by the adversary: b^t+1_ij = a_ijx^t_ij/u_i(x^t_i) B_i. See Section <ref> for further details on activation of subsets of the players.
§ THE ASSOCIATED GAME
The Fisher market can be naturally thought of as a game in which every one of the n players aims to optimize their individual utility u_i(b_i, b_-i), as defined in Section <ref>. However, it is known that the set of Nash equilibria of this game does not coincide with the set of market equilibria <cit.>, and so a solution to this game (if indeed the players reach a Nash equilibrium) is economically inefficient <cit.>.
A natural question that arises is whether there is some other objective for an individual player that when maximized by all the players, yields the market equilibrium. We answer positively to this question and show that there is a family of utility functions such that in the “associated games” with these utilities for the players, the set of Nash equilibria is identical to the set of market equilibria of the original game
(for further details, see also the appendix).
However, the fact that a Nash equilibrium of an associated game is a market equilibrium still does not guarantee that the players' dynamics will indeed reach this equilibrium.
A key element in our proof technique is that we identify, among this family of associated games, a single game, defined by the “associated utility” ũ_i(b)=∑_j b_ij ln(a_ij) + ∑_j p_j(1 - ln(p_j)), which admits an exact potential. We then use a relation which we show between this game and the proportional response update rule to prove the convergence of our dynamics (Theorem <ref>).
(The Associated Game):
Let G be a market game. Define the associated utility of a player i as ũ_i(b)=∑_j b_ij ln(a_ij) + ∑_j p_j(1 - ln(p_j)).
The associated game G̃ is the game with the associated utilities for the players and the same parameters as in G.
G̃ is constructed such that the function Φ is its potential. Note that, although they have a similar structure, ũ_i and Φ differ via summation over i only in the first term (Φ is not the sum of the players' utilities).
For every Fisher market, the associated game G̃ admits an exact potential function that is given by[Since we discuss the players' associated utilities, we consider maximization of this potential. Of course, if the reader feels more comfortable with minimizing the potential, one can think of the negative function.]
Φ(b)=∑_ijb_ijln(a_ij) + ∑_jp_j(1 - ln(p_j)).
Once the potential function is defined, the proof is straightforward: the derivatives of the associated utilities ũ_i and the potential Φ with respect to b_i are equal for all i. Theorem <ref>, formally restated below, connects the associated game, the market equilibria, and the potential.
(Restatement of Theorem <ref>).
The following three sets of bid profiles are equal. (1) The set of pure-strategy Nash equilibria of the associated game: NE(G̃) = {b^* ∈ S | ∀ i ∈ ℬ, ∀ b_i ∈ S_i: ũ_i(b^*_i, b^*_-i) ≥ ũ_i(b_i, b^*_-i)}; (2) the set of market equilibrium bid profiles of the Fisher market: {b^* | (x(b^*), p(b^*)) satisfy Def. <ref>}; and (3) the maximizing set of the potential from Theorem <ref>: argmax_b∈ S Φ(b).
The proof uses a different associated game G' that has simpler structure than G̃, but does not have an exact potential, and shows that: (i) Nash equilibria of G' identify with the market equilibria; (ii) all the best responses of players i to bid profiles b_-i in G' identify with those of G̃; and (iii) every equilibrium of G̃ maximizes the potential Φ (immediate by the definition of potential).
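As a sanity check on the exact-potential property underlying these statements, the following short Python sketch (illustrative code of ours; the function names are not from the paper) evaluates Φ and the associated utility ũ_i and verifies by finite differences that their partial derivatives with respect to b_ij coincide (both equal ln(a_ij/p_j)); the budget constraint is ignored here since the identity holds for any positive bid profile.

import numpy as np

def potential(b, A):
    p = b.sum(axis=0)
    return float((b * np.log(A)).sum() + (p * (1.0 - np.log(p))).sum())

def associated_utility(i, b, A):
    p = b.sum(axis=0)
    return float((b[i] * np.log(A[i])).sum() + (p * (1.0 - np.log(p))).sum())

rng = np.random.default_rng(1)
n, m, eps = 3, 4, 1e-6
A = rng.random((n, m)); A /= A.sum(axis=1, keepdims=True)
b = rng.random((n, m)) + 0.1                  # any positive bid profile
i, j = 1, 2
b_eps = b.copy(); b_eps[i, j] += eps
d_phi = (potential(b_eps, A) - potential(b, A)) / eps
d_ui = (associated_utility(i, b_eps, A) - associated_utility(i, b, A)) / eps
# both finite differences are approximately equal to ln(a_ij / p_j)
print(d_phi, d_ui, np.log(A[i, j] / b.sum(axis=0)[j]))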
§ BEST RESPONSE DYNAMICS
In this section we explore another property of the associated game: we show that if instead of using the proportional response update rule, each player myopically plays their best response to the last bid profile with respect to their associated utility, then the entire asynchronous sequence of bids converges to a market equilibrium, as stated in the following theorem. We then show that there is a close relation between best response and proportional response dynamics.
For generic linear Fisher markets, in a sequential asynchrony model where in every step a single player is activated, best response dynamics converge to the market equilibrium. For non-generic markets, the prices are guaranteed to converge to the equilibrium prices.
The idea of the proof is to show that the best-response functions are single valued
(for all i and b_-i, ũ_i(·, b_-i) has a unique maximizer) and continuous (using the structure of best-response bids). Together with the existence of the potential function Φ, it holds that the analysis of <cit.> applies to these dynamics and thus convergence is guaranteed.
One of the appealing points about proportional response dynamics is their simplicity — in each update, a player observes the obtained utilities and can easily compute the next set of bids. We show that also the best response of a player can be computed efficiently by reducing the calculation to a search over a small part of the subsets of all goods which can be solved by a simple iterative process.
For every player i and any fixed bid profile b_-i for the other players, the best response of i is unique and can be computed in 𝒪(mlog(m)) time.
Roughly, best responses are characterized uniquely by a one-dimensional variable c^*. For every subset of goods s, we define a variable c_s and prove that c^* is the maximum amongst all c_s. So finding c^* is equivalent to searching for a specific subset with maximal c_s. The optimal subset of goods admits a certain property that allows narrowing down the search domain from all subsets to only m subsets.
The relation between the best response and proportional response updates can intuitively be thought of as follows. While in PRD players split their budget between all the goods according to the utility that each good yields, and so gradually shift more budget to the more profitable subset of goods,
best response bids of player i with respect to ũ_i can be understood as spending the entire budget on a subset of goods which, after bidding so (considering the effect of bids on prices), will jointly have the maximum bang-per-buck (in our notation a_ij/p_j) amongst all subsets of goods,
given the bids b_-i^t of the other players.
Those bids can be regarded as “water-filling” bids as they level the bang-per-buck amongst all goods purchased by player i (for more information see the appendix).
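A possible O(m log(m)) implementation of this best response is sketched below in Python (our own illustrative code; it assumes the prefix-search characterization detailed in the appendix, namely that sorting the goods by a_ij/θ_ij and taking the maximal prefix value of c_s recovers c^*).

import numpy as np

def best_response(i, b, A, B):
    """Water-filling best response of player i to the other players' fixed bids."""
    theta = b.sum(axis=0) - b[i]                 # pre-prices theta_ij = p_j - b_ij
    a = A[i]
    safe_theta = np.where(theta > 0, theta, np.inf)
    ratio = np.where(a > 0, a / safe_theta, -np.inf)          # a_ij / theta_ij (-inf if a_ij = 0)
    ratio = np.where((a > 0) & (theta == 0), np.inf, ratio)   # uncontested goods come first
    order = np.argsort(-ratio)                   # goods sorted by ratio, descending
    a_sorted, theta_sorted = a[order], theta[order]
    c_prefix = np.cumsum(a_sorted) / (B[i] + np.cumsum(theta_sorted))   # c_s over prefix sets
    c_star = c_prefix.max()                      # c^* = max_s c_s
    return np.maximum(a / c_star - theta, 0.0)   # b*_ij = (a_ij / c^* - theta_ij)^+

Goods with a_ij = 0 automatically receive a zero bid, and the returned vector sums to B_i up to floating-point error.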
It turns out that there is a clear formal connection between the best response of a player in the associated game and the proportional response update rule in the true game: the best response bids are the limit point of an infinite sequence of proportional response updates by the same player, as expressed in the following proposition.
Fix any player i and fix any bid profile b_-i for the other players. Let b_i^* = argmax_b_i ∈ S_iũ_i(b_i, b_-i) and let (b_i^t)_t=1^∞ be a sequence of consecutive proportional response steps applied by player i, where b_-i is held fixed at all times t. Then lim_t →∞ b_i^t = b_i^*.
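This proposition can be probed numerically with a few lines of Python (again, an illustrative sketch of ours): iterating the proportional response update for a single player, while the others' bids are frozen, should equalize the bang-per-buck a_ij/p_j across the goods on which the player keeps a non-negligible bid, which is the defining property of the best response.

import numpy as np

rng = np.random.default_rng(2)
n, m, i = 4, 5, 0
A = rng.random((n, m)); A /= A.sum(axis=1, keepdims=True)
B = np.full(n, 1.0 / n)
b = B[:, None] * A                            # positive initial bids for all players

for _ in range(2000):                         # repeated PRD steps by player i only
    p = b.sum(axis=0)
    x_i = b[i] / p
    u_i = (A[i] * x_i).sum()
    b[i] = B[i] * A[i] * x_i / u_i

p = b.sum(axis=0)
support = b[i] > 1e-6 * B[i]                  # goods still receiving non-negligible bids
print(A[i][support] / p[support])             # approximately equal values (the constant c^*)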
§ SIMULTANEOUS PLAY BY SUBSETS OF AGENTS
In this section, we shift our focus back to proportional response dynamics under the activation asynchrony model in which the adversary can choose in every step any subset of players to update their bids. Towards proving that proportional response dynamics converges to a market equilibrium in this setting, we utilize the associated game and potential function presented in Section <ref> to show that any activated subset of players performing a PRD step will increase the potential.
Formally, let v⊆ℬ be a subset of players activated by the adversary and let f_v(b) be a function that applies proportional response to members of v and acts as the identity function for all the other players. The update for time t+1 when the adversary activates a subset of the players v^t ⊆ℬ is therefore:
b_ij^t+1=(f_v^t(b^t))_ij =
a_ijx_ij^t/u_i^tB_i if i ∈ v^t
b_ij^t otherwise.
For all v ⊆ℬ and for all b∈ S it holds that Φ(f_v(b)) > Φ(b), unless f_v(b)=b.
The proof shows that for any subset v^t, a PRD step b^t+1 is the solution to some maximization problem of a function g^t(b) different from Φ, such that Φ(b^t+1)>g^t(b^t+1)≥ g^t(b^t)=Φ(b^t).
A notable special case is the sequential one, where all subsets are singletons, i.e., for all t, v^t={i^t} for some i^t ∈ℬ.
In that case, the above result yields that the best-response bids can be expressed as the solution to an optimization problem over the bids b on a function that is monotone in the KL divergence between the prices induced by b and the current prices,
whereas PRD is the solution to an optimization problem on a similar function, but one that depends on the KL divergence between the bids b and the current bids. Thus, sequential PRD can be regarded as a relaxation of best response; on the one hand, it is somewhat simpler to compute a step, and on the other hand, it takes more steps to reach the best response (see Proposition <ref> and the simulations in Section <ref>).
§ GENERIC MARKETS
Here we show that in the generic case, linear Fisher markets have a unique equilibrium bid profile.
While it is well known that in linear Fisher markets equilibrium prices and utilities are unique, and the equilibrium bids and allocations form convex sets (see
section <ref>), we show that multiplicity of equilibrium bid profiles can result only from a special degeneracy in the game parameters that has measure zero in the parameter space. In other words, if the game parameters are not carefully tailored to satisfy a special equation (formally described below), or, equivalently, if the parameters are slightly perturbed, the market will have a unique equilibrium. Similar property was known for the linear exchange market<cit.> and we bring a simple and concise proof for the Fisher model.
A Fisher market is called generic if the non-zero valuations of the buyers (a_ij) do not admit any multiplicative equality. That is, for any distinct and non empty K, K'⊆ℬ×𝒢 it holds that ∏_(i,j)∈ Ka_ij≠∏_(i',j')∈ K'a_i'j'.
Any generic linear Fisher market has a unique market equilibrium bid profile b^*.
Before discussing the proof of Theorem <ref>, we present the following corollary.
In generic linear Fisher markets, no-swap regret dynamics in the associated game converge to the market equilibrium.
This follows from <cit.>, which states that in games with convex strategy sets and a continuously differentiable potential function Φ, as in our case, the set of correlated equilibria consists of mixtures of elements in argmax_b Φ. Theorem <ref> yields that argmax_b Φ = {b^*} is the unique market equilibrium in the generic case, and so, with a unique correlated equilibrium, no-swap regret dynamics are guaranteed to converge.
To prove Theorem <ref>, we use the representation of the bids in the market as a bipartite graph of players and goods Γ(b)={ V,E} with V=ℬ∪𝒢 and E={(i,j) | b_ij > 0}. The proof shows that if a market has more than one equilibrium bid profile, then there has to be an equilibrium b with Γ(b) containing a cycle, whereas the following lemma forbids this for generic markets.
If b^* are equilibrium bids in a generic linear Fisher market, then Γ(b^*) has no cycles.
A key observation for proving this lemma is that at a market equilibrium, a_ij/p^*_j is constant amongst goods purchased, and so it is possible to trace a cycle and have all the p^*_j cancel out and obtain an equation contradicting the genericity condition.
An observation that arises from Lemma <ref> is that when the number of buyers in the market is of the same order of magnitude as the number of goods or larger, then in equilibrium most buyers will only buy a small number of goods. Since there are no cycles in Γ(b^*) and there are n+m vertices, there are at most n+m-1 edges. Thus, with n buyers, the average degree of a buyer is 1 + (m-1)/n.
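The forest structure asserted by the lemma can be verified directly from a bid profile. The sketch below (our own code; a simple union-find structure is used so that no graph library is needed) builds Γ(b) over buyer and good vertices and reports whether the positive-bid edges contain a cycle.

def has_cycle(b, tol=1e-9):
    """True iff the bipartite graph Gamma(b), with an edge (i, j) whenever b_ij > tol, has a cycle."""
    n, m = b.shape
    parent = list(range(n + m))        # union-find over buyers 0..n-1 and goods n..n+m-1

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for i in range(n):
        for j in range(m):
            if b[i, j] > tol:
                ri, rj = find(i), find(n + j)
                if ri == rj:           # the edge closes a cycle
                    return True
                parent[ri] = rj
    return False

At an equilibrium of a generic market the check should return False, consistent with the bound of at most n+m-1 positive bids noted above.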
Proof idea of Theorem <ref>:
With Theorem <ref> and the construction from the previous sections under our belts (namely, the associated game, Theorems <ref>, <ref> about its potential and equilibria, and Lemma <ref> about updates by several players simultaneously), we are now ready to complete the proof of Theorem <ref> on the convergence of asynchronous proportional response dynamics.
The idea is that we now know that PRD steps by subsets of players increase the potential, and so the bids should somehow converge to reach the maximum potential, which is obtained at the unique market equilibrium.
Technically, since the sequence of bids b^t is bounded, it must have condensation points. The proof then proceeds by way of contradiction. If the sequence does not converge to the equilibrium bid profile b^*, then there is some subsequence that converges to a different bid profile b^**, which by Theorem <ref>, must have lower potential than b^* (since it is not a market equilibrium). The main idea is to show that if players are not “starved” in the dynamic, i.e., if the maximum time interval between consecutive updates of a player is bounded by some constant T, then the dynamic must reach a point where the bids are sufficiently close to b^** such that there must be some future update by some subset of the players under which the potential increases to more than Φ(b^**), thus contradicting the existence of condensation points other than the market equilibrium.
To show this, the proof requires several additional arguments on the continuity of compositions of PRD update functions that arise under adversarial scheduling, and the impact of such compositions on the potential function. The full proof is found in the appendix.
§ SIMULATIONS
Next, we look at simulations of the dynamics that we study and compare the convergence of proportional response dynamics to best response dynamics in the associated game, as discussed in Section <ref>.
The metrics we focus on here for every dynamic are the Nash social welfare, which, as mentioned in Section <ref>, is maximized at the market equilibrium, and the Euclidean distance between the bids at time t and the equilibrium bids. Additionally, we look at the progression over time of the value of the potential Φ(b^t) (for the definition, see Section <ref>).
Figure <ref> presents simulations of an ensemble of markets, each with ten buyers and ten goods, where the parameters in each market (defined in the matrices A,B) are sampled uniformly (and so the genericity condition <ref> holds with probability one) and normalized as explained in Section <ref>. For each market, the parameters remain fixed throughout the dynamic. The initial condition in all simulation runs is the uniform distribution of bids over items, and the schedule is sequential, such that a single player updates its bids in every time step.
Figure <ref> (main figure) shows our metrics, averaged over a sample of 300 such simulations. The insets show the plots of a sample of 50 individual simulations (without averaging) over a longer time period.
Figure <ref> shows similar plots for best response dynamics.
As could be expected in light of our analysis in Section <ref>, best response dynamics converge faster than PRD, as seen in the different time scales on the horizontal axes.
A close look at the individual bid dynamics depicted in the insets shows a qualitative difference between the two types of dynamics: in PRD the bids in each dynamic smoothly approach the equilibrium profile, whereas best response bid dynamics are more irregular.
Additionally, the collection of curves for the individual simulations shows that under uniformly distributed market parameters, in both dynamics there is variance in convergence times, with a skewed distribution such that in most markets the dynamics converge quickly, but there is a distribution tail of slower-converging dynamics.
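An experiment along these lines can be reproduced with the following Python sketch (illustrative code of ours, not the authors' simulation code; a round-robin order is assumed for the sequential schedule, and a long run of the same dynamic is used as a proxy for the equilibrium bids b^*).

import numpy as np

def sequential_prd(A, B, steps):
    """Sequential PRD from uniform initial bids; player (t mod n) updates at step t."""
    n, m = A.shape
    b = np.ones((n, m)) * (B[:, None] / m)        # uniform initial bids b_ij = B_i / m
    trajectory = [b.copy()]
    for t in range(steps):
        i = t % n
        p = b.sum(axis=0)
        x_i = b[i] / p
        u_i = (A[i] * x_i).sum()
        b[i] = B[i] * A[i] * x_i / u_i
        trajectory.append(b.copy())
    return trajectory

def nash_social_welfare(b, A, B):
    p = b.sum(axis=0)
    u = (A * (b / p)).sum(axis=1)
    return float(np.prod(u ** B))

rng = np.random.default_rng(4)
n, m = 10, 10
A = rng.random((n, m)); A /= A.sum(axis=1, keepdims=True)   # generic with probability one
B = rng.random(n); B /= B.sum()

traj = sequential_prd(A, B, steps=3000)
b_star = traj[-1]                                  # long-run proxy for the equilibrium bids
distances = [np.linalg.norm(b - b_star) for b in traj]
welfare = [nash_social_welfare(b, A, B) for b in traj]   # approaches its maximum over time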
§ CONCLUSION
We have shown that proportional response bid dynamics converge to a market equilibrium in a setting where the schedule of bid updates can be chosen adversarially, allowing for sequential or simultaneous updates for any subset of players. We proposed a novel approach to address this problem by identifying a family of associated games related to proportional response dynamics, showing their relation to the competitive equilibria of the market, and leveraging these relations to prove convergence of the dynamics.
En route, we showed that other types of dynamics, such as myopic best response and no-swap regret, also converge in the associated game. Additionally, we note that our result on the uniqueness of market equilibria in the generic case (e.g., if the market parameters have some element of randomness) may also be of interest for future research on the Fisher market setting.
One main open question that we did not analyze is whether proportional response dynamics converge under the full asynchrony model, which includes information delays. The analysis of this model raises several complications, as it creates further coupling between past and current bid profiles. We conjecture that if information delays are bounded, then convergence also occurs in this model. However, it is not clear whether our approach could be extended to argue that proportional response updates by subsets of players with respect to delayed information increase the potential in our associated game, or whether proving convergence in this setting will require new methods.
One limitation of our analysis is that we provide a guarantee that under any bid update by any subset of players chosen by an adversary, the potential function of the associated game increases, but our technique does not specify by how much the potential increases in every step, and therefore, we cannot provide speed of convergence results. Such analysis seems to require new techniques, and we see this as an interesting problem for further work.
§ APPENDICES
In the following sections we provide the proofs for the results presented in the main text as well as further technical details and explanations.
Notation: In the following, we use ∇_b_i f to denote the gradient of a function f with respect to the bids b_i of player i only, ∂_b_ij f to denote the partial derivative by the bid of player i on good j, and ∂^2_b_ij f to denote the second derivative. We denote by θ_i = ∑_k ≠ i b_k the `pre-prices', which are the prices excluding the bids b_i (and so for every player i and every bid profile, p = θ_i + b_i). In some of the proofs, we use for a function f the abbreviated notation (f)^+ = max(f,0).
§ THE ASSOCIATED GAME
(Theorem <ref>):
A sufficient condition for Φ being an exact potential<cit.> is
∀ i: ∇_b_iΦ(b_i, b_-i) = ∇_b_i ũ_i(b_i, b_-i).
And indeed, in our case we have:
∂_b_ijΦ(b_i, b_-i) = ln(a_ij) - ln(p_j) = ln(a_ij/p_j),
∂_b_ij ũ_i(b_i, b_-i) = ln(a_ij) - ln(p_j) = ln(a_ij/p_j).
In order to prove Theorem <ref>,
we first define a different associated game denoted G' that differs from G only in having a different associated utility function u_i'=∑_j a_ijln(p_j).
In fact, G̃ and G' a part of a family
of associated games of the market game G,
which have the property that they all share the same best responses to bid profiles (and therefore, also the same Nash equilibria) and for all these games, the function Φ is a best-response potential (see <cit.> for the definition of best-response potential games). Among this family of games, we are particularly interested in the games G̃ and G', since the first admits Φ as an exact potential, and the latter has a particularly simple derivative for its utility , which has a clear economic interpretation:
∂_b_ij(b) = a_ij/p_j is simply the bang-per-buck of player i from good j (see the model section in the main text).
Next, we present several technical lemmas that will assist us in proving Theorem 2 and which will also be useful in our proofs later on.
For any player i and fixed b_-i, both ũ_i(b_i, b_-i) and u'_i(b_i, b_-i) are strictly concave in b_i.
We will show the proof for ũ_i.
We compute the Hessian and show that it is negative definite.
The diagonal elements are
∂^2_b_ij ũ_i(b_i, b_-i) = -1/p_j,
and all of the off-diagonal elements are
∂_b_ik ∂_b_ij ũ_i(b_i, b_-i) = 0.
Therefore, the Hessian is a diagonal matrix with all of its elements being negative, and thus ũ_i is strictly concave. The same argument works for u'_i as well.
Fix a player i and any bid profile b_-i∈ S_-i of the other players, then the following two facts hold.
* b_i^'* = argmax_b_i ∈ S_i u'_i(b_i, b_-i) if and only if it holds that
∀ b_i' ∈ S_i: ∑_j (a_ij/p_j^'*) b_ij^'* ≥ ∑_j (a_ij/p_j^'*) b_ij', where p_j^'* = θ_ij + b^'*_ij.
* b̃_i^* = argmax_b_i ∈ S_i ũ_i(b_i, b_-i) if and only if it holds that ∀ b̃_i ∈ S_i: ∑_j ln(a_ij/p̃_j^*) b̃_ij^* ≥ ∑_j ln(a_ij/p̃_j^*) b̃_ij, where p̃_j^* = θ_ij + b̃^*_ij.
We will show the proof for (1), and the proof for (2) is similar.
Let b_i^* be a best response to b_-i and let b'_i be some other strategy. Consider the restriction of u'_i to the line segment [b_i', b_i^*] as follows; define f(ξ) = u'_i(b_i(ξ)) for b_i(ξ) = b_i' + ξ (b_i^* - b_i') where ξ ∈ [0,1]. As u'_i is strictly concave and b_i^* is the unique maximizer of u'_i, it holds that f is strictly concave and monotone increasing in ξ. Therefore the derivative of f must satisfy at the maximum point ξ = 1 that ∂_ξ f(1) ≥ 0. This is explicitly given by
∂_ξ f(ξ) = ∇_b_i u'_i(b_i(ξ)) · (b_i^* - b_i').
Therefore, when deriving and substituting ξ=1 we get b_i(1) = b_i^*, and
0 ≤ ∂_ξ f(1) = ∇_b_i u'_i(b^*_i) · (b_i^* - b_i')
= ∑_j (a_ij/p^*_j) b^*_ij - ∑_j (a_ij/p^*_j) b'_ij,
which implies ∑_j (a_ij/p^*_j) b'_ij ≤ ∑_j (a_ij/p^*_j) b^*_ij, as required.
To complete the other direction of the proof, consider b^*_i for which the expression stated in the lemma is true for all b'_i. Then, fix any such b'_i and again consider the restriction of u'_i to [b'_i, b^*_i]. By direct calculation, as before but in the inverse direction, it holds that ∂_ξ f(1) ≥ 0, and as u'_i and f(ξ) are strictly concave, it thus must be that ∂_ξ f(ξ) is monotone decreasing in ξ. Thus, for all ξ we have ∂_ξ f(ξ) ≥ ∂_ξ f(1) ≥ 0. This must mean that ξ=1 is the maximizer of f(ξ), since ∂_ξ f(ξ) ≥ 0 for all ξ implies that f(ξ) is monotone increasing, and therefore u'_i(b^*_i) ≥ u'_i(b'_i).
Let (c_j)_j∈[m] ∈ ℝ^m. If there exists x^* ∈Δ^m (the m-dimensional simplex) such that ∀ x ∈Δ^m it holds that ∑_j c_j x_j ≤∑_j c_j x^*_j := α, then:
* for all j we have that c_j≤α, and
* if x^*_j > 0 then c_j=α.
* Assume for the sake of contradiction that there exists k with c_k > α; then x = e_k (the “one-hot” vector with 1 at the k'th coordinate and 0 in all other coordinates) yields ∑_j c_j x_j = c_k > α = ∑_j c_j x^*_j, a contradiction.
* Assume for the sake of contradiction that there exists k with x^*_k > 0 and c_k < α; then c_k x^*_k < α x^*_k. From (1) we have that c_j ≤ α ⟹ c_j x^*_j ≤ α x^*_j; summing the strict inequality with the weak ones over all j yields ∑_j c_j x^*_j < ∑_j α x^*_j = α, a contradiction.
Fix a player i and any bid profile b_-i∈ S_-i of the other players, then the following properties of best-response bids hold in the modified games G̃ and G'.
* The support set of b^*_i, defined as s^*_i={j | b^*_ij > 0}, is equal to the set {j | a_ij > c^* θ_ij}, and for every j ∈ s^*_i we have that a_ij/p^*_j=c^*.
* Best-response bids with respect to the utilities u'_i and ũ_i are equal and unique. That is, in the definition from Lemma <ref> we have b'^*_i = b̃_i^* (denoted simply as b^*_i).
* Best-response bids are given by b^*_ij=(a_ij/c^* - θ_ij)^+ for a unique constant c^* ∈ (0,m/B_i).
By Lemma <ref>, ũ_i and u'_i are strictly concave in b_i for any fixed b_-i, and so each admits a unique maximizer. To see that they are equal, we use Lemma <ref> and introduce constants c, d to obtain
∀ b_i ∑_j a_ij/p'^*_jb_ij/B_i≤∑_j a_ij/p'^*_jb'^*_ij/B_i = c,
∀ b_i ∑_j ln(a_ij/p^*_j)b_ij/B_i≤∑_j ln(a_ij/p^*_j)b^*_ij/B_i = d,
where p'^*_j = θ_ij + b'^*_ij and p^*_j = θ_ij + b^*_ij.
For the ease of exposition, we assume that θ_ij >0 for all j. All the results stated below remain valid also when θ_ij = 0 for some j.
Proof of (1): Applying Lemma <ref> to each of those inequalities (once with x^* = (1/B_i) b'^*_i and once with x^* = (1/B_i) b^*_i), and denoting the support sets of b'^*_i, b^*_i as s'^*, s^*, respectively, we obtain the following. (1) ∀ j ∈ s'^* we have a_ij/p'^*_j = c and ∀ j ∉ s'^* we have a_ij/p'^*_j ≤ c. Therefore, ∀ j ∈ s'^* the bids are positive and c = a_ij/p'^*_j = a_ij/(θ_ij + b'^*_ij) < a_ij/θ_ij, while ∀ j ∉ s'^* the bids are zero, so a_ij/θ_ij = a_ij/(θ_ij + 0) = a_ij/p'^*_j ≤ c; hence s'^* = {j | c < a_ij/θ_ij}. (2) By the same argument, but with d = ln(a_ij/p^*_j), we also have that s^* = {j | e^d < a_ij/θ_ij}.
Proof of (2): We will show that c = e^d and thus obtain that the vectors b'^*_i, b_i^* are identical. Assume by way of contradiction that c < e^d; then j ∈ s^* ⟹ c < e^d < a_ij/θ_ij ⟹ j ∈ s'^*, i.e., s^* ⊆ s'^*. For all j ∈ s^* it holds that a_ij/p'^*_j = c < e^d = a_ij/p^*_j ⟹ p^*_j < p'^*_j ⟹ b^*_ij < b'^*_ij. Now we sum those inequalities over s^* and extend to the support s'^*. By using the subset relation we proved, we obtain a contradiction:
B_i = ∑_j∈s^*b^*_ij < ∑_j∈s^* b'^*_ij≤∑_j∈ s'^* b'^*_ij =B_i.
The case where e^d < c follows similar arguments with inverse roles of s'^*, s^*. Thus, c=e^d and s'^* =s^*, which implies a_ij/p'^*_j = a_ij/p̃^*_j, meaning that the prices are equal as well for all goods purchased. Therefore b'^*_i = b_i^*.
Proof of (3): Finally, observe that for j ∈ s'^* we have c^* = c = a_ij/p'^*_j = a_ij/(θ_ij + b'^*_ij) ⟹ b^*_ij = a_ij/c^* - θ_ij, while otherwise b^*_ij = 0, and a_ij/c^* - θ_ij ≤ 0. For the bounds on c^*, notice that by definition it is equal to u^*_i/B_i and that u^*_i ∈ (0, m), as i can receive as little as almost nothing (by the definition of the allocation mechanism, if i places a bid on a good it will receive a fraction of this good, no matter how tiny) and receive at most (almost) all the goods.
The above Lemma <ref> shows a structural property of best-response bids. If we consider the goods sorted by the parameter a_ij/θ_ij, then the best-response bids are characterized by a threshold value c^* which partitions the goods into two parts: goods that can offer the player a bang-per-buck of value c^* and those that cannot. The former set of goods is exactly the support s^*. When a player increases its bid on some good j, the bang-per-buck offered by that good decreases, so clearly, any good with a_ij/θ_ij ≤ c^* cannot be part of an optimal bundle. Consider the situation where the player has started spending its money on goods with a_ij/θ_ij > c^*, and for some goods j and k we have that a_ij/p_j=a_ik/θ_ik; if the player then increases its bid on j without increasing the bid on k, its bids are not optimal, since the player could have received a higher bang-per-buck by bidding on k. The optimal option is a `water-filling' one: to split the remaining budget and use it to place bids on both j and k, yielding equal bang-per-buck for both (as Lemma <ref> shows).
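As a toy illustration with hypothetical numbers: take m=3 goods with a_i=(0.5, 0.4, 0.1), outside bids θ_i=(0.2, 0.2, 0.4) and budget B_i=0.2. The value c^*=1.5 is consistent with Lemma <ref>: it induces the support s^*={j | a_ij > 1.5·θ_ij}={1,2}, and the corresponding bids b^*_i=(0.5/1.5 - 0.2, 0.4/1.5 - 0.2, 0)=(2/15, 1/15, 0) exhaust the budget exactly. Both supported goods then offer the same bang-per-buck a_ij/p_j = 1.5, while good 3 offers at most a_i3/θ_i3 = 0.25 < c^*.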
With the above lemmas, we are now ready to prove Theorem <ref>.
(Theorem <ref>):
We start by making the following claim.
Claim: The set of Market equilibria is equal to the set of Nash equilibria in the game G'.
Proof:
By definition, b^* is a Nash equilibrium of G' if and only if for every i the bid profile b_i^* maximizes i's utility in G' over b_i ∈ S_i against b^*_-i, where by Lemma <ref>, for any fixed b_-i, this maximizer b_i^* is unique. By Lemma <ref>, we have that the allocation x^*_ij=b^*_ij/p_j^* yields player i a utility at least as large as any other x'_ij=b'_ij/p^*_j, if and only if (X^*, p^*) is a market equilibrium (market clearing and budget feasibility hold trivially). That is, the set of Nash equilibria of the game G' corresponds to the set of market equilibria (i.e., every bid profile b^* which is a market equilibrium must be a Nash equilibrium of G', and vice versa).
Then, by Lemma <ref>, best responses by every player i to any bid profile b_-i of the other players with respect to the utility of G' and with respect to the utility of G̃ are the same. Therefore, every Nash equilibrium in one game must be a Nash equilibrium in the other. Thus, we have that Nash equilibria of the game G̃ are market equilibria, and vice versa – every market equilibrium must be a Nash equilibrium of G̃. Finally, at a Nash equilibrium, no player can unilaterally improve their utility, so no improvement is possible to the potential, and in the converse, if the potential is not maximized, then there exists some player with an action that improves the potential, and so by definition their utility function as well, thus contradicting the definition of a Nash equilibrium.
Therefore, we have that every bid profile that maximizes the potential is a Nash equilibrium of G̃ and a market equilibrium (and vice versa).
§ BEST RESPONSE DYNAMICS
We start with the following characterizations of best-response bids in the games G̃ and G'.
Fix θ_i and let b^*_i be i's best response to θ_i with support s^*. Define c_s = (∑_j∈ s a_ij)/(B_i+∑_j∈ sθ_ij) for every subset s⊆ [m]. Let c^* be as described in Lemma <ref>. Then, it holds that c^* = c_s^*≥ c_s for all s⊆[m].
Furthermore, if s^* ⊄s then c_s^* > c_s.
Let b^*_i be a best response to θ_i with support s^*. By Lemma <ref> we have that b^*_ij=(a_ij/c^*-θ_ij)^+; summing over s^* we obtain that B_i=∑_j∈ s^*(a_ij/c^*-θ_ij). Rearranging yields c^*=(∑_j∈ s^* a_ij)/(B_i + ∑_j∈ s^*θ_ij), which is c_s^* by definition. Now we prove that c^*=max_s ⊆ [m] c_s.
A key observation to the proof is that, by Lemma <ref>, if j∈ s^* then c^*θ_ij < a_ij and otherwise c^*θ_ij≥ a_ij.
For a set s' distinct from s^* we consider two cases:
Case (1): s^* ⊄s'
Consider a bid profile b'_i that for every good j in s' ∩ s^* (if the intersection is not empty) places a bid higher by ϵ > 0 than b^*_ij and distributes the rest of i's budget uniformly between all other goods in s:
b'_ij =
b^*_ij + ϵ if j ∈ s' ∩ s^*,
B_i - ∑_j ∈ s' ∩ s^* (b^*_ij + ϵ)/|s' ∖ s^*| otherwise.
For ϵ small enough, we have ∑_j∈ s' b'_ij = B_i and the support of b'_i is indeed s'.
For every j ∈ s^* ∩ s' we have b'_ij > b^*_ij and by adding θ_ij to both sides we obtain p'_j > p^*_j; multiplying both sides by c^* yields (i) c^* p'_j > c^* p^*_j = a_ij, where the equality is by Lemma <ref>, while for every j ∈ s' ∖ s^* it holds that c^* θ_ij≥ a_ij by which adding c^* b'_ij to the left hand side only increases it and implies (ii) c^* p'_j > a_ij. Summing over inequalities (i) and (ii) for all j appropriately, we obtain c^* ∑_j∈ s' p'_j > ∑_j∈ s' a_ij, observe that ∑_j∈ s' p'_j = ∑_j∈ s' (b'_ij+θ_ij)=B_i + ∑_j∈ s'θ_ij, and thus by division, we obtain the result: c^* > ∑_j∈ s' a_ij/B_i + ∑_j∈ s'θ_ij = c_s'.
Case (2): s^* ⊂ s'
In this case, the idea used above can not be applied since adding ϵ to every bid b^*_ij would create bids b'_ij that exceed the budget B_i.
As stated above, the equality c^* = ∑_j∈ s^* a_ij/B_i + ∑_j∈ s^*θ_ij holds where the sums are taken over all members of s^*, by rearranging we get c^*B_i + c^* ∑_j∈ s^*θ_ij=∑_j∈ s^* a_ij. For all j ∈ s' ∖ s^* it holds that c^* θ_ij≥ a_ij and by summing those inequalities for all j and adding the equality above we obtain: c^*B_i + c^* ∑_j∈ s'θ_ij≥∑_j∈ s' a_ij Rearranging yields the result: c^* ≥∑_j∈ s' a_ij/B_i + ∑_j∈ s'θ_ij = c_s'.
And so, c^* is obtained as the maximum over all c_s, as required.
The function BR_i:S_-i→ S_i which maps b_-i to the best response b_i^* is continuous.
By Lemma <ref>, best-response bids are given by b^*_ij=max{a_ij/c^* - θ_ij, 0}, with support s^*_i. We wish to show that b^*_i is continuous in b_-i. We do so by showing that b^*_ij is obtained by a composition of continuous functions. As θ_i is a sum of elements from b_-i, it suffices to prove continuity in the variable θ_i. The expression for b^*_ij is the maximum between zero and a continuous function of θ_ij, which is continuous in θ_i, and so we are left to prove that a_ij/c^* - θ_ij is continuous in θ_i. More specifically, it suffices to show that c^* as defined in Lemma <ref> is continuous in θ_i.
By Lemma <ref>, c^* is obtained as the maximum over all c_s functions, where each is a continuous function itself in θ_i, and thus c^* is continuous in θ_i.
To prove Theorem <ref> on the convergence of best-response dynamics we use the following known result (for further details, see Jensen 2009 <cit.>).
Theorem (Jensen 2009 <cit.>): Let G be a best-reply potential game with single-valued, continuous best-reply functions and compact strategy sets. Then any admissible sequential best-reply path converges to the set of pure strategy Nash equilibria.
(Theorem <ref>):
G̃ is a potential game, which is a stricter notion than being a best-reply potential game (i.e., every potential game is also a best-reply potential game). By Lemma <ref>, best replies are unique, and so the function BR_i is single valued. Furthermore, Lemma <ref> shows that it is also a continuous function. By definition, every i's strategy set S_i is compact. Admissibility of the dynamics is also guaranteed by the liveness constraint on adversarial scheduling of the dynamics, and thus by the theorem cited above, best-reply dynamics converges to the set of Nash equilibria of G̃.
Since every element in this set is market equilibrium (by Theorem <ref>) and equilibrium prices are unique (see the model section in the main text), we have that any dynamic of the prices are guaranteed to converge to equilibrium prices. Furthermore if the market is generic then there is a unique market equilibrium (by Theorem <ref>) and convergence to the set in fact means convergence to the point b^*, the market-equilibrium bids.
(Proposition <ref>):
Fix a player i, fix any bid profile b_-i of the other players and let b^*_i be i's best response to b_-i, by Lemma <ref>, b^*_ij=(a_ij/c^*-θ_ij)^+ for c^* being a unique constant. We present a simple algorithm (Algorithm <ref>) which computes c^* and has a run-time of 𝒪(mlog(m)).
To see that this process indeed reaches c^*, assume w.l.o.g. that the goods are sorted by a_ij/θ_ij in a descending order. For ease of exposition, assume θ_ij > 0 for all j; the case with θ_ij = 0 for some goods is similar. By Lemma <ref> we have s^* = {j| a_ij > c^* θ_ij}. And so, if k < j and j∈ s^* then k∈ s^*, since in this case a_ik/θ_ik ≥ a_ij/θ_ij > c^*. Therefore, s^* must be one of the prefix sets [1], [2], [3], …, [m]. By Lemma <ref> we have c^*=max_s ⊆ [m] c_s. For each such prefix s, the algorithm computes c_s=(∑_j∈ s a_ij)/(B_i + ∑_j∈ sθ_ij) and finds the maximal among all such c_s. Therefore it finds c^*.
As for the running time of the algorithm, it is dominated by the running time of the sorting operation which is 𝒪(mlog(m)).
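A short sketch of the computation just described, in Python (the function and variable names are illustrative and not taken from the paper): sort the goods by a_ij/θ_ij, take c^* as the best prefix value c_s, and read off the bids b^*_ij=(a_ij/c^* - θ_ij)^+.

def best_response(a_i, theta_i, B_i):
    # a_i[j]: valuations of player i, theta_i[j]: total bids of the other
    # players on good j, B_i: budget. Returns the water-filling best response.
    m = len(a_i)
    # Sort goods by bang-per-buck at zero bid, a_ij / theta_ij, in descending
    # order (goods with theta_ij == 0 come first).
    order = sorted(range(m),
                   key=lambda j: float('inf') if theta_i[j] == 0 else a_i[j] / theta_i[j],
                   reverse=True)
    # c^* = max over prefixes s of the sorted order of
    #       (sum_{j in s} a_ij) / (B_i + sum_{j in s} theta_ij).
    c_star, sum_a, sum_t = 0.0, 0.0, 0.0
    for j in order:
        sum_a += a_i[j]
        sum_t += theta_i[j]
        c_star = max(c_star, sum_a / (B_i + sum_t))
    # Best-response bids b^*_ij = (a_ij / c^* - theta_ij)^+.
    return [max(a_i[j] / c_star - theta_i[j], 0.0) for j in range(m)]

# With the toy numbers used earlier: support {1, 2} and bids (2/15, 1/15, 0).
print(best_response([0.5, 0.4, 0.1], [0.2, 0.2, 0.4], 0.2))

The running time is dominated by the sort, matching the O(m log(m)) bound above.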
After proving that the best response to a bid profile can be computed efficiently, we can prove now that proportional response, applied by a single player while all the other players' bids are held fix, converges in the limit to that best response.
(Proposition <ref>):
Fix a player i and fix any bid profile b_-i of the other players, let b^*_i be the best response of i to b_-i with support s^* and let (b^t_i)_t=1^∞ be a sequence of consecutive proportional responses made by i. That is, b^t+1_i = f_i(b^t_i). We start the proof with several claims proving that any sub-sequence of (b^t_i)_t=1^∞ cannot converge to any fixed point of f_i other than b^*_i. After establishing this, we prove that the sequence indeed converges to b^*_i.
Claim 1: Every fixed point of the Proportional Response Dynamics has equal bang-per-buck for all goods with a positive bid. That is, if b^**_i is a fixed point of f_i then a_ij/p^**_j=u^**_i/B_i for every good j with b^**_ij > 0, where u^**_i is the utility achieved by i with the bids b^**_i.
Proof: By substituting b^**_i into the PRD update rule, we have that b^**_i = f_i(b^**_i) if and only if for all j,
b^**_ij = [(a_ij/p^**_j)/(u^**_i/B_i)] b^**_ij,
which holds if and only if, for every j, either b^**_ij = 0 or a_ij/p^**_j = u^**_i/B_i.
Claim 2: The following properties of b^*_i hold.
* Except b^*_i, there are no other fixed points of f_i with a support that contains the support of b^*_i. Formally, there are no fixed points b^**_i ≠ b^*_i of f_i with support s^** such that s^* ⊂ s^**.
* The bids b^*_i achieve a higher utility in the original game G, denoted u^*_i, than any other fixed point of Proportional Response Dynamics. Formally, let b^**_i be any fixed point other than b^*_i, with utility u^**_i in the original game G, then u^*_i > u^**_i.
Proof:
Let b_i be any fixed point of f_i. By the previous claim it holds that a_ij/p_j = u_i/B_i whenever b_ij > 0. Multiplying by p_j yields a_ij = (u_i/B_i) p_j. Summing over j with b_ij>0 and rearranging yields u_i/B_i = (∑_j∈ s a_ij)/(∑_j∈ s p_j) = c_s as defined in Lemma <ref>, with s the support of b_i. By that lemma, we have that c_s^*≥ c_s for any set s distinct from s^*. Thus, we have that u^*_i/B_i = c_s^*≥ c_s^** = u^**_i/B_i for b^**_i being a fixed point of f_i other than b^*_i with support s^** and utility value u^**_i.
Assume for the sake of contradiction that s^*⊂ s^**. If j∈ s^* then j∈ s^**. By Claim 1 for every such j the following inequality holds,
a_ij/p^*_j = u^*_i/B_i≥u^**_i/B_i = a_ij/p^**_j,
implying that p^**_j ≥ p^*_j. Subtracting θ_ij from both sides yields b^**_ij≥ b^*_ij. Summing over j∈ s^* yields a contradiction:
B_i = ∑_j∈ s^* b^*_ij≤∑_j∈ s^* b^**_ij < ∑_j∈ s^** b^**_ij = B_i,
where the first inequality is as explained above, and the last by the strict set containment s^* ⊂ s^**.
Finally, as there are no fixed points with support s^** containing s^*, by Lemma <ref>, the inequality stated above is strict, that is c_s^* > c_s^** and so u^*_i > u^**_i.
Claim 3: If b^**_i ≠ b^*_i is a fixed point of f_i then b^**_i is not a limit point of any sub-sequence of (b^t_i)_t=0^∞.
Proof:
The proof considers two cases:
(1) When u_i is continuous at b^** (2) when continuity doesn't hold.
Let (b^t_k)_k=1^∞ be a converging subsequence of (b^t_i)_t=0^∞.
Case (1):
The utility function u_i is continuous at b^**_i when for every good j it holds that θ_ij > 0 or b^**_ij > 0, i.e., when there is no good j with both θ_ij = 0 and b^**_ij = 0. This is implied directly by the allocation rule x_ij=b_ij/(θ_ij + b_ij) (see the formal definition in Section <ref>) and the fact that u_i=∑_j a_ijx_ij.
Examine the support of b^**_i, by Claim 2 there are no fixed points with support set s^** containing s^*. Therefore s^* ⊄s^** implying that there exists a good j∈ s^* ∖ s^**. That is, by definition of the supports, there exists j with b^*_ij >0 and b^**_ij = 0. Consider such j and assume for the sake of contradiction that b^**_i is indeed a limit point. Then, by definition, for every δ^**>0 exists a T s.t. if t> T then b^t_k_i - b^**_ij< δ^**. Specifically it means that |b^t_k_ij - b^**_ij| < δ^** whenever t>T.
By Claim 2, u^*_i > u^**_i. Then, by continuity there exists a δ' s.t. if b^**_i - b_i< δ' then |u_i(b_i) - u^**_i| < u^*_i - u^**_i.
Take δ^** < min{δ', b^*_ij}; by the assumption of convergence, there is a T s.t. for t_k > T we have that b^t_k_i - b^**_i < δ^**. This implies
(I) |b^t_k_ij - 0| < δ^** < b^*_ij, as b^**_ij=0, and (II) |u^t_k_i - u^**_i| < u^*_i - u^**_i, which implies u^t_k_i < u^*_i. From these two, we can conclude that
a_ij/p^t_k_j = a_ij/(θ_ij + b^t_k_ij) > a_ij/(θ_ij + b^*_ij) = u^*_i/B_i > u^t_k_i/B_i.
Finally, observe that by rearranging the PRD update rule we get b^t_k+1_ij = [(a_ij/p^t_k_j)/(u^t_k_i/B_i)] b^t_k_ij, implying that b^t_k+1_ij > b^t_k_ij, since the bracketed ratio is greater than 1 for t_k > T and b^0_ij > 0. This means that for all t_k> T we have b^t_k_ij > b^T+1_ij. That is, b^t_k_ij cannot converge to zero, and the subsequence thus cannot converge to b^**_i, a contradiction.
Case (2): When there exists a good j with θ_ij = 0 and b^**_ij = 0, the function u_i is not continuous at b^**_i and the previous idea does not work. Instead we will contradict the PRD update rule. Assume for the sake of contradiction that b^**_i is a limit point of a subsequence of PRD updates. Then for every ϵ there exists a T s.t. if t_k>T then |b^t_k_ij-b^**_ij|<ϵ. Note that b^**_ij=0 in this case, and set ϵ < a_ijB_i/m, so that for t_k>T it holds that a_ij/b^t_k_ij> m/B_i. Also note that p^t_k_j = θ_ij + b^t_k_ij = b^t_k_ij and that the maximal utility a buyer may have is m (when it is allocated every good entirely). Then overall we have that a_ij/p^t_k_j>m/B_i > u^t_k_i/B_i.
The PRD update rule is b^t_k+1_ij=[(a_ij/p^t_k_j)/(u^t_k_i/B_i)]b^t_k_ij. But since the ratio (a_ij/p^t_k_j)/(u^t_k_i/B_i) is greater than 1, it must be that b^t_k+1_ij> b^t_k_ij. And so every subsequent element of the subsequence is bounded below by b^T+1_ij>0 and, as before, we reach a contradiction as the subsequence cannot converge to b^**_i.
Finally, we can prove the convergence of the sequence (b^t_i)_t=1^∞. As the action space S_i is compact, there exists a converging subsequence b^t_k_i with limit b^**_i. If b^**_i = b^*_i for every such subsequence, then clearly we are done. Otherwise, assume b^**_i ≠ b^*_i. By the previous claim, any fixed point of f_i other than b^*_i is not a limit point of any subsequence, thus b^**_i is not a fixed point of f_i. By Lemma <ref>, any subset of players performing proportional response strictly increases the potential function unless performed at a fixed point; in particular, a proportional response of a single player, with all others remaining fixed, strictly increases the potential at each such step. Let ϵ < Φ(f_i(b^**)) - Φ(b^**); this quantity is positive since b^**_i is not a fixed point. The function Φ∘ f_i is continuous and b^t_k_i converges to b^**_i, therefore there exists a T such that for all t_k > T we have that |Φ(f_i(b^**)) - Φ(f_i(b^t_k))| < ϵ. Substituting ϵ yields Φ(f_i(b^t_k)) > Φ(f_i(b^**)) - ϵ > Φ(b^**), and since the potential is non-decreasing along the dynamic, Φ(b^{t_{k+1}}) ≥Φ(b^{t_k+1}) = Φ(f_i(b^t_k)) > Φ(f_i(b^**)) - ϵ. That is, the subsequence values Φ(b^{t_{k+1}}) are bounded away from Φ(b^**) by the fixed gap Φ(f_i(b^**)) - ϵ - Φ(b^**) > 0, contradicting the fact that Φ(b^t_k) →Φ(b^**), which holds by continuity of Φ and the assumed convergence.
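The following small numerical sketch in Python (hypothetical numbers; the names are illustrative, not from the paper) mirrors the proposition: iterating the proportional response of a single player, with all other bids held fixed and strictly positive starting bids on every good, approaches the water-filling best response.

def prd_step(b_i, a_i, theta_i, B_i):
    # One proportional response of player i: b_ij <- B_i * a_ij x_ij / u_i,
    # where x_ij = b_ij / p_j and p_j = theta_ij + b_ij.
    m = len(a_i)
    p = [theta_i[j] + b_i[j] for j in range(m)]
    x = [b_i[j] / p[j] for j in range(m)]
    u_i = sum(a_i[j] * x[j] for j in range(m))
    return [B_i * a_i[j] * x[j] / u_i for j in range(m)]

a_i, theta_i, B_i = [0.5, 0.4, 0.1], [0.2, 0.2, 0.4], 0.2
b_i = [B_i / 3] * 3          # positive initial bids on all goods
for _ in range(200):
    b_i = prd_step(b_i, a_i, theta_i, B_i)
print(b_i)                   # close to (2/15, 1/15, 0), the best response above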
§ SIMULTANEOUS PLAY BY SUBSETS OF AGENTS
In order to prove Lemma <ref>, we first need some further definitions and technical lemmas. We use the notation D(x‖y) to denote the KL divergence between the vectors x and y, i.e., D(x‖y)=∑_j x_j ln(x_j/y_j).
For a subset of the players v⊆ℬ, the subscript v on vectors denotes the restriction of the vector to the coordinates of the players in v, that is, for a vector b we use the notation b_v=(b_ij)_i∈ v, j∈ [m] to express the restriction to the subset. ℓ_Φ(b_v;b'_v) denotes the linear approximation of Φ;
that is, ℓ_Φ(b_v;b'_v)=Φ(b'_v) + ∇_b_vΦ(b'_v)(b_v-b'_v).
The idea described in the next lemma to present the potential function as a linear approximation term and a divergence term was first described in <cit.> for a different scenario when all agents act together in a synchronized manner using mirror descent; we extend it to any subset of players which required us to introduce different proof methods and as well as to embed it in a game.
Fix a subset of the players v ⊂ℬ and a bid profile b_-v of the other players. Then, for all b_v, b'_v ∈ S_v we have that Φ(b_v) = ℓ_Φ(b_v; b'_v) - D(p p'), where p=∑_i∉ v b_ij + ∑_i∈ v b_ij and p'=∑_i∉ v b_ij + ∑_i∈ v b'_ij.
Calculating the difference Φ(b_v) - ℓ_Φ(b_v;b'_v) yields
Φ(b_v) - ℓ_Φ(b_v;b'_v) = Φ(b_v) - Φ(b'_v) - ∇_b_vΦ(b'_v)(b_v - b'_v).
We rearrange the term Φ(b_v) - Φ(b'_v) as follows.
Φ(b_v) - Φ(b'_v) = ∑_i∈ v, j∈[m] (b_ij - b'_ij)ln(a_ij)-∑_j(p_jln(p_j)-p'_jln(p'_j)) -∑_j(p_j-p'_j)
= ∑_i∈ v, j∈[m] (b_ij - b'_ij)ln(a_ij)-∑_j(p_jln(p_j)-p'_jln(p'_j)),
where the last equality is since ∑_j p_j = 1 for any set of prices because the economy is normalized (see the model section in the main text).
The term ∇_b_vΦ(b'_v)(b_v - b'_v) is expanded as follows.
∇_b_vΦ(b'_v)(b_v - b'_v) = ∑_i∈ v, j∈[m]ln(a_ij/p'_j)(b_ij-b'_ij)
=∑_i∈ v, j∈[m]ln(a_ij)(b_ij-b'_ij) - ∑_i∈ v, j∈[m]ln(p'_j)(b_ij-b'_ij)
Subtracting the latter from the former cancels out the term ∑_i∈ v, j∈[m]ln(a_ij)(b_ij-b'_ij), and we are left with the following.
Φ(b_v) - ℓ_Φ(b_v;b'_v) = Φ(b_v) - Φ(b'_v) - ∇_b_vΦ(b'_v)(b_v - b'_v)
=∑_i∈ v, j∈[m]ln(p'_j)(b_ij-b'_ij) -∑_j(p_jln(p_j)-p'_jln(p'_j))
= -∑_jp_jln(p_j)-(p'_j - ∑_i ∈ v b'_ij + ∑_i ∈ v b_ij)ln(p'_j)
= -∑_jp_jln(p_j)-(θ_vj + ∑_i ∈ v b_ij)ln(p'_j)
= -∑_j p_jln(p_j) - p_jln(p'_j)
=-D(pp').
For any subset of the players v ⊂ℬ and any bid profile b_-v of the other players and for every b_v, b'_v ∈ S_v it holds that D(pp') ≤ D(b_vb'_v), with equality only when b_v=b'_v.
We begin by proving a simpler case where v={i} for some player i and use it to prove the more general statement. Fix i and b_-i, which implies fixing some θ_i. KL divergence is jointly convex in both arguments, with equality only if the arguments are equal; formally, for λ∈ (0,1) it holds that D(λθ_i + (1-λ)b_i ‖λθ_i + (1-λ)b'_i) ≤λ D(θ_i ‖θ_i) + (1-λ) D(b_i ‖ b'_i), which is equivalent to D(λθ_i + (1-λ)b_i ‖λθ_i + (1-λ)b'_i) ≤ (1-λ) D(b_i ‖ b'_i), with equality only if b_i=b'_i (since D(θ_i ‖θ_i) = 0). Substituting λ=1/2 and noting that p_j=θ_ij + b_ij (and the same for p'_j and b'_ij), we obtain the following relation.
D(1/2 p ‖ 1/2 p') = D(1/2θ_i + 1/2b_i ‖ 1/2θ_i + 1/2b'_i)
≤1/2 D(b_i ‖ b'_i).
On the other hand, the expression D(1/2 p ‖ 1/2 p') can be evaluated as follows.
D(1/2 p ‖ 1/2 p') = ∑_j 1/2p_jln(1/2p_j/1/2p'_j)
=1/2∑_j p_jln(p_j/p'_j)
=1/2 D(p ‖ p').
And therefore, we have
D(p ‖ p') ≤ D(b_i ‖ b'_i), with equality only if b_i = b'_i.
Now we can prove the general case, as stated fix v and b_-v and let b_v, b'_v∈ S_v. We know that for all i∈ v it is true that D(pp')≤ D(b_i b'_i), summing those inequalities for all i∈ v yields |v|D(pp')≤∑_i∈ v D(b_i b'_i), on the one hand clearly D(pp') ≤ |v|D(pp') and on the other hand ∑_i∈ v D(b_i b'_i) = ∑_i∈ v∑_j b_ijln(b_ij/b'_ij) = D(b_v b'_v) and the result is obtained.
Let v⊆ℬ, let f_v:S→ S be a proportional response update function for members of v and identity for the others, and let b'∈ S be some bid profile. Then, (f_v(b'))_v=max_b_v∈ S_v{ℓ_Φ(b_v; b'_v) - D(b_v b'_v) }.
By adding and removing constants that do not change the maximizer of the expression on the right hand side, we obtain that the maximizer is exactly the proportional response update rule:
max_b_v∈ S_v{ℓ_Φ(b_v; b'_v) - D(b_v b'_v) } = max_b_v∈ S_v{Φ(b'_v) + ∇_b_vΦ(b'_v)(b_v-b'_v) - D(b_v b'_v) }
= max_b_v∈ S_v{∇_b_vΦ(b'_v)b_v - D(b_v b'_v) }
= max_b_v∈ S_v{∇_b_vΦ(b'_v)b_v - D(b_v b'_v) - ∑_i∈ v B_i ln(u'_i/B_i)}.
Rearranging the last expression by elements yields the following result,
∇_b_vΦ(b'_v)b_v - D(b_v b'_v) - ∑_i∈ v B_i ln(u'_i/B_i) =
= ∑_i∈ v, j∈ [m] b_ijln(a_ij/p'_j) - ∑_i∈ v, j∈ [m] b_ijln(b_ij/b'_ij)- ∑_i∈ v, j∈[m] b_ijln(u'_i/B_i)
=∑_i ∈ v, j ∈ [m] b_ijln(a_ij/p'_jb'_ij/b_ijB_i/u'_i)
=∑_i ∈ v, j ∈ [m] b_ijln(a_ijx'_ij/u'_iB_i/b_ij)
=-∑_i ∈ v, j ∈ [m] b_ijln(b_ij/a_ijx'_ij/u'_iB_i),
which is exactly -D(b_v (f_v(b'))_v), since (f_v(b'))_ij = a_ijx'_ij/u'_iB_i for i∈ v by definition. That is, our maximization problem is equivalent to min{ D(b_v (f_v(b'))_v) }. Finally, note that KL divergence is minimized when both of its arguments are identical, and (f_v(b'_v))_v ∈ S_v, the domain of the minimization.
(Lemma <ref>):
Let v ⊆ℬ be a subset of players and let b ∈ S be some bid profile. By combining the lemmas proved in this section have that
Φ(f_v(b)) ≥ℓ_Φ(f_v(b); b) - D(f_v(b) b)
≥ℓ_Φ(b; b) - D(b b)
= Φ(b),
where the first inequality is by Lemmas <ref> and <ref> with the inequality being strict whenever f_v(b) ≠ b, and the second inequality is by Lemma <ref>, as f_v(b) was shown to be the maximizer of this expression over all b∈ S.
An interesting case to note here is when v=i. In this case, the lemmas above show that if the players' bids are b^t and i is being activated by the adversary, then the best response bids of i to b^t_-i are the solutions to the optimization problem max_b_i∈ S_i{ℓ_(b_i;b^t_i) - D(pp^t)}. On the other hand, the proportional response to b^t_-i is the solution to the optimization problem max_b_i∈ S_i{ℓ_(b_i;b^t_i) - D(b_ib^t_i)}. This can be seen as a relaxation of the former, as proportional response does not increase (or equivalently the potential) as much as best response does. However, proportional response is somewhat easier to compute.
§ GENERIC MARKETS
(Theorem <ref>): Assume by way of contradiction that a generic linear Fisher market has two distinct market equilibrium bid profiles b^* ≠ b^**. For any market equilibrium b it must hold that: (1) ∀ j ∑_i b_ij = p^*_j since equilibrium prices are unique; and (2) ∀ i ∑_j b_ij = B_i by budget feasibility.
As b^* ≠ b^**, there exists a pair (i,j) with b^*_ij≠ b^**_ij, meaning that buyer i has a different bid on good j between b^* and b^**, and so by (1) it must be that there exists a buyer k whose bid on good j also changed so that the price p^*_j remains fixed; formally, b^*_kj≠ b^**_kj. In such case, by (2) there must be a good ℓ for which buyer k has a different bid as well, since its budget B_k is fixed and fully utilized; formally b^*_kℓ≠ b^**_kℓ. As the graph Γ={ℬ∪𝒢, E} with E={{i,j} | b^*_ij≠ b^**_ij} is finite, following the process described above while obeying the constraints (1) and (2) must lead to a cycle in the graph Γ.
Finally, we will show that there exists a market equilibrium with a cycle in its corresponding graph. Define b'=λ b^* + (1-λ) b^** for some λ∈ (0,1) and note that b' is also market equilibrium as the set of market equilibria is a convex set (see the model section in the main text). Let Γ(b')={ℬ∪𝒢, E(b')} with E(b')={{i,j} | b'_ij > 0} be the corresponding graph of b'. Observe that E ⊆ E(b') since if b^*_ij≠ b^**_ij then it must be that b^*_ij > 0 or b^**_ij > 0 and in any such case b'_ij > 0.
Thus, the graph Γ(b') contains a cycle, contradicting Lemma <ref> from the main text
(Lemma <ref>): Assume for the sake of contradiction that there exists a cycle C in Γ(b^*); w.l.o.g. name the vertices of buyers and goods participating in the cycle in an ascending order, that is, C=b_1g_1b_2g_2… b_kg_kb_1, where b_i and g_i represent buyer and good i, respectively. Recall that for any market equilibrium, if x^*_ij >0 then a_ij/p^*_j = c_i for some constant c_i (see the model section in the main text). Applying this to the cycle C yields the following equations. (1) By considering edges from buyers to goods b_i → g_i we obtain, for i ∈ [k], that a_i,i = c_i p^*_i; and (2) by considering edges from goods to buyers g_i → b_i+1 we obtain, for i ∈ [k-1], that a_i+1,i= c_i+1 p_i^*, and the edge closing the cycle yields a_1,k= c_1 p_k^*. Finally, by considering the product of ratios between valuations of buyers participating in the cycle we have the following condition.
a_21/a_11a_32/a_22a_43/a_33…a_i+1,i/a_i,i…a_k,k-1/a_k-1,k-1a_1,k/a_k,k = c_2 p^*_1/c_1 p^*_1c_3 p^*_2/c_2 p^*_2c_4 p^*_3/c_3 p^*_3…c_i+1 p^*_i/c_i p^*_i…c_k p^*_k-1/c_k-1 p^*_k-1c_1 p^*_k/c_k p^*_k
= c_2/c_1c_3/c_2c_4/c_3…c_i+1/c_i…c_k/c_k-1c_1/c_k
= 1,
which contradicts the genericity of the market.
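As a minimal illustration (the formal genericity condition is stated in the model section of the main text): for the smallest possible cycle, k=2, i.e., C=b_1g_1b_2g_2b_1, the displayed product reduces to (a_21/a_11)(a_12/a_22), so the contradiction above is available precisely when a_11a_22≠ a_12a_21. A market in which every such alternating product of valuation ratios differs from 1 therefore admits no cycle in Γ(b^*).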
§ CONVERGENCE OF ASYNCHRONOUS PROPORTIONAL RESPONSE DYNAMICS
(Theorem <ref>): Assume that Φ: [0,1]^n→ℝ_≥ 0 is some continuous function with a single maximum point b^* (as is the case with our potential). We start with the following lemma.
For every ϵ>0 there exists δ>0 such that
Φ(b) > Φ(b^*) - δ
implies |b-b^*|<ϵ.
Assume otherwise that for some ϵ_0 there exists a sequence (b_t) such that Φ(b_t)→Φ(b^*) but |b_t-b^*| ≥ϵ_0 for all t. Take a condensation point b^** of this sequence and a subsequence (t_j) that converges to b^**. We have Φ(b^**)= limΦ(b_t_j) = Φ(b^*) and |b^**-b^*| = lim |b_t_j-b^*| ≥ϵ_0 > 0. The former equality must imply b^*=b^**, but the latter implies b^* ≠ b^**.
Next, for a subset of players A ⊂ℬ let f_A: [0,1]^n → [0,1]^n be the continuous function where j ∈ A do a proportional response update and the other players play the identity function.
(i.e., do not change their bids, see Section <ref> in the main text).
By Lemma <ref> from the main text we have that
(i) For all A we have that f_A(b)=b if and only if for all i ∈ A it holds that f_i(b)=b; and
(ii) Φ(f_A(b)) > Φ(b) unless f_A(b)=b.
The stable set of b^** is defined to be S(b^**)={i | f_i(b^**)=b^**}.
A corollary of (i) and (ii) above is that if A ⊆ S(b^**) then f_A(b^**)=b^**, but if A ∖ S(b^**) ≠∅ then Φ(f_A(b^**)) > Φ(b^**).
Let Φ(b^**) < Φ(b^*). Then there exists δ>0 such that for every |b-b^**|≤δ and every A with A ∖ S(b^**) ≠∅ we have that Φ(f_A(b)) > Φ(b^**).
Fix a set A such that A ∖ S(b^**) ∅ and let α = Φ(f_A(b^**)) - Φ(b^**) >0. Since Φ(f_A(·)) is continuous, there exists δ so that |b-b^**|≤δ implies Φ(f_A(b^**)) - Φ(f_A(b)) < α and thus Φ(f_A(b)) > Φ(b^**). Now take the minimum δ for all finitely many A.
Let Φ(b^**) < Φ(b^*) and let F be a finite family of continuous functions such that for every f ∈ F we have that f(b^**)=b^**. Then there exists ϵ>0 such that for every b with |b-b^**|≤ϵ, every f ∈ F, and every A with A ∖ S(b^**) ≠∅, we have that Φ(f_A(f(b))) > Φ(b^**).
Fix f ∈ F and let δ be as promised by the previous lemma, i.e. for every |z-b^**|≤δ and every A ∖ S(b^**) ∅ we have that Φ(f_A(z)) > Φ(b^**). Since f(b^**)=b^** and f is continuous there exists ϵ >0 so that |b-b^**| ≤ϵ implies |f(b)-f(b^**)| = |f(b)-b^**|≤δ and thus Φ(f_A(f(b))) > Φ(b^**). Now take the minimum ϵ over the finitely many f ∈ F.
A sequence of sets A_t ⊆ℬ is called T-live if for every i and for every t there exists some t ≤ t^* ≤ t+T such that i ∈ A_{t^*}.
Fix a sequence b = (b_t) where b_t+1 = f_A_t(b_t) such that the sequence A_t is T-live. Then it holds that
lim_t→∞ b_t = b^*.
Otherwise there exists a subsequence that converges to some other b^** where Φ(b^**)<Φ(b^*). Notice that as Φ(b_t) is increasing then Φ(b_t) ≤Φ(b^**) for all t.
Let F be a set of functions achieved by composition of at most T functions from {f_A | A ⊂ S(b^**)}.
So for every f ∈ F we have that f(b^**)=b^**, while for every A with A ∖ S(b^**) ≠∅ we have that Φ(f_A(b^**)) > Φ(b^**).
Let ϵ be as promised by the previous lemma, i.e., for every |b-b^**|≤ϵ, every f ∈ F, and every A such that A ∖ S(b^**) ≠∅, we have that Φ(f_A(f(b))) > Φ(b^**). Since the subsequence converges to b^**, there exists t_j in the subsequence so that |b_t_j-b^**| ≤ϵ. Now let t>t_j be the first time that A_t ∖ S(b^**) ≠∅. Then b_t+1 = f_A_t(f(b_t_j)), where f is the composition of the maps f_A_s for times s from t_j to t-1; each of these fixes b^** since A_s ⊆ S(b^**) for those times, and by T-liveness t - t_j ≤ T, so f ∈ F. We can now apply the previous lemma to get that Φ(b_t+1) = Φ(f_A_t(f(b_t_j))) > Φ(b^**), a contradiction.
The last lemma concludes our proof of Theorem <ref>.
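As a complementary numerical illustration (not part of the proof; all numbers and names below are hypothetical), one can simulate asynchronous proportional response in Python under random but live activation, and check that the potential Φ(b) = ∑_i,j b_ij ln(a_ij) - ∑_j p_j ln(p_j) — which agrees, up to an additive constant, with the differences computed in Lemma <ref> — never decreases while the prices settle down.

import math, random

a = [[0.6, 0.3, 0.1], [0.2, 0.3, 0.5]]   # valuations (one row per buyer)
B = [0.5, 0.5]                            # budgets, normalized economy

def prices(b):
    return [sum(b[i][j] for i in range(len(b))) for j in range(len(b[0]))]

def potential(b):
    p = prices(b)
    s = sum(b[i][j] * math.log(a[i][j]) for i in range(len(b)) for j in range(len(p)))
    return s - sum(pj * math.log(pj) for pj in p)

def prop_response(b, active):
    # Activated buyers bid B_i * a_ij x_ij / u_i; the others keep their bids.
    p = prices(b)
    new = [row[:] for row in b]
    for i in active:
        x = [b[i][j] / p[j] for j in range(len(p))]
        u = sum(a[i][j] * x[j] for j in range(len(p)))
        new[i] = [B[i] * a[i][j] * x[j] / u for j in range(len(p))]
    return new

b = [[B[i] / 3] * 3 for i in range(2)]    # positive initial bids
for t in range(500):
    active = random.sample([0, 1], random.randint(1, 2))  # random, live activation
    nb = prop_response(b, active)
    assert potential(nb) >= potential(b) - 1e-12          # potential is non-decreasing
    b = nb
print(prices(b))                          # approaches the equilibrium prices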
|
http://arxiv.org/abs/2307.04062v1 | 20230708235140 | CR compactification for asymptotically locally complex hyperbolic almost Hermitian manifolds | [
"Alan Pinoy"
] | math.DG | [
"math.DG",
"53C21, 53C35, 53C55, 58J60"
] |
In this article, we consider a complete, non-compact almost Hermitian manifold whose curvature is asymptotic to that of the complex hyperbolic plane.
Under natural geometric conditions, we show that such a manifold arises as the interior of a compact almost complex manifold whose boundary is a strictly pseudoconvex CR manifold.
Moreover, the geometric structure of the boundary can be recovered by analysing the expansion of the metric near infinity.
§ INTRODUCTION
The complex hyperbolic space is the unique simply connected, complete, Kähler manifold of constant negative holomorphic sectional curvature (we adopt the convention that this constant is -1).
It is the complex analogue of the real hyperbolic space, and similarly to its real counterpart, the complex hyperbolic space can be compactified by a sphere at infinity.
This sphere at infinity carries a natural geometric structure, which is closely related to the Riemannian geometry of the complex hyperbolic space: their respective groups of automorphisms are in one-to-one correspondence.
This structure is that of a strictly pseudoconvex CR manifold, namely, the CR sphere (𝕊,H,J).
If 𝕊 is thought of as the unit sphere of ^N, then H = (T𝕊)∩ (iT𝕊) is the standard contact distribution, and J is given by the multiplication by i in H.
Set ρ = e^-r with r the distance function to a fixed point.
Then ρ is a defining function for the boundary of the above compactification, and as ρ→ 0, the complex hyperbolic metric has the asymptotic expansion
1/ρ^2 dρ⊗ dρ + 1/ρ^2θ⊗θ + 1/ργ + o(1),
with θ the standard contact form of 𝕊, and γ = θ|_H× H(·,J·) the associated Levi-form.
The strict pseudoconvexity of the boundary means that the Levi-form is positive definite on H.
The aim of this paper is to construct a similar compactification by a strictly pseudoconvex CR structure for complete, non-compact, almost Hermitian manifolds satisfying some natural geometric conditions.
These conditions are the existence of a convex core (called an essential subset), the convergence of the curvature tensor R to that of the complex hyperbolic space R^0 near infinity, and the fact that the underlying almost complex structure J is asymptotically Kähler at infinity.
More precisely, we show the following.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of real dimension at least 4, which admits an essential subset.
Let r be the distance function to any compact subset.
Assume that there exists a > 1 such that
R-R^0_g, ∇ J_g, ∇ R_g, and ∇^2 J_g = 𝒪(e^-ar).
Then (M,J) is the interior of a compact almost complex manifold (M̅,J̅), whose underlying almost complex structure J̅ is continuous.
The hyperplane distribution H_0 = (T∂M̅)∩ (J̅T∂M̅) and the restriction J_0 = J̅|_H_0 are of class 𝒞^1.
Moreover, H_0 is a contact distribution, and J_0 is formally integrable, and (∂M̅,H_0,J_0) is a strictly pseudoconvex CR manifold.
In addition, the metric g is asymptotically complex hyperbolic: there exists a defining function ρ for the boundary, a 𝒞^1 contact form η^0 calibrating H_0, and a continuous Carnot metric γ, with η^0 and γ^0 = γ|_H_0× H_0 > 0 of class 𝒞^1, such that
g ρ→ 0=1/ρ^2 dρ⊗ dρ + 1/ρ^2η^0⊗η^0 + 1/ργ +
𝒪_g(ρ^a-1) if 1 < a < 3/2,
𝒪_g(ρ^1/2lnρ) if a = 3/2,
𝒪_g(ρ^1/2) if a > 3/2.
The contact form and the Carnot metric are related by the relation η^0|_H_0× H_0(·,J_0·) = γ^0.
This result gives a geometric characterisation of complete, non-compact, almost Hermitian manifolds admitting a compactification by a strictly pseudoconvex CR structure.
Notice the similarity between equations (<ref>) and (<ref>).
The real analogue of this result, involving a compactification by a conformal boundary for asymptotically locally real hyperbolic manifolds, has been proven by E. Bahuaud, J. M. Lee, T. Marsh and R. Gicquaud <cit.>, pursuing the seminal work of M. T. Anderson and R. Schoen <cit.>.
In a previous paper <cit.>, the author proved a similar result in the Kähler case.
The improvement here is twofold.
First, we are able to remove the Kähler assumption, which was of great importance in the previous proof.
Here, the almost complex structure is no more assumed to be parallel, and in fact, needs not even be formally integrable, nor the associated almost symplectic form needs to be closed.
In particular, the result applies to perturbations of asymptotically complex hyperbolic Kähler metrics which are only almost Hermitian.
Second, the strict pseudoconvexity of the boundary is obtained with an exponential decay of order a > 1, while the earlier version of this result needed a decay of order a > 3/2.
Note that this has a cost: the Carnot metric can be shown to be 𝒞^1 only in the direction of the contact distribution.
This is the reason why the extended almost complex structure J̅ is only continuous in the transverse direction.
Both improvements imply that the set of examples to which the result applies is much increased.
A compactification by a CR structure for some complete, non-compact, Kähler manifolds was already given by J. Bland <cit.>, under assumptions that are rather analytic and not totally geometric.
To obtain a continuous compactification with no regularity on the CR structure, these assumptions imply the a posteriori estimates R-R^0_g, ∇ R_g = 𝒪(e^-4r)[At first, one sees that these assumptions imply that R-R^0_g = 𝒪(e^-3r) and ∇ R_g = 𝒪(e^-4r).
Since on a Kähler manifold it holds that ∇ R^0 = 0, applying Kato's inequality to R-R^0 yields the claimed estimate.].
A strictly pseudoconvex boundary of class 𝒞^1 is obtained under assumptions that imply the even stronger estimates R-R^0_g,∇ R_g,∇^2 R_g = 𝒪(e^-5r).
It was proven by O. Biquard and M. Herzlich <cit.> that for asymptotically complex hyperbolic Kähler-Einstein metrics in real dimension 4, the curvature tensor has the form R = R^0 + Ce^-2r + o_g(e^-2r), where C is a non-zero multiple of the Cartan tensor of the CR boundary.
It is known that the Cartan tensor vanishes exactly when the CR structure is locally equivalent to that of the sphere (such CR manifolds are called spherical).
Many examples are then not covered by J. Bland's results.
The paper is organized as follows.
In Section <ref>, we set up the notations and explain the main idea of the proof of our main Theorem.
In Section <ref>, we compute the expansion of the metric near infinity and prove the existence of the objects η^0 and γ, see Theorem <ref>.
Section <ref> is dedicated to prove the existence of J_0, see Theorem <ref>.
At this step, η^0, γ and J_0 are continuous tensor fields.
We show in Section <ref> that they have higher regularity and that they induce a strictly pseudoconvex CR structure, see Theorems <ref>, <ref> and <ref>.
Finally, we prove our main Theorem in Section <ref>.
§ PRELIMINARIES
§.§ Notations
Let (M,g) be a Riemannian manifold.
Its Levi-Civita connection is denoted by ∇.
Our convention on the Riemann curvature tensor is Besse's convention <cit.>, namely
R(X,Y)Z = -(∇^2_X,Y Z - ∇^2_Y,XZ) = ∇_[X,Y]Z - ∇_X(∇_YZ) + ∇_Y(∇_XZ),
for vector fields X, Y and Z.
By abuse of notation, we still denote by R its four times covariant version: this means that we write R(X,Y,Z,T) = g(R(X,Y)Z,T) for vector fields X, Y, Z and T.
With this convention, the sectional curvature of a tangent plane P with orthonormal basis {u,v} is sec(P) = sec(u,v) = R(u,v,u,v).
§.§.§ Essential subsets and normal exponential map
Following <cit.>, an essential subset K ⊂ M is a codimension 0, compact, totally convex submanifold, with smooth boundary ∂ K oriented by a unit outward vector field ν, and such that sec(M∖ K) < 0.
In that case, the normal exponential map
[ ℰ: ℝ_+ ×∂ K ⟶ M̅∖̅ ̅K̅; (r,p) ⟼ exp_p(rν_p) ]
is a diffeomorphism.
The level hypersurface at distance r above K is denoted by ∂ K_r.
For r ⩾ 0, ℰ induces a diffeomorphism ℰ_r: ∂ K→∂ K_r given by ℰ_r(p)=ℰ(r,p); the induced Riemannian metric ℰ_r^*g on ∂ K is denoted by g_r.
Gauss Lemma states that ℰ^*g = dr ⊗ dr + g_r.
Note that g_0 = g|_∂ K.
The gradient of the distance function r on M̅∖̅ ̅K̅, called the radial vector field, is denoted by ∂_r.
A radial geodesic is a unit speed geodesic ray of the form r ↦ℰ(r,p) with p∈∂ K.
Note that the restriction of ∂_r to a radial geodesic is its tangent vector field: therefore, ∂_r satisfies the equation of geodesics ∇_∂_r∂_r=0.
More generally, a vector field X on M̅∖̅ ̅K̅ is called radially parallel if ∇_∂_rX=0.
The shape operator S is the field of symmetric endomorphisms on M̅∖̅ ̅K̅ defined by SX = ∇_X∂_r.
The normal Jacobi field on M̅∖̅ ̅K̅ associated to a vector field v on ∂ K is defined by Y_v = ℰ_*v.
Such vector fields are orthogonal to ∂_r and commute with the radial vector field ∂_r.
They satisfy the Jacobi field equation ∇_∂_r(∇_∂_rY_v) = -R(∂_r,Y_v)∂_r, and their restrictions to any radial geodesic are thus Jacobi fields.
Normal Jacobi fields are related to the shape operator S by the first order linear differential equation ∇_∂_rY_v = SY_v.
§.§.§ Almost Hermitian manifolds
An almost Hermitian manifold (M,g,J) is a Riemannian manifold (M,g) together with an almost complex structure J which is compatible with the metric, in the sense that it induces linear isometries in the tangent spaces: one has g(JX,JY) = g(X,Y) for all vector fields X and Y.
Note that this implies that J is skew-symmetric (in fact, these two properties are equivalent).
A tangent plane P⊂ TM is called J-holomorphic (respectively totally real) if JP=P (respectively JP⊥ P).
The constant -1 J-holomorphic sectional curvature tensor R^0 on (M,g,J) is defined by the equality
R^0(X,Y)Z = 1/4( g(Y,Z)X - g(X,Z)Y + g(JY,Z)JX - g(JX,Z)JY + 2g(X,JY)JZ)
for X, Y and Z vector fields on M.
Similarly to the Riemann curvature tensor, we still denote by R^0 its fully covariant version, meaning that R^0(X,Y,Z,T) = g(R^0(X,Y)Z,T) for all vector fields X, Y, Z and T.
Note that R^0_g ⩽3/2.
For any pair of orthogonal unit tangent vectors u and v, R^0(u,v,u,v) = -1/4(1+3g(Ju,v)^2);
the minimal value -1 (respectively the maximal value -1/4) is achieved precisely when {u,v} spans a J-holomorphic plane (respectively a totally real plane).
In the specific case of the complex hyperbolic space, R^0 coincides with the curvature tensor of the complex hyperbolic metric (see <cit.>).
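As a short verification of the value quoted above, for orthonormal u and v, equation (<ref>) gives
R^0(u,v,u,v) = 1/4( g(v,u)g(u,v) - g(u,u)g(v,v) + g(Jv,u)g(Ju,v) - g(Ju,u)g(Jv,v) + 2g(u,Jv)g(Ju,v) ).
The first term vanishes and the second equals -1; since J is skew-symmetric, g(Ju,u) = g(Jv,v) = 0 and g(Jv,u) = g(u,Jv) = -g(Ju,v), so that
R^0(u,v,u,v) = 1/4( -1 - g(Ju,v)^2 - 2g(Ju,v)^2 ) = -1/4(1+3g(Ju,v)^2).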
§.§.§ CR manifolds
A CR manifold (for Cauchy-Riemann) is a triplet (M,H,J) where H is a tangent distribution of hyperplanes and J is an almost complex structure on H, such that the distribution H^1,0 = { X - iJX | X ∈ H}⊂ TM⊗_ is involutive (i.e. [X,Y] is a section of H^1,0 whenever X and Y are).
In this case, J is said to be formally integrable.
A CR manifold is called strictly pseudoconvex if there exists a contact form η calibrating the distribution H (i.e. H=η and η induces a non-degenerate 2-form on H), and if the associated Levi form η|_H× H(·,J·) is positive definite on H.
§.§ The asymptotic conditions
Throughout the paper, (M,g,J) will denote a complete, non-compact, almost Hermitian manifold of dimension 2n+2⩾ 4, with an essential subset K.
We define the following asymptotic geometric conditions.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold.
Let r be the distance function to a compact subset.
* We say that (M,g,J) satisfies the ALCH condition of order a > 0, for asymptotically locally complex hyperbolic (for this condition implies that the local geometry at infinity resembles that of the complex hyperbolic space), if R-R^0_g = 𝒪(e^-ar).
* We say that (M,g,J) satisfies the AK condition of order a > 0, for asymptotically Kähler, if ∇ J_g = 𝒪(e^-ar).
Note that R^0_g ⩽3/2.
The ALCH condition of order a > 0 thus implies R_g = 𝒪(1).
One readily verifies that the ALCH condition implies that the sectional curvature of M is bounded as follows: -1 + 𝒪(e^-ar) ⩽ sec ⩽ - 1/4 + 𝒪(e^-ar).
The lower bound implies the following Lemma, proven in <cit.>.
Assume that (M,g,J) is a complete, non-compact, almost Hermitian manifold, admitting an essential subset K, and satisfying the condition of order a > 0.
Let S = ∇ be the shape operator of the level hypersurfaces above K.
Then one has
S_g ⩽ 1 + 𝒪(e^-ar) if 0 < a < 2,
𝒪((r+1)e^-2r) if a = 2,
𝒪(e^-2r) if a > 2.
In any case, one has S_g = 𝒪(1), and exp(∫_0^r S_g-1) = 𝒪(1).
We also define the following analogous asymptotic conditions of higher order.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold.
Let r be the distance function to a compact subset.
* We say that (M,g,J) satisfies the ALCH+ condition of order a > 0 if one has the estimates R-R^0_g = 𝒪(e^-ar) and ∇ R_g = 𝒪(e^-ar).
* We say that (M,g,J) satisfies the AK+ condition of order a > 0 if one has the estimates ∇ J_g = 𝒪(e^-ar) and ∇^2 J_g = 𝒪(e^-ar).
Under the condition of order a > 0, one has ∇ R^0_g = 𝒪(e^-ar).
Thus, under the condition of order a > 0, Kato's inequality shows that the condition of order a > 0 is equivalent to R-R^0_g r →∞⟶ 0 and ∇(R-R^0)_g = 𝒪(e^-ar).
In practice, r will be the distance function to the essential subset K.
The constants involved in the previous estimates are global.
Moreover, in what follows, all estimates of the form f = 𝒪(h) will involve a constant that is global.
When built out of the choice of a reference frame (which will soon be called an
admissible frame, see Definition <ref>), the constant will be independent of that choice.
By the expressions Y_u_g = 𝒪(u_g_0e^r) or Y_u = 𝒪_g(u_g_0e^r), we mean that there exists C > 0 such that for any vector field u on , one has
∀ r ⩾ 0, ∀ p ∈, (Y_u)_ℰ(r,p)_g ⩽ C u_p_g_0e^r.
§.§ Outline of the proof
If (M,g,J) is assumed to be Kähler (that is, if ∇ J=0), the author showed in a previous paper <cit.> the following result.
[<cit.>]
Let (M,g,J) be a complete, non-compact, Kähler manifold admitting an essential subset K.
Assume that there is a constant a>1 such that the estimates R-R^0_g,∇ R_g=𝒪(e^-ar) hold, where r is the distance function to any compact subset.
Then on , there exist a contact form η of class 𝒞^1, and a continuous symmetric positive bilinear form γ, positive definite on the contact distribution H=η, such that
ℰ^*g = r^2 + e^2rη⊗η + e^r γ + lower order terms.
If moreover a>3/2, then γ is of class 𝒞^1, and there exists a 𝒞^1 formally integrable almost complex structure J_H on H, such that γ|_H× H = η(·, J_H·).
In particular, (,H,J_H) is a strictly pseudoconvex CR manifold.
Notice the similarity between equations (<ref>) and (<ref>) by setting ρ = e^-r.
This result provides a compactification by a strictly pseudoconvex CR structure for a Kähler manifold whose curvature is asymptotically close to that of the complex hyperbolic space.
The proof is quite long, but can be summarised as follows:
* For {Jν,e_1,…,e_2n} an orthonormal frame on ∂ K, with ν the outward unit normal, let {J∂_r,E_1,…,E_2n} denote its parallel transport along radial geodesics (in the Kähler case, J∂_r is indeed radially parallel).
For r ⩾ 0, define η_r = ℰ_r^*(e^-rg(·,J∂_r)), and η^j_r = ℰ_r^*(e^-r/2g(·,E_j)), j∈{1,…,2n}, which are local 1-forms on ∂ K.
* If R-R^0_g = 𝒪(e^-ar), with a > 1/2, then {η_r,η^1_r…,η^2n_r}_r⩾ 0 converges to continuous 1-forms {η,η^1,…,η^2n}.
This implies that the metric reads as in equation (<ref>), where γ = ∑_j=1^2nη^j⊗η^j.
If moreover a > 1, volume comparison techniques show that the limit is a coframe.
* If in addition, ∇ R_g=𝒪(e^-ar), then the family of 1-forms (η_r)_r⩾ 0 converges in 𝒞^1 topology, the limit η is of class 𝒞^1, and is contact.
The proof uses several estimates, and tedious computations involving many curvature terms.
* If a>3/2, then (η_r^j)_r⩾ 0 locally uniformly converges in 𝒞^1 topology, for any j∈{1,…,2n}.
Hence, γ is of class 𝒞^1.
* If φ_r = ℰ_r^*(J - g(·,∂_r)⊗ J∂_r + g(·,J∂_r)⊗∂_r), then (φ_r)_r⩾ 0 uniformly converges to a tensor φ of class 𝒞^1.
Its restriction to H= η gives the desired formally integrable almost complex structure J_H.
The very first step of the proof crucially relies on the fact that J∂_r is parallel in the radial direction, and in fact, the equality ∇ J = 0 is used many times.
Note that the Kähler assumption is rather rigid: for instance, one has ∇ J = 0 if and only if the 2-form g(J·,·) is closed and J is formally integrable.
In this paper, we extend and improve the results of <cit.>.
First, the Kähler condition is removed: in fact, neither the closedness of g(J·,·) nor the formal integrability of J need to be met.
We instead consider an almost Hermitian manifold (M,g,J) whose almost complex structure J is only parallel at infinity, by imposing the condition ∇^k J_g = 𝒪(e^-ar), k∈{1,2}.
Second, we show that the strict pseudoconvexity of the boundary can be obtained with a > 1 instead of a > 3/2.
This sharper bound comes from deriving sharp geometric estimates in the direction of the contact structure.
In the context of this paper, the vector field J∂_r is not radially parallel, and one cannot even initiate the above strategy as it stands.
The main trick is to prove the existence, under our assumptions, of a unit vector field E_0 on M̅ ̅∖̅ ̅K̅ that is radially parallel, and that satisfies E_0-J∂_r_g = 𝒪(e^-ar).
This latter vector field is unique.
One can then consider a reference frame {E_0,…,E_2n} having nice properties, which we call an admissible frame (see Definition <ref> below), and try to mimic the above proof.
The counterpart is that the computations become longer and more involved; one also needs to show numerous extra estimates.
§ METRIC ESTIMATES
This section is dedicated to the derivation of the expansion near infinity of the metric g under the ALCH and AK conditions.
We first define the notion of admissible frames, which simplify future computations.
We then derive estimates on the asymptotic expansion of normal Jacobi fields, which turns out to be the main ingredients to show our results.
§.§ Admissible frames
We give a construction for some parallel orthonormal frames along radial geodesics in which later computations will be easier.
For v a vector field on ∂ K, let V be the vector field on M̅∖̅ ̅K̅ obtained by the parallel transport of v along radial geodesics.
Finally, for r ⩾ 0, define β_r(v) = g(J∂_r,V)|_∂ K_r.
This defines a family of 1-forms (β_r)_r⩾ 0 on ∂ K.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the AK condition of order a > 0.
Then there exists a continuous 1-form β on ∂ K such that
β_r - β = 𝒪_g_0(e^-ar).
Fix v a vector field on ∂ K and r ⩾ 0.
Both ∂_r and V are radially parallel, so that one has β_r(v)-β_0(v) = ∫_0^r ∂_r g(J∂_r,V) = ∫_0^r g((∇_∂_rJ)∂_r,V).
By the AK assumption, there exists C > 0 such that ∇ J_g ⩽ Ce^-ar.
The Cauchy-Schwarz inequality now implies that |∫_0^r g((∇_∂_rJ)∂_r, V)|⩽∫_0^r ∇ J_g V_g⩽ C(1-e^-ar)/a v_g_0.
Therefore, (β_r(v))_r⩾ 0 pointwise converges: let β(v) be its pointwise limit.
It defines a pointwise linear form on the tangent spaces of ∂ K, satisfying
|β(v)-β_r(v)|
= | ∫_r^∞ g((∇_∂_rJ)∂_r,V) |
≤∫_r^∞|g((∇_∂_rJ)∂_r,V)|
≤C/a e^-arv_g_0,
from which is derived equation (<ref>).
The convergence is thus uniform, and β is continuous.
We shall now show that β is nowhere vanishing.
For all r ⩾ 0, one has β_r_g_0 = 1 pointwise.
Indeed, for any v, Cauchy-Schwarz inequality implies that |β_r(v)| ⩽V_g = v_g_0.
Equality is reached for v = ι_r^-1(), where ι_r T→ T_r is induced by the parallel transport along radial geodesics.
It follows that β_g_0 = 1 pointwise, and that β is nowhere vanishing.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the condition of order a > 0.
Let U⊂ be an open subset on which the continuous distribution β is trivialisable.
Let {e_0,…,e_2n} be an orthonormal frame on U such that β(e_0) > 0 and β(e_j) = 0 if j∈{1,…,2n}.
The associated admissible frame {E_0,…,E_2n} on the cone E(_+× U) is defined as the parallel transport of {e_0,…,e_2n} along the radial geodesics.
If {E_0,…,E_2n} is an admissible frame, then {∂_r,E_0,…,E_2n} is an orthonormal frame on the cone ℰ(ℝ_+× U) whose elements are parallel in the radial direction, even though they need not be differentiable in the directions that are orthogonal to ∂_r.
In the following, we will often refer to admissible frames without mentioning the open subset U⊂ on which they are defined.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the condition of order a > 0.
Let {E_0,…,E_2n} be an admissible frame.
Then β(e_0) = 1.
One has 1 = J∂_r_g^2 = ∑_j=0^2nβ_r(e_j)^2, since J∂_r is orthogonal to ∂_r.
The result follows by taking the limit as r →∞.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the condition of order a > 0.
Let {E_0,…,E_2n} be an admissible frame and δ be the Kronecker symbol.
Then
* g(J∂_r,E_j) - δ_0j = 𝒪(e^-ar) for j∈{0,…,2n},
* E_0 - J∂_r = 𝒪_g(e^-ar).
The first point is a consequence of the equality g(J∂_r,E_j)=β_r(e_j) and of equation (<ref>).
For the second point, notice that
E_0-J∂_r = ∑_j=0^2ng(E_0-J∂_r,E_j)E_j = ∑_j=0^2n(δ_0j- g(J∂_r,E_j))E_j,
from which is derived the claimed estimate.
One easily shows that the vector field E_0 is the unique unit vector field X on ℰ(ℝ_+× U) such that ∇_∂_rX = 0 and g(X,J∂_r) = 1 + o(1).
If (M,g,J) is Kähler (if ∇ J = 0), then ∇_∂_r(J∂_r) = 0, and thus E_0 = J∂_r.
In this specific case, admissible frames can be chosen to be smooth, and correspond to the radially parallel orthonormal frames defined in <cit.>.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the ALCH and AK conditions of order a > 0.
Let {E_0,…,E_2n} be an admissible frame.
Then
* sec(∂_r,E_0) + 1 = 𝒪(e^-ar),
* sec(∂_r,E_j) + 1/4 = 𝒪(e^-ar) for j ∈{1,…,2n},
* R(∂_r,E_j,∂_r,E_k) = 𝒪(e^-ar) for any j ≠ k ∈{0,…,2n}.
We prove the first point, the others being shown similarly.
One readily verifies from the definition of R^0 that R^0(∂_r,J∂_r,∂_r,J∂_r) = -1; therefore, writing E_0 = J∂_r + (E_0-J∂_r) and expanding by multilinearity, it holds that
sec(∂_r,E_0)
= R^0(∂_r, J∂_r + (E_0-J∂_r), ∂_r, J∂_r + (E_0-J∂_r))+ (R-R^0)(∂_r,E_0,∂_r,E_0)
= -1 + 2R^0(∂_r,E_0-J∂_r,∂_r,J∂_r)
+ R^0(∂_r,E_0-J∂_r,∂_r,E_0-J∂_r)
+ (R-R^0)(∂_r,E_0,∂_r,E_0).
The definition of R^0 (see equation (<ref>)) yields R^0_g ⩽3/2, and the result follows from the ALCH assumption and from the second point of Corollary <ref>.
§.§ Associated coframes and normal Jacobi fields estimates
Recall that for r ⩾ 0, the diffeomorphism ℰ_r→_r is defined by ℰ_r(p) = ℰ(r,p).
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold with essential subset K.
Assume that it satisfies the condition of order a > 0.
Let {E_0,…,E_2n} be an admissible frame on the cone E(_+× U).
The associated coframe {η^0_r,…,η^2n_r}_r ⩾ 0 on U is defined by
η^0_r = ℰ_r^* (e^-r g(·,E_0)) and η^j_r = ℰ_r^*(e^-r/2g(·,E_j))
if j∈{1,…,2n}.
In any admissible frame, the normal Jacobi field Y_v associated to the vector field v on reads
Y_v = η^0_r(v) e^r E_0
+ ∑_j=1^2nη^j_r(v) e^r/2E_j.
Applying twice the differential operator ∇_∂_r to this last equality, one has
∇_∂_r(∇_∂_rY_v) = (∂_r^2 η^0_r(v)+ 2∂_rη^0_r(v) + η^0_r(v) )e^r E_0
+ ∑_j=1^2n(∂_r^2η^j_r(v) + ∂_rη^j_r(v) + 1/4η^j_r(v) )e^r/2E_j.
Recall that radial Jacobi fields are actual Jacobi fields, which means that they satisfy the second order linear differential equation ∇_∂_r(∇_∂_rY_v) = -R(∂_r,Y_v)∂_r.
An identification of the components of ∇_∂_r(∇_∂_rY_v) in the given admissible frame shows that the coefficients {η^j_r(v)}_j ∈{0,…,2n} satisfy the differential system
∂_r^2η^0_r(v) + 2 ∂_rη^0_r(v) =
∑_k=0^2n u^0_k η^k_r(v),
∂_r^2η^j_r(v) + ∂_rη^j_r(v) =
∑_k=0^2n u^j_k η^k_r(v), j∈{1,…,2n},
where the functions {u^j_k}_j,k∈{0,…,2n} are defined by
u^j_k = -
sec(∂_r,E_0) + 1 if j=k=0,
e^-r/2R(∂_r,E_0,∂_r,E_k) if j=0, k≠ 0,
e^r/2 R(∂_r,E_j,∂_r,E_0) if j≠ 0, k=0,
R(∂_r,E_j,∂_r,E_k) if j,k ∈{1,…,2n}, j≠ k,
sec(∂_r,E_j) + 1/4 if j,k∈{1,…,2n}, j=k.
Proposition <ref> implies that one has the uniform estimates |u^j_k| = 𝒪(e^-(a-1/2)r).
Combining the proofs of <cit.>, relying on successive integrations, an application of Grönwall's Lemma, and a bootstrap argument, one obtains the following result.
The last claim relies on estimates on the growth of the volume (see <cit.>).
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the ALCH and AK conditions of order a>1/2.
Let {η^0_r,…,η^2n_r}_r ⩾ 0 be the coframes associated to an admissible frame on U⊂.
Then there exists continuous 1-forms {η^0,…,η^2n} on U
∂_r η^0_r, η^0_r - η^0 =
𝒪_g_0(e^-ar) if 1/2 < a <3/2,
𝒪_g_0((r+1)e^-3/2r) if
a = 3/2,
𝒪_g_0(e^-3/2r) if
a > 3/2,
∀ j ∈{1,…,2n}, ∂_r η^j_r, η^j_r - η^j =
𝒪_g_0(e^-(a-1/2)r) if 1/2 < a <3/2,
𝒪_g_0((r+1)e^-r) if
a = 3/2,
𝒪_g_0(e^-r) if
a > 3/2.
If furthermore one assumes that a > 1, the family {η^0,…,η^2n} is a continuous coframe on U.
If a > 1/2, then η^j_r_g_0 is bounded independently of r, j, the choice of an admissible frame, and U.
For j∈{0,…, 2n} and r ⩾ 0, write η^j_r = η^j_0 + ∫_0^r η^j_r.
Notice that η^j_0_g_0 = 1.
Then by Proposition <ref>, η^j_r_g_0⩽η^j_0_g_0 + ∫_0^r η^j_r_g_0⩽ 1 + ∫_0^∞η^j_r_g_0 = 𝒪(1).
Recall that a normal Jacobi field Y_v satisfies ∇_∂_rY_v = SY_v.
The following corollary is an immediate consequence of Proposition <ref>.
In any admissible frame, the normal Jacobi field Y_v associated to a vector field v on satisfies
Y_v = η^0(v) e^r E_0 + ∑_j=1^2nη^j(v)e^r/2 E_j +
𝒪_g(v_g_0 e^-(a-1)r) if 1/2 < a <3/2,
𝒪_g(v_g_0 (r+1)e^-r/2) if
a = 3/2,
𝒪_g(v_g_0 e^-r/2) if
a > 3/2,
and
SY_v = η^0(v) e^r E_0 + ∑_j=1^2n1/2η^j(v)e^r/2 E_j +
𝒪_g(v_g_0 e^-(a-1)r) if 1/2 < a <3/2,
𝒪_g(v_g_0 (r+1)e^-r/2) if
a = 3/2,
𝒪_g(v_g_0 e^-r/2) if
a > 3/2.
As a consequence, one has the global estimates Y_v, SY_v = 𝒪_g(v_g_0e^r).
If moreover, v is everywhere tangent to η^0, then Y_v, SY_v = 𝒪_g(v_g_0e^r/2).
Note that although the estimates of Proposition <ref> are not uniform in all directions, they contribute equally to the lower order term in equations (<ref>) and (<ref>) thanks to the remaining exponential factors.
§.§ Global consequences and metric estimates
We shall now highlight global consequences of the study conducted in Subsections <ref> and <ref>.
We then prove the first of our main results.
Assume that (M,g,J) satisfies the condition of order a > 0.
Then the local vector field e_0 defined in Definition <ref> defines a global continuous vector field on , independently of the construction of any admissible frame.
The 1-form β defined in Lemma <ref> is continuous and nowhere vanishing.
Hence, the distribution β⊂ T is a continuous distribution of hyperplanes.
It follows that its g_0-orthogonal complement L is a well-defined and continuous line bundle.
Notice that the restriction of β trivialises L.
It follows that e_0 is the unique section of L that is positive for β, and of unit g_0-norm.
This concludes the proof.
The family of 1-forms {η^0_r}_r ⩾ 0 is then globally defined on , independently of the choice of the admissible frame.
As a consequence, one has the following global version of Proposition <ref>.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the ALCH and AK conditions of order a > 1/2.
Then there exists a continuous 1-form η^0 on ∂ K such that
∂_r η^0_r, η^0_r - η^0 =
𝒪_g_0(e^-ar) if 1/2 < a <3/2,
𝒪_g_0((r+1)e^-3/2r) if
a = 3/2,
𝒪_g_0(e^-3/2r) if
a > 3/2.
If furthermore one assumes that a > 1, then η^0 is nowhere vanishing.
The following Corollary is a straightforward application of the triangle inequality and of Corollary <ref>.
One has the following estimates
η^0_r ⊗η^0_r - η^0 ⊗η^0 =
𝒪_g_0(e^-ar) if 1/2 < a < 3/2,
𝒪_g_0((r+1)e^-3/2r) if a = 3/2,
𝒪_g_0(e^-3/2r) if a > 3/2.
From Gauss's Lemma, the Riemannian metric g reads as ℰ^*g = dr ⊗ dr + g_r, with (g_r)_r ⩾ 0 the smooth family of Riemannian metrics on ∂ K defined by g_r = ℰ_r^* g.
By construction, the first term that appears in the asymptotic expansion of the metric g near infinity is e^2rη^0 ⊗η^0.
For r⩾ 0, γ_r is defined as γ_r = e^-r( g_r - e^2rη^0_r ⊗η^0_r).
By definition, (γ_r)_r⩾ 0 is a family of symmetric 2-tensors on .
Let {η^0_r,…,η^2n_r}_r ⩾ 0 be the coframes associated to an admissible frame {E_0,…,E_2n}.
Then locally, γ _r = ∑_j=1^2nη^j_r⊗η^j_r.
Consequently, γ_r is positive semi-definite, and is positive definite on η^0_r, for any r ⩾ 0.
The following proposition shows that (γ_r)_r ⩾ 0 converges to some tensor that shares similar properties.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, and admitting an essential subset K. Assume that it satisfies the and conditions of order a > 1/2.
Then there exists a continuous positive semi-definite symmetric 2-tensor γ on , which we call the Carnot metric, such that
γ_r - γ =
𝒪_g_0(e^-(a-1/2)r) if 1/2 < a < 3/2,
𝒪_g_0((r+1)e^-r) if a = 3/2,
𝒪_g_0(e^-r) if a > 3/2.
If furthermore one assumes that a > 1, then γ is positive definite on η^0.
For r ⩾ 0, one has g_r = e^2rη^0_r⊗η^0_r + e^r γ_r.
Let {η^0_r,…,η^2n}_r ⩾ 0 be the coframes associated with an admissible frame.
Locally, one can express γ_r as γ_r = ∑_j=1^2nη^j_r⊗η^j_r.
Therefore, (γ_r)_r ⩾ 0 converges pointwise to a limit we call γ which is locally given by ∑_j=1^2nη^j⊗η^j.
In addition, one has the local expression
γ_r - γ = ∑_j=1^2nη^j_r⊗ (η^j_r-η^j) + (η^j_r-η^j) ⊗η^j.
The global estimates (<ref>) now follow from the triangle inequality and from an application of Proposition <ref> and Corollary <ref>.
As a consequence, γ is a continuous symmetric positive semi-definite 2-tensor.
If a > 1, then {η^0,…,η^2n} is a coframe (Proposition <ref>), and γ is hence positive definite on η^0.
The previous study implies the following comparison between quadratic forms.
If a > 1, there exists a constant λ > 1 such that for all r ⩾ 0, the comparison between quadratic forms 1/λ e^rg_0 ⩽ g_r ⩽λ e^2r g_0 holds.
For r ⩾ 0, η^0_r ⊗η^0_r and γ_r are positive symmetric 2-tensors.
Define q_r = η_r^0⊗η_r^0 + γ_r, which is a Riemannian metric on .
From g_r = e^2rη^0_r ⊗η^0_r + e^r γ_r, one readily checks that
∀ r ⩾ 0, e^r q_r ⩽ g_r ⩽ e^2rq_r.
According to Propositions <ref> and <ref>, q_r uniformly converges to the continuous Riemannian metric q_∞ = η^0 ⊗η^0 + γ as r→∞.
Let S^g_0 be the unit sphere bundle of (,g_0), which is compact by compactness of .
The map (r,v) ∈ [0,∞]× S^g_0↦ q_r(v,v)∈ (0,∞) is then continuous on the compact space [0,∞]× S^g_0.
Therefore, there exists λ > 1 such that for all r⩾ 0, 1/λ⩽ q_r ⩽λ on S^g_0.
The result now follows from equation (<ref>) and from the homogeneity of quadratic forms.
We shall now show the first of our main results.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the and assumptions of order a > 1/2.
Then on , there exists a continuous 1-form η^0 and a continuous positive semi-definite symmetric 2-tensor γ, such that in the normal exponential map E, the Riemannian metric g reads
ℰ^*g = r ⊗ r + e^2rη^0 ⊗η^0 + e^r γ +
𝒪_g_0(e^(2-a)r) if 1/2 < a < 3/2,
𝒪_g_0((r+1)e^r/2) if a = 3/2,
𝒪_g_0(e^r/2) if a > 3/2.
If furthermore one assumes that a > 1, then η^0 is nowhere vanishing, and γ is positive definite on the distribution of hyperplanes η^0.
Let (η^0_r)_r ⩾ 0, (γ_r)_r ⩾ 0 and their limits η^0 and γ be given by
Propositions <ref> and <ref>.
By construction, one has
ℰ^*g = r ⊗ r + e^2rη^0_r ⊗η^0_r + e^r γ_r
= r ⊗ r + e^2rη^0 ⊗η^0 + e^r γ + ε_r,
with ε_r = e^2r(η^0_r ⊗η^0_r - η^0 ⊗η^0) + e^r (γ_r - γ).
Estimates (<ref>) now follow from Corollary <ref> (estimates on η^0_r⊗η^0_r - η^0⊗η^0)
and Proposition <ref> (estimates on γ_r-γ).
Ultimately, if a > 1, the last claim follows from Propositions <ref> (η^0 is nowhere vanishing) and <ref> (γ is positive semi-definite, positive definite on η^0).
Setting g̅ = ℰ_*( r⊗ r + e^2rη^0⊗η^0 + e^r γ) on M̅∖̅ ̅K̅, Corollary <ref> shows that estimates (<ref>) read
g̅ - g =
𝒪_g(e^-(a-1)r) if 1/2 < a < 3/2,
𝒪_g((r+1)e^-r/2) if a = 3/2,
𝒪_g(e^-r/2) if a > 3/2.
If η^0 were a contact form and γ a Carnot metric on its kernel distribution, then g̅ would be asymptotically complex hyperbolic in the sense of <cit.>.
§.§ Estimates on the shape operator
Before we conclude this section, we give another consequence of the previous study: we derive asymptotic estimates on the shape operator S.
First, we introduce a natural vector field ξ_0, which is closely related to S.
The vector fields (ξ_0^r)_r ⩾ 0 on are defined as ξ_0^r = ℰ_r^* (e^r E_0).
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the and conditions of order a > 1.
Then there exists a continuous vector field ξ_0 on such that
ξ_0^r - ξ_0 =
𝒪_g_0(e^-(a-1/2)r) if 1 < a < 3/2,
𝒪_g_0((r+1)e^-r) if a = 3/2,
𝒪_g_0(e^-r) if a > 3/2.
It is uniquely characterised by the fact that η^0(ξ_0) = 1 and γ(ξ_0,ξ_0) = 0.
Define g̅_0 = η^0⊗η^0 + γ, which is a continuous Riemannian metric on according to Theorem <ref>.
Consider the continuous line bundle L̅ = (η^0)^⊥_g̅_0 on .
The restriction of η^0 trivialises L̅, which thus has a continuous nowhere vanishing section ξ.
Define ξ_0 = ξ/η^0(ξ), which is continuous by construction.
Let {η^0,…,η^2n} be the limit coframe associated with any admissible frame.
Then η^0(ξ_0) = 1 and η^j(ξ_0) = 0 for j∈{1,…,2n}.
In particular, ξ_0 is uniquely characterised by the relations η^0(ξ_0)=1 and γ(ξ_0,ξ_0)=∑_j=1^2nη^j(ξ_0)^2 = 0.
Notice that for j∈{1,…,2n} and r ⩾ 0, one has
η^j_r(ξ_0 - ξ_0^r) = η^j_r(ξ_0^r) - η^j_r(ξ) = δ^j_0 - η^j_r(ξ_0) = η^j(ξ_0) - η^j_r(ξ_0)= (η^j-η^j_r)(ξ_0),
where δ stands for the Kronecker symbol.
Corollary <ref> yields the existence of a constant c > 0 such that ξ_0^r - ξ_0_g_0⩽ c e^-r/2Y_(ξ_0^r - ξ_0)_g for all r ⩾ 0.
The triangle inequality together with equation (<ref>) now yield
Y_(ξ_0^r - ξ_0)_g ⩽(e^r η^0-η^0_r_g_0 + e^r/2∑_j=1^2nη^j-η^j_r_g_0) ξ_0_g_0.
Estimates (<ref>) now follow from the estimates of Proposition <ref>, together with the fact that ξ_0_g_0 is uniformly bounded by continuity of ξ_0 and compactness of .
Fix an admissible frame on U⊂.
If ξ_j^r = ℰ_r^* (e^r/2E_j) and if {ξ_0,…,ξ_2n} is the dual frame of {η^0,…,η^2n}, a similar study shows that
∀ j ∈{1,…,2n}, ξ_j - ξ_j^r =
𝒪_g_0(e^-(a-1/2)r) if 1 < a < 3/2,
𝒪_g_0((r+1)e^-r) if a = 3/2,
𝒪_g_0(e^-r) if a > 3/2.
The constants involved in the upper bounds are independent of the choice of the admissible frame and of U.
It relies on the fact that one can uniformly bound ξ_j_g_0 if j∈{1,…,2n}, for instance, as an application of Corollary <ref>.
For v a vector field on , the associated normal Jacobi fields Y_v satisfies Y_v = SY_v.
It follows from equation (<ref>) that in an admissible frame, one has
SY_v = (η^0_r(v) + η^0_r(v) )e^r E_0
+ ∑_j=1^2n(η^j_r(v) + 1/2η^j_r(v) )e^r/2E_j.
For r ⩾ 0, consider the pull-back S_r = ℰ_r^*S of the shape operator S through the diffeomorphism ℰ_r →_r.
It is well defined since S leaves stable the tangent bundle of the level hypersurfaces _r.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the and conditions of order a > 1/2.
Then the family (S_r)_r ⩾ 0 satisfies the estimates
S_r - 1/2( + η^0_r ⊗ξ_0^r) =
𝒪_g_0(e^-(a-1/2)r) if 1/2 < a < 3/2,
𝒪_g_0((r+1)e^-r) if a = 3/2,
𝒪_g_0(e^-r) if a > 3/2,
In particular, if a > 1, then S_r r →∞⟶1/2( + η^0 ⊗ξ_0), and one can substitute η^0_r⊗ξ_0^r with η^0 ⊗ξ_0 in estimates (<ref>).
Let v be a vector field on .
It follows from Proposition <ref> and from Corollary <ref> that
SY_v -1/2(Y_v + η^0_r(v)e^rE_0) = 𝒪_g(v_g_0 e^-(a-1)r) if 1/2 < a <3/2,
𝒪_g(v_g_0 (r+1)e^-r/2) if a = 3/2,
𝒪_g(v_g_0 e^-r/2) if a > 3/2,
By the very definition of S_r, ξ_0^r and g_r, it follows that
S_r-1/2( + η^0_r⊗ξ_0^r)_g_r =
𝒪(e^-(a-1)r) if 1/2 < a <3/2,
𝒪((r+1)e^-r/2) if a = 3/2,
𝒪(e^-r/2) if a > 3/2,
Finally, Corollary <ref> implies that
S_r - 1/2( + η^0_r ⊗ξ_0^r)
= 𝒪_g_0(e^-r/2S_r - 1/2( + η^0_r ⊗ξ_0^r) _g_r),
and estimates (<ref>) now follow.
If a > 1, then estimates on η^0-η^0_r_g_0 (Proposition <ref>)
and on ξ_0-ξ_0^r_g_0 (Proposition <ref>), together with the triangle inequality, show that one can replace η^0_r⊗ξ_0^r with η^0⊗ξ_0 in estimates (<ref>).
This concludes the proof.
In the complex hyperbolic space, the shape operator of a geodesic sphere of radius r, with outward unit normal ν, is given by S = coth(r) Id_ℝJν + 1/2coth(r/2) Id_{ν,Jν}^⊥.
Proposition <ref> implies that the local extrinsic geometry of the level hypersurfaces _r is then asymptotic to that of horospheres in the complex hyperbolic space.
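As a short sanity check (a sketch only, using the model formula above and the curvature normalisation implicit in the growth rates e^r and e^{r/2}), the model shape operator converges to the same limit endomorphism as in Proposition <ref>:
\[
\coth(r) = 1 + \mathcal{O}(e^{-2r}),
\qquad
\tfrac{1}{2}\coth\!\left(\tfrac{r}{2}\right) = \tfrac{1}{2} + \mathcal{O}(e^{-r}),
\]
\[
S \;=\; \operatorname{Id}_{\mathbb{R}J\nu} + \tfrac{1}{2}\operatorname{Id}_{\{\nu,J\nu\}^{\perp}} + \mathcal{O}(e^{-r})
\;=\; \tfrac{1}{2}\left(\operatorname{Id} + \eta^{0}\otimes\xi_{0}\right) + \mathcal{O}(e^{-r}),
\]
where, in the model, \eta^{0}\otimes\xi_{0} is the orthogonal projection onto \mathbb{R}J\nu.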
§ THE ALMOST COMPLEX STRUCTURE
This section is dedicated to prove the existence of a natural almost complex structure J_0 on the distribution of hyperplanes H_0 = η^0, obtained as the restriction of a naturally defined tensor φ on .
The ambient almost complex structure J does not leave stable the ambient distribution of hyperplanes {}^⊥.
Consider the orthogonal projection π T M̅∖̅ ̅K̅→ T M̅∖̅ ̅K̅ onto {}^⊥.
Define Φ to be the field of endomorphisms on M̅∖̅ ̅K̅ defined by Φ = π J π.
Since π and J have unit norms, then Φ_g ⩽ 1.
Formally, one has π = - g(,·) ⊗, and Φ then reads Φ = J + g(·,) ⊗ - g(·,)⊗.
Assume that (M,g,J) satisfies the condition of order a > 0.
For any admissible frame {E_0,…,E_2n} and any vector fields X and Y, one has:
* g(Φ X,Φ Y) = g(X,Y) - g(X,)g(Y,) - g(X,)g(Y,),
* Φ(E_0) = 𝒪_g(e^-ar),
* Φ(E_j) - _j = 𝒪_g(e^-ar) if j∈{1,…,2n}.
The first point is a straightforward computation.
To prove the second point, note that Φ() = 0, so that Φ(E_0)_g = Φ(E_0-)_g ⩽E_0-_g.
The result follows from Corollary <ref>.
Finally, by the very definition of Φ, Φ(E_j)=_j - g(E_j,), and the last point follows from Corollary <ref>.
The tensor Φ leaves stable the tangent distribution {,}^⊥.
Therefore, one can pull it back through the family of diffeomorphisms (ℰ_r)_r⩾ 0.
The family of endomorphisms (φ_r)_r ⩾ 0 is defined by φ_r = ℰ_r^*Φ for r ⩾ 0.
Recall that (S_r)_r ⩾ 0 is the family of endomorphisms ℰ_r^*S induced by the shape operator.
Assume that (M,g,J) satisfies the and assumption of order a > 1.
Then the following estimates hold:
* φ_rξ_0^r = 𝒪_g_0(e^-(a-1/2)r).
* φ_r = 𝒪_g_0(1),
* η^0_r∘φ_r = 𝒪_g_0(e^-ar),
* γ_r(φ_r·,φ_r·) - γ_r = 𝒪_g_0(e^-(a-1)r),
* φ_r S_r - S_r φ_r =
𝒪_g_0(e^-(a-1/2)r) if 1 < a < 3/2,
𝒪_g_0((r+1)e^-r) if a = 3/2,
𝒪_g_0(e^-r) if a > 3/2.
We first show the first point.
From Corollary <ref>, there exists c > 0 such that for r ⩾ 0, φ_rξ_0^r_g_0⩽ c Φ (e^rE_0)_g e^-r/2 = cΦ (E_0)_g e^r/2.
The result now follows from Lemma <ref>
Let us now focus on the second point.
Let v be a vector field on .
Corollary <ref> states that there exists c>0 such that φ_rv_g_0⩽ c Φ(Y_v)_g e^-r/2,
for all r ⩾ 0.
The result follows from the fourth point of Lemma <ref>.
For the third point, let v be a vector field on .
In an admissible frame, one has Φ(Y_v) = η^0_r(v) e^r Φ(E_0) + e^r/2∑_j=1^2nη^j_r(v) Φ(E_j).
It then follows that
(η^0_r∘φ_r)(v) = η^0_r(v) g(Φ(E_0),E_0) + e^-r/2∑_j=1^2nη^j_r(v) g(Φ(E_j), E_0).
Notice that Φ has range in {}^⊥, so that g(Φ(E_j), E_0)) = g(Φ(E_j), E_0-) for all j∈{0,…,2n}.
Recall that Φ_g ⩽ 1 and that E_j_g=1 for all j∈{0,…,2n}.
The triangle inequality now yields
η^0_r∘φ_r_g_0⩽ (η^0_r_g_0
+ e^-r/2∑_j=1^n η^j_r_g_0) E_0-_g
for all r ⩾ 0.
The result follows from Corollary <ref> (estimates on E_0-) and from Corollary <ref> (uniform bounds on {η^j_r_g_0}_j ∈{0,…,2n}).
Let us now consider the fourth point.
Let u and v be vector fields on , and fix r ⩾ 0.
By Lemma <ref>, one has
g_r(φ_ru,φ_rv) = g(Φ Y_u,Φ Y_v) = g(Y_u,Y_v) - g(Y_u,)g(Y_v,).
Cauchy-Schwarz inequality now yields
g_r(φ_ru,φ_rv) = g_r(u,v) - e^2rη^0_r(u)η^0_r(v) + 𝒪(Y_u_gY_v_gE_0-_g).
It follows from Corollaries <ref> and <ref>, and from the very definition of γ_r, that
g_r(φ_r·,φ_r·) = e^rγ_r + 𝒪_g_0( e^(2-a)r).
Therefore, e^2r(η^0_r∘φ_r)⊗(η^0_r∘φ_r) + e^r γ_r(φ_r·,φ_r·) = e^r γ_r + 𝒪_g_0(e^(2-a)r).
From the preceding point, one has e^2r(η^0_r∘φ_r)⊗(η^0_r∘φ_r) = 𝒪_g_0(e^(2-2a)r), from which is deduced that γ_r(φ_r·,φ_r·) = γ_r + 𝒪_g_0(e^-(a-1)r)
This concludes the proof of the fourth point.
Finally, let us prove the last point.
Write S_r = S_r - 1/2( + η^0_r ⊗ξ_0^r) + 1/2( + η^0_r ⊗ξ_0^r), for r ⩾ 0.
By the triangle inequality, one has
φ_r S_r - S_r φ_r _g_0 ⩽ 2 φ_r_g_0S_r - 1/2( + η^0_r ⊗ξ_0^r)_g_0
+1/2(η^0_r_g_0φ_rξ_0^r_g_0 + η^0_r∘φ_r_g_0ξ_0^r_g_0).
The result now follows from uniform bounds on η^0_r_g_0 and ξ_0^r_g_0 (by uniform convergence), the estimates on S_r - 1/2( + η^0_r ⊗ξ_0^r) (Proposition <ref>),
and the estimates on φ_r, η^0_r∘φ_r,
and φ_r ξ_0^r, given by the three first points.
We are now able to prove that the family (φ_r)_r ⩾ 0 converges to a continuous field of endomorphisms, provided that a > 1.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the and conditions of order a > 1.
Then there exists a continuous field of endomorphisms φ on such that
φ_r - φ =
𝒪_g_0(e^-(a-1/2)r) if 1 < a < 3/2,
𝒪_g_0((r+1)e^-r) if a = 3/2,
𝒪_g_0(e^-r) if a > 3/2.
In addition, φ satisfies:
* η^0∘φ = 0 and φξ_0 = 0,
* γ(φ·,φ·) = γ,
* φ^2 = - + η^0 ⊗ξ_0 and φ^3 = -φ.
Let us first show the existence of φ.
The proof goes in two steps.
We first derive a differential equation for (φ_r)_r ⩾ 0.
Let X be a vector field on M̅∖̅ ̅K̅.
Then
( J)X = [,JX] - J[,X]
= ((JX) - ∇_JX) - J( X - ∇_X)
= ( J) X + J X - S(JX) - J X + J(SX)
= JSX - SJX + ( J)X.
It follows that J = JS - SJ + J.
Recall that Φ = π J π, where π = - g(,·)⊗ is the orthogonal projection onto {}^⊥.
It is a standard fact that g = 2g(S·,·).
Moreover, S = = 0.
It follows that π = 0, and consequently, that Φ = π (JS - SJ + J) π.
Note that the eigenspaces of the projector π are π = and (π - ) = {}^⊥, which are both left stable by the shape operator S.
Hence, S commutes with π, from which is derived that that Φ = Φ S - S Φ + π ( J) π.
Define ψ_r = ℰ_r^*(π ( J) π), so that one has φ_r = φ_r S_r - S_r φ_r + ψ_r.
A direct application of the assumption and Corollary <ref> yields ψ_r= 𝒪_g_0(e^-(a-1/2)r).
Therefore, it follows from Lemma <ref> that
φ_r =
𝒪_g_0(e^-(a-1/2)r) if 1/2 < a <3/2,
𝒪_g_0((r+1)e^-r) if a = 3/2,
𝒪_g_0(e^-r) if a > 3/2.
Consequently, (φ_r)_r ⩾ 0 uniformly converges to some continuous tensor φ, which satisfies the inequality φ_r - φ_g_0 = ∫_r^∞φ_r_g_0⩽∫_r^∞φ_r_g_0 for all r ⩾ 0.
This implies estimates (<ref>).
Let us now establish the claimed properties satisfied by φ.
The first two points are immediate consequences of Lemma <ref>.
We thus focus on the last claim.
One easily checks that Φ satisfies the equality
Φ^2 = - + g(·,) ⊗ + g(·,) ⊗.
Hence, one has φ_r^2 = - + η^0_r ⊗ξ_0^r + ϵ_r, for all r ⩾ 0, where ϵ_r = ℰ_r^*(g(·, - E_0) ⊗ + g(·,E_0)⊗ ( - E_0)).
As usual, Corollary <ref> yields that
ϵ_r_g_0 = 𝒪(e^r/2E_0-_g) = 𝒪(e^-(a-1/2)r), where the last equality is due to Corollary <ref>.
The first part of the result now follows from the convergence of (η^0_r)_r ⩾ 0 and of (ξ_0^r)_r⩾ 0 when a > 1.
The second part of the claim is a consequence of the first point.
Proposition <ref> implies that when a > 1, (,η^0,φ,ξ_0) is an almost contact manifold (see <cit.> for an introduction to this notion).
In particular, φ induces an almost complex structure on the distribution of hyperplanes H_0 = η^0.
The study conducted in this section finally implies the second of our main Theorems.
Let (M,g,J) be a complete, non-compact almost Hermitian manifold of dimension greater than or equal to 4
Assume that M satisfies the and conditions of order a > 1.
Let η^0 and γ be given by Theorem <ref>, and let φ be defined as in Proposition <ref>.
The restriction J_0= φ|_H_0 of φ to the hyperplane distribution H_0 = η^0 then induces an almost complex structure, and γ^0=γ|_H_0× H_0 is J_0-invariant.
§ HIGHER REGULARITY
This section is dedicated to show that under the stronger conditions and of order a>1, the tensors η^0, γ, and φ defined previously gain in regularity.
As a consequence, we highlight a strictly pseudoconvex CR structure related to the expansion of the metric near infinity.
§.§ Order one estimates
We first provide several estimates that will be useful in the following study.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the condition of order a > 1/2.
Let u and v be vector fields on .
Let V be the parallel transport of v along radial geodesics.
Then ∇_Y_u V = 𝒪_g(u_g_0v_g_0 e^r).
Since V = 0 and [,Y_u]=0, one has (∇_Y_uV) = -R(,Y_u)V.
Hence, Kato's inequality yields | ∇_Y_uV_g | ⩽R_g Y_u_g V_g almost everywhere.
Recall that R_g= 𝒪(1) (Remark <ref>) and that V_g = v_g_0.
Under the condition of order a > 1/2, one has Y_u_g = 𝒪(u_g_0e^r) (Corollary <ref>).
The result follows from a straightforward integration.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the and conditions of order a > 1/2.
Then ∇_Y_u = 𝒪_g(u_g_0e^r).
Write ∇_Y_u = (∇_Y_uJ) + J SY_u.
Then ∇_Y_u_g ⩽ (∇ J_g+ S_g) Y_u_g, and the result follows from Lemma <ref>, the assumption and the estimates of Corollary <ref>.
Assume that (M,g,J) satisfies the and conditions of order a > 1/2.
Then ∇_Y_u() = 𝒪_g(u_g_0e^-(a-1)r).
Since = 0 and ∇_Y_u = SY_u, it follows that
∇_Y_u( (J)) = ∇_Y_u(( J))
= (∇_Y_u( J)) + ( J) ∇_Y_u
= (∇^2_Y_u,J) + (∇_∇_Y_u J) + ( J)∇_Y_u
= (∇^2_Y_u,J) + (∇_SY_uJ) + ( J)SY_u.
The result follows from Corollary <ref> (estimates on SY_u) and from the assumption.
Assume that (M,g,J) satisfies the and conditions of order a > 1/2.
Let π be the orthogonal projection onto {}^⊥.
For u and v vector fields on , one has:
* π((∇_Y_uS)Y_v) = 𝒪_g(u_g_0v_g_0e^3/2r).
* π(∇_Y_uY_v) = 𝒪_g((v_g_0+∇^g_0v_g_0)u_g_0e^3/2r).
We first consider the first point.
By Kato's inequality, and noticing that π = 0, one has the almost everywhere inequality π(∇_Y_uS)Y_v)_g ⩽π( ((∇_Y_uS)Y_u))_g.
The shape operator S satisfies the Riccati equation S = -S^2 - R(,·).
Moreover, one has π S = S π.
Direct computations using the equalities Y_v = SY_v and (SY_v) = -R(,Y_v) now yield
(π ((∇_Y_uS)Y_v))) = π SR(,Y_u)Y_v - π R(,Y_u)SY_v - π R(SY_u,Y_v)
-π R(,Y_v)SY_u - π (∇_Y_uR)(,Y_v) - S π (∇_Y_uS)Y_v
= ℜ - S(π ((∇_Y_uS)Y_v))),
where ℜ contains all the curvature terms.
From this is deduced the almost everywhere inequality (e^-rπ ((∇_Y_uS)Y_v))_g) ⩽ e^-rℜ_g + (S_g-1) (e^-rπ ((∇_Y_uS)Y_v))_g).
After a straightforward integration, Grönwall's Lemma yields
e^-rπ ((∇_Y_uS)Y_v))_g ⩽((∇^g_uS)v_g + ∫_0^r e^-sℜ_g s)exp(∫_0^r (S_g-1) s).
By tensoriality and compactness of , one has (∇^g_uS)v_g = 𝒪(u_g_0v_g_0).
Moreover, Lemma <ref> yields the estimate exp(∫_0^r (S_g-1) s) = 𝒪(1).
To conclude, it suffices to show that ℜ = 𝒪_g(u_g_0v_g_0e^3/2r).
The assumption of order a > 1/2 yields
ℜ = π SR^0(,Y_u)Y_v - π R^0(,Y_u)SY_v - π R^0(SY_u,Y_v)
-π R^0(,Y_v)SY_u + 𝒪_g( u_g_0v_g_0e^-(a-2)r).
A close look at the definition of R^0 (see equation (<ref>)) shows that the leading terms in ℜ_g are of the form cη^0(u)η^j(v)e^3/2r or cη^0(v)η^j(u)e^3/2r for c a constant and j ∈{1,…,2n}.
The result follows.
Let us now show the second point.
Similarly, Kato's inequality yields the almost everywhere inequality
π(∇_Y_uY_v)_g ⩽(π(∇_Y_uY_v))_g.
Straightforward computations, using that π = 0, that π and S commute, and that Y_v = SY_v, now yield the equality (π(∇_Y_uY_v)) = -π R(Y_u,Y_v) + π ((∇_Y_uS)Y_u) + S π (∇_Y_uY_v).
Hence, one has
(e^-rπ(∇_Y_uY_v)_g) ⩽ e^-rπ R(Y_u,Y_v)_g + e^-rπ((∇_Y_uS)Y_v)_g
+ (S_g-1) (e^-rπ(∇_Y_uY_v)_g) a.e.
The rest of the proof goes similarly to that of the first point, using the estimates derived on π((∇_Y_uS)Y_v)_g.
The main difference is that the initial data here is not tensorial in v, but instead is π (∇_uv)_g = ∇^g_0_uv_g_0⩽∇^g_0v_g_0u_g_0.
If one considers the whole vector field ∇_Y_uY_v instead, then one only has the estimates ∇_Y_uY_v_g = 𝒪((v_g_0+∇^gv_g)u_g_0e^2r).
Indeed, the radial component is given by g(∇_Y_uY_v,) = -g(SY_u,Y_v) ≃ -η^0(u)η^0(v)e^2r when η^0(u) and η^0(v) do not vanish.
§.§ Regularity of the admissible frames
We shall now show that under the and conditions of order a > 1, the vector field e_0, defined in Definition <ref>, is actually of class 𝒞^1.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the and conditions of order a > 1.
Then the vector field e_0 is of class 𝒞^1; admissible frames can be chosen to have the same regularity.
It suffices to show that the 1-form β defined in Section <ref> is of class 𝒞^1.
To do so, we shall show that β(v) is a 𝒞^1 function for any 𝒞^1 vector field v.
We prove this later fact by showing that (u(β_r(v)))_r⩾ 0 uniformly converges for any 𝒞^1 vector fields u and v on .
Let u and v be such vector fields, and r ⩾ 0.
Then u(β_r(v)) = Y_u(g(,V)) = ∇_Y_u(g(,V)), where V is the parallel transport of v along radial geodesics.
Since [,Y_u] = 0 and V = 0, one has
(u (β_r(v))) = (∇_Y_u(g(,V))) = ∇_Y_u((g(,V))),
so that (u (β_r(v))) = g(∇_Y_u(()),V) + g((),∇_Y_uV).
It now follows that one has |(u (β_r(v)))| ⩽∇_Y_uV_g()_g + V_g∇_Y_u(())_g.
Recall that S_g = 𝒪(1) (Lemma <ref>), V_g = v_g_0, and Y_u_g = 𝒪(u_g_0e^r) (Corollary <ref>).
It now follows from Lemma <ref>, Lemma <ref>, and the assumption, that
(u (β_r(v))) = 𝒪(u_g_0v_g_0e^-(a-1)r).
Consequently, (u (β_r(v))) uniformly converges for any vector fields u and v.
This concludes the proof.
In what follows, we will need to differentiate expressions involving ∇_Y_uE_j in the radial direction, with Y_u a normal Jacobi field and E_j an element of an admissible frame.
At a first glance, this is a priori justified only if E_j is of class 𝒞^2.
One could prove such regularity by requiring the stronger condition ∇^3 J_g = 𝒪(e^-ar).
It turns out that one needs not assume this last condition, as a consequence of the fact that E_j is solution to the first order linear differential equation E_j=0.
Indeed, let {r,x^1,…,x^2n+1} be Fermi coordinates[That is, {x^1,…,x^2n+1} are coordinates on , and that if (x^1,…,x^2n+1) corresponds to p∈, then (r,x^1,…,x^2n+1) corresponds to ℰ(r,p)∈ M.], and write E_j = ∑_i=1^2n+1E_j^i ∂_i.
Then {E_j^i} are solutions to the ODE (E^i_j)' + ∑_k=1^2n+1E_j^kS_k^i = 0, with (S_k^i) the components of the shape operator S.
As a consequence, one can consider elements of the form (∇_Y_u E_j) even though E_j is only of class 𝒞^1.
In fact, one has (∇_Y_u E_j) = -R(,Y_u)E_j.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the and conditions of order a > 1.
Let u be a vector field on .
Then
∇_Y_u(E_0 - ) = 𝒪_g(u_g_0e^-(a-1)r).
Let u be a vector field on , and {E_0,…,E_2n} be an admissible frame of class 𝒞^1.
Equation (<ref>) yields that
∇_Y_u(E_0-) = -∑_j=0^2n u(β_r(e_j)) E_j + ∑_j=0^2n (δ_0j - β_r(e_j)) ∇_Y_uE_j.
During the proof of Proposition <ref>, we have shown that (β_r)_r ⩾ 0 converges in 𝒞^1 topology.
Hence,
∀ j ∈{0,…,2n}, lim_r →∞ u (β_r(e_j)) = u ( lim_r →∞β_r(e_j)) = u(β(e_j)) = u(δ_0j) = 0.
Therefore, |u(β_r(e_j))| = |∫_r^∞ (u(β_r(e_j)))| ⩽∫_r^∞ | (u(β_r(e_j)))| for j ∈{0,…,2n} and r ⩾ 0.
It follows from equation (<ref>) that u(β_r(e_j)) = 𝒪(u_g_0e^-(a-1)r).
Moreover, by Corollary <ref>, one has |δ_0j-β_r(e_j)| = 𝒪(e^-ar).
Finally, Lemma <ref> yields ∇_Y_uE_j = 𝒪_g(u_ge^r).
The result now follows.
§.§ The contact form and the Carnot metric
We shall now show that if the and conditions of order a>1 are satisfied, then η^0 and γ|_H_0× H_0 are of class 𝒞^1 and that η^0(·,φ·) = γ.
In particular, η^0 is contact.
These results are analogous to <cit.>, although we give slightly different and considerably shorter proofs here.
The main difference is that we prove the 𝒞^1 convergence of elements of the form (η^j_r(v))_r⩾ 0, instead of 𝒞^0 convergence of elements of the form (ℒ_uη^j_r)_r⩾ 0.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the and conditions of order a > 1.
Then η^0 is a contact form of class 𝒞^1.
Moreover, η^0(·,φ·) = γ, and the Reeb vector field of η^0 is ξ_0.
The proof is divided in three parts.
First, we show that η^0 is of class 𝒞^1.
Then we derive an expression for η^0(·,φ·), and deduce that η^0 is contact.
Finally, we show that ξ_0 is the Reeb vector field of η^0.
To show that η^0 is of class 𝒞^1, we show that for any vector field v, the function η^0(v) is of class 𝒞^1.
To do so, we show that for any other vector field u, (u(η^0_r(v)))_r ⩾ 0 uniformly converges on .
Let u and v be vector fields on .
Let f be the function on M̅∖̅ ̅K̅ defined by the expression f= e^r(u(η^0_r(v)) = Y_u(g(Y_v,E_0)) = ∇_Y_u(g(Y_u,E_0) ).
Then f is smooth in the radial direction.
Since [,Y_u]=0 and E_0=0, one has
f = (∇_Y_u ((g(Y_v,E_0))) = ∇_Y_u( (g(Y_v,E_0))) = ∇_Y_u(g( Y_v,E_0)).
Similarly, one has ^2f = ∇_Y_u(g(( Y_v),E_0)).
For Y_v is a Jacobi field, one has the equality ( Y_v) = -R(,Y_v), and it follows that ^2f = -∇_Y_u(R(,Y_v,,E_0)).
Notice that
R(,Y_v,,E_0) = R(,Y_v,,) + R(,Y_v,,E_0-)
= R^0(,Y_v,,) + R(,Y_v,,E_0-)
+ (R-R^0)(,Y_v,,).
One readily checks from the definition of R^0 that R^0(,Y_v,,) = -g(Y_v,), so that R^0(,Y_v,,) = -g(Y_v,E_0) - g(Y_v, - E_0).
Hence, it follows that
^2f - f = g(∇_Y_uY_v, -E_0) + g(Y_v,∇_Y_u(-E_0))
- (∇_Y_uR)(,Y_v,,E_0-) - R(SY_u,Y_v,,E_0-)
- R(,∇_Y_uY_u,,E_0-) - R(,Y_v,SY_u,E_0-)
- R(,Y_v,,∇_Y_u(E_0-)) - (∇_Y_u(R-R^0))(,Y_v,,)
- (R-R^0)(SY_u,Y_v,,) - (R-R^0)(,∇_Y_uY_v,,)
- (R-R^0)(,Y_v,SY_u,) - (R-R^0)(,Y_v,,∇_Y_u).
Note that the radial part of ∇_Y_uY_v plays no role here due to the symmetries of the Riemann curvature tensor, so that one can substitute ∇_Y_uY_v with π(∇_Y_uY_v) in this latter expression.
Recall that one has the following estimates:
* R, S = 𝒪_g(1) (Remark <ref> and Lemma <ref>),
* R-R^0,∇ R, ∇(R-R^0) = 𝒪_g(e^-ar) (condition and Remark <ref>),
* E_0- = 𝒪_g(e^-ar) (Corollary <ref>),
* Y_u,Y_v = 𝒪_g(u_g_0e^r) (Corollary <ref>),
* ∇_Y_u = 𝒪_g(u_g_0e^r) (Lemma <ref>),
* π(∇_Y_uY_v) = 𝒪_g((v_g_0+∇^g_0v_g_0)u_g_0e^3/2r) (Lemma <ref>),
* ∇_Y_u(E_0-) = 𝒪_g(u_g_0e^-(a-1)r) (Corollary <ref>).
Hence, the triangle inequality yields
^2f - f = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^(2-a)r).
Define h = f - f, and notice that h + h = ^2f - f.
It now follows from equation (<ref>) that (e^rh) = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^(3-a)r).
Therefore, one has
e^rh =
𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^(3-a)r) if 1 < a < 3,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0(r+1)) if a=3,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0) if a > 3.
Notice that e^-rh = (e^-rf) = (u(η^0_r(v)) ).
Hence,
(u(η^0_r(v)) ) =
𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-(a-1)r) if 1 < a < 3,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0(r+1)e^-2r) if a=3,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-2r) if a > 3.
Consequently, (u(η^0_r(v)))_r⩾ 0 uniformly converges as r→∞,
and η^0 is then of class 𝒞^1.
We shall now derive an expression for η^0(·,φ·), by computing the limit of η^0_r(·,φ_r·) as r →∞.
Let u and v be vector fields on .
For r ⩾ 0, it holds that
η^0_r(u,φ_rv) = u(η^0_r(φ_rv)) - (φ_rv)(η^0_r(u)) - η^0_r([u,φ_rv])
= e^-r( Y_u g(Φ Y_v,E_0) - (Φ Y_v)g(Y_u,E_0) - g([Y_u,Φ Y_v],E_0) )
= e^-r(g(Φ Y_v,∇_Y_uE_0) - g(Y_u,∇_Φ Y_vE_0)).
On the one hand, it holds that
g(Φ Y_v,∇_Y_uE_0) = g(Φ Y_v,∇_Y_u) + g(Φ Y_v,∇_Y_u(E_0-))
= g(Φ Y_v,JSY_u) + g(Φ Y_v,(∇_Y_uJ))+ g(Φ Y_v,∇_Y_u(E_0-))
= -g(JΦ Y_v,SY_u) + g(Φ Y_v,(∇_Y_uJ))+ g(Φ Y_v,∇_Y_u(E_0-)).
On the other hand, one has
g(Y_u,∇_Φ Y_vE_0) = g(Y_u,∇_Φ Y_v) + g(Y_u,∇_Φ Y_v(E_0-))
= g(Y_u,JSΦ Y_v) + g(Y_u, (∇_Φ Y_vJ)) + g(Y_u,∇_Φ Y_v(E_0-))
= -g(JY_u,SΦ Y_v) + g(Y_u, (∇_Φ Y_vJ)) + g(Y_u,∇_Φ Y_v(E_0-)).
It then follows from the assumption, Corollary <ref> and Corollary <ref> that
η^0_r(u,φ_rv) = e^-r(g(JY_u,SΦ Y_v) - g(JΦ Y_v,SY_u)) + 𝒪(u_g_0v_g_0e^-(a-1)r).
Fix {E_0,…,E_2n} an admissible frame.
From Corollary <ref> and Corollary <ref>, one has the estimate Y_v = η^0(v) e^r + ∑_j=1^2nη^j(v)e^r/2E_j + 𝒪_g(v_g_0e^-(a-1)r).
It now follows from Lemma <ref> that JΦ Y_v = -∑_j=1^2nη^j(v) e^r/2 E_j + 𝒪_g(v_g_0e^-(a-1)r).
Corollary <ref> now yields
g(JΦ Y_v,SY_u) = -e^r/2∑_j=1^2nη^j(v)η^j(u) + 𝒪(u_g_0v_g_0e^-(a-2)r).
Similarly, one shows that
g(JY_u,SΦ Y_v) = e^r/2∑_j=1^2nη^j(u)η^j(v) + 𝒪(u_g_0v_g_0e^-(a-2)r).
Recall the local expression γ = ∑_j=1^2nη^j⊗η^j.
Equations (<ref>), (<ref>) and (<ref>) now yield
η^0_r(u,φ_rv) = γ(u,v) + 𝒪(u_g_0v_g_0e^-(a-1)r).
By uniform convergence of the first derivatives of (η^0_r)_r⩾ 0, it follows that η^0(·,φ·) = γ.
Proposition <ref> hence shows that η^0 is non-degenerate on η^0.
In particular, η^0 is a contact form.
To conclude, let us show that ξ_0 is the Reeb vector field of η^0.
Since η^0(ξ_0) = 1, it remains to show that η^0(ξ_0,v) = 0 for all vector field v tangent to H_0.
Let v be such a vector field.
The image of φ being exactly H_0, there exists a vector field u on such that v = φ u.
By Proposition <ref>, γ is φ-invariant and φξ_0=0.
From the preceding point, η^0(·,φ·) = γ.
Hence, η^0(ξ_0,v) = η^0(ξ_0,φ u) = γ(ξ_0,u) = γ(φξ_0,φ u) = γ(0,φ u) = 0.
This concludes the proof.
Under the assumptions of Theorem <ref>, the distribution H_0 = η^0 is a contact distribution of class 𝒞^1.
The next result shows that under the assumptions of Theorem <ref>, the Carnot metric γ^0 on H_0 is of the same regularity.
The proof is very similar.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the and conditions of order a > 1.
Then γ^0 = γ|_H_0× H_0 is of class 𝒞^1.
Let {E_0,…,E_2n} be an admissible frame of class 𝒞^1 defined on a cone E(_+× U), and fix j∈{1,…,2n}.
Let us first show that η^j is of class 𝒞^1 on the distribution H_0|_U.
To do so, we shall prove that (u(η^j_r(v)))_r ⩾ 0 locally uniformly converges on U for v tangent to H_0|_U and u any vector field on U.
Let u and v be such vector fields, and r ⩾ 0 be fixed.
Let f^j = e^r/2 u(η^j_r(v)) = ∇_Y_u(g(Y_v,E_j)), which is smooth in the radial direction.
Since [,Y_u] = 0 and E_j = 0, one has
^2 f^j = ((∇_Y_u(g(Y_v,E_j)))) = ∇_Y_u g(( Y_v),E_j),
and, for Y_v is a Jacobi field, one has ^2f^j = - ∇_Y_u(R(,Y_v,,E_j)).
One checks from the very definition of R^0 that R^0(,Y_v,,E_j) = -1/4g(Y_v,E_j) - 3/4g(Y_v,)g(E_j,).
Therefore, one has the equality
^2f^j - 1/4f^j = 3/4g(∇_Y_uY_v,)g(E_j,) + 3/4g(Y_v,∇_Y_u)g(E_j,)
+ 3/4g(Y_v,)g(∇_Y_uE_j,) + 3/4g(Y_v,)g(E_j,∇_Y_u)
- ∇_Y_u(R-R^0)(,Y_v,,E_j) - (R-R^0)(SY_u,Y_v,,E_j)
- (R-R^0)(,∇_Y_uY_v,,E_j) - (R-R^0)(,Y_v,SY_u,E_j)
- (R-R^0)(,Y_v,,∇_Y_uE_j).
As in the proof of Theorem <ref>, the radial component of ∇_Y_uY_v plays no role due to the symmetries of R, so that one can substitute this term with π(∇_Y_uY_v).
Moreover, g(E_j,) = β_r(e_j), where (β_r)_r ⩾ 0 is the family defined in Section <ref>.
Recall that one has the following estimates:
* R, S = 𝒪_g(1) (Remark <ref> and Lemma <ref>),
* R-R^0,∇ (R-R^0) = 𝒪_g(e^-ar), (condition and Remark <ref>),
* β_r(e_j) = 𝒪(e^-ar) (Corollary <ref>),
* Y_u = 𝒪_g(u_g_0e^r) and Y_v = 𝒪_g(v_g_0e^r/2) (Corollary <ref>),
* ∇_Y_uE_j = 𝒪_g(u_g_0e^r) (Lemma <ref>),
* ∇_Y_u = 𝒪_g(u_g_0e^r) (Lemma <ref>),
* π(∇_Y_uY_v) = 𝒪_g((∇^g_0u_g_0 + u_g_0)v_g_0e^3/2r)
(Lemma <ref>).
It follows from the triangle inequality that ^2 f^j - 1/4f^j = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-(a-3/2)r).
Let h^j be the function defined by h^j = f^j - 1/2f^j.
Then h^j + 1/2h^j = ^2f^j - 1/4f^j, from which is derived that (e^r/2h^j) = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-(a-2)r).
A straightforward integration now yields
e^r/2h^j =
𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^(2-a)r) if 1 < a < 2,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0(r+1)) if a = 2,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0) if a > 2.
Notice that e^-r/2h^j = (e^-r/2f^j) = ( u(η^j_r(v))), from which is deduced that
( u (η^j_r(v))) =
𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-(a-1)r) if 1 < a < 2,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0(r+1)e^-r) if a = 2,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-r) if a > 2.
In any case, ( u(η^j_r(v)))_r⩾ 0 locally uniformly converges.
As a consequence, η^j|_H_0|_U is of class 𝒞^1.
We immediately deduce from the local expression γ = ∑_j=1^2nη^j⊗η^j that γ^0=γ|_H_0× H_0 is of class 𝒞^1.
This concludes the proof.
With the stronger assumption a > 3/2, the same proof shows that for j∈{1,…,2n}, η^j is of class 𝒞^1 in all directions, and so is γ.
Indeed, in this case, on has to consider the estimate Y_v = 𝒪_g(v_g_0e^r) instead.
§.§ The almost complex structure
We shall now show that the almost complex structure J_0 defined on the 𝒞^1 distribution H_0 is of the same regularity, and that it is formally integrable.
We first remark that the local vector fields {ξ_1,…,ξ_2n} are of class 𝒞^1, although the Reeb vector field ξ_0 might only be continuous.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that (M,g,J) satisfies the and conditions of order a > 1.
Let {η^0,…,η^2n} be the local coframe associated to any admissible frame {E_0,…,E_2n}.
Let {ξ_0,ξ_1,…,ξ_2n} be its dual frame.
Then for j∈{1,…,2n}, ξ_j is a vector field of class 𝒞^1.
Throughout the proof of Theorem <ref>, we have shown that {η^1,…,η^2n} is a 𝒞^1 trivialisation of the 𝒞^1 vector bundle (H_0,).
Consequently, {ξ_1,…,ξ_2n} is a 𝒞^1 trivialisation of the vector bundle H_0.
We now show that under the condition of order a > 0, admissible frames can almost be chosen to be J-frames, in the following sense.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, and with essential subset K.
Assume that it satisfies the condition of order a > 0.
Then there exists an admissible frame {E_0,…,E_2n} such that
∀ j ∈{1,…,n}, _2j-1 - E_2j = 𝒪_g(e^-ar).
Let U⊂ be an open domain on which H_0 is trivialisable.
Let e_1 be a unit section of H_0|_U of class 𝒞^1, and let E_1 be its parallel transport along radial geodesics.
Consider the family of 1-forms β^1_r H_0|_U → defined by β^1_r(v) = g(V, _1)|__r,
where V is the parallel transport of v along radial geodesics.
The same study than that conducted for the proofs of Lemma <ref> and Proposition <ref> shows that under the condition of order a >1, there exists a nowhere vanishing 1-form β^1 on U, which is of class 𝒞^1, such that β^1_r - β^1_g_0 =𝒪(e^-ar).
Let e_2 be the unique 𝒞^1 section of H_0|_U such that e_2 ⊥^g_0β^1, e_2_g_0 = 1 and β^1(e_2) > 0.
Define E_2 to be its parallel transport along radial geodesics.
Similarly to Corollary <ref>, one shows that E_2-_1 = 𝒪_g(e^-ar).
The rest of the proof follows by induction.
We refer to such an admissible frame as a J-admissible frame.
We are now able to show the last Theorem of this section, exhibiting a strictly pseudoconvex CR structure at infinity.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at last 4, with essential subset K.
Assume that it satisfies the and condition of order a > 1.
Let J_0 be the almost complex structure on H_0 induced by φ.
Then J_0 is of class 𝒞^1, and is formally integrable.
In particular, (,H_0,J_0) is a strictly pseudoconvex CR manifold of class 𝒞^1.
Let {E_0,…,E_2n} be a J-admissible frame of class 𝒞^1, and {η^1,…,η^2n} and {ξ_1,…,ξ_2n} be the associated 𝒞^1 coframe and frame.
Then {,E_0,…,E_2n} is an orthonormal frame.
Since Φ() = Φ()= 0, one has Φ = ∑_j=0^2n g(·,E_j)⊗Φ(E_j).
It then follows from Lemma <ref> and Lemma <ref> that Φ = ∑_j=1^n g(·,E_2j-1)⊗ E_2j - g(·,E_2j)⊗ E_2j-1 + 𝒪_g(e^-ar).
Corollary <ref> now yields
φ_r = ∑_j=1^nη^2j-1_r⊗ξ_2j^r - η^2j_r⊗ξ_2j-1^r + 𝒪_g_0(e^-(a-1/2)r).
Taking the limit as r→∞ shows that φ = ∑_j=1^n η^2j-1⊗ξ_2j - η^2j⊗ξ_2j-1.
Therefore, the restriction J_0= φ|_H_0 has at least the same regularity as {η^1|_H_0,…,η^2n|_H_0} and {ξ_1,…,ξ_2n}.
It follows from Theorem <ref> and Lemma <ref> that J_0 is of class 𝒞^1.
Let us now show that J_0 is formally integrable.
Recall that γ|_H_0× H_0 is J_0-invariant, so that by <cit.>, it suffices to show that N_φ|_H_0× H_0 = η^0|_H_0× H_0⊗ξ_0,
where N_A stands for the Nijenhuis tensor of the field of endomorphisms A, defined by N_A(X,Y) = -A^2[X,Y] - [A X,AY] + A[A X,Y] + A[X,A Y].
Let u and v be any vector fields on .
Using the fact that ∇ is torsion-free, one first obtains N_Φ(Y_u,Y_v) = Φ(∇_Y_uΦ)Y_v - (∇_Φ Y_uΦ) Y_v - Φ(∇_Y_vΦ)Y_u + (∇_Φ Y_vΦ) Y_u.
Recall that Φ = J - g(·,)⊗ + g(·,)⊗.
Since ∇ g = 0, ∇ = S, Φ() = Φ()=0 and Y_u,Y_v ⊥, one has
Φ(∇_Y_uΦ)Y_v = g(Y_v,)Φ(SY_u) + Φ(∇_Y_u J)Y_v,
(∇_Φ Y_uΦ)Y_v = -g(Y_v,SΦ Y_u) + g(Y_v,JSΦ Y_u) + g(Y_v,)SΦ Y_u
+(∇_Φ Y_uJ)Y_v - g(Y_v,(∇_Φ Y_uJ)),
Φ(∇_Y_vΦ)Y_u = g(Y_u,)Φ(SY_v) + Φ(∇_Y_v J)Y_u, and
(∇_Φ Y_vΦ)Y_u = -g(Y_u,SΦ Y_v) + g(Y_u,JSΦ Y_v) + g(Y_u,)SΦ Y_v
+ (∇_Φ Y_vJ)Y_u - g(Y_u,(∇_Φ Y_vJ)).
Recall that Φ takes values in the distribution {}^⊥, which is involutive as the tangent field to the foliation (_r)_r ⩾ 0 of M̅∖̅ ̅K̅.
The definition of the Nijenhuis tensor then shows that N_Φ has range in {}^⊥.
Hence, the terms in the radial direction cancel out each others, and the remaining terms yield
N_ϕ(Y_u,Y_v) = (g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v))
+ g(Y_v,)(Φ S Y_u - S Φ Y_u) - g(Y_u,)(Φ S Y_v - S Φ Y_v)
+ Φ((∇_Y_uJ)Y_v - (∇_Y_vJ)Y_u) - π((∇_Φ Y_uJ)Y_v) + π((∇_Φ Y_vJ)Y_u)
= (g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v))E_0
+ g(Y_v,E_0)(Φ S Y_u - S Φ Y_u) - g(Y_u,E_0)(Φ S Y_v - S Φ Y_v)
+ (g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v))(-E_0)
+ g(Y_v,-E_0)(Φ S Y_u - S Φ Y_u) - g(Y_u,-E_0)(Φ S Y_v - S Φ Y_v)
+ Φ((∇_Y_uJ)Y_v - (∇_Y_vJ)Y_u) - π((∇_Φ Y_uJ)Y_v) + π((∇_Φ Y_vJ)Y_u),
where π is the orthogonal projection onto {}^⊥.
From now, and until the rest of the proof, we assume that u and v are tangent to H_0.
Let r ⩾ 0, and note that N_φ_r = ℰ_r^* N_Φ.
The condition,
the uniform bound on S_g (Lemma <ref>),
estimates on E_0- (Corollary <ref>),
estimates on Y_u and Y_v (Corollary <ref>),
comparison between g_0 and g_r (Corollary <ref>),
and estimates on φ_r S_r - S_r φ_r (Lemma <ref>),
now yield the existence of α_1 > 0, depending on a, such that N_φ_r(u,v) = e^-r(g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v))ξ_0^r + 𝒪_g_0(u_g_0v_g_0e^-α_1 r).
Similar calculations to the ones conducted to derive an expression for η^0_r(u,φ_rv) (see the proof of Theorem <ref>) show that there exists α_2 > 0 depending on a with
e^-r(g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v)) = η^0(u,v) + 𝒪(u_g_0v_g_0e^-α_2 r).
The 𝒞^1 convergence of (φ_r|_H_0)_r ⩾ 0 to φ|_H_0, and the 𝒞^0 convergence of (ξ_0^r)_r ⩾ 0 to ξ_0 finally imply that N_φ|_H_0 × H_0 = lim_r→∞ N_φ_r|_H_0 × H_0 = η^0|_H_0× H_0⊗ξ_0.
Consequently, J_0 is formally integrable.
The associated Levi-form η^0|_H_0× H_0(·,J_0·) coincides with γ|_H_0× H_0, and is thus positive definite.
Ultimately, (,H_0,J_0) is a strictly pseudoconvex CR manifold, which concludes the proof.
If M has dimension 4, then J_0 is an almost complex structure of class 𝒞^1 defined on a 2-dimensional vector bundle.
Its integrability is automatic in this specific case.
Similarly to Remark <ref>, under the stronger assumption a > 3/2, one shows that φ is of class 𝒞^1 in all directions.
§ THE COMPACTIFICATION
We conclude this paper by showing our main Theorem.
We first give a construction for M̅.
Fix K an essential subset and E its normal exponential map.
Let M(∞) be the visual boundary of (M,g), which is the set of equivalent classes [σ] of untrapped unit speed geodesic rays σ, where two rays σ_1 and σ_2 are equivalent if and only if the function t⩾ 0 ↦ d_g(σ_1(t),σ_2(t)) is bounded.
By <cit.>, is in bijection with M(∞) by the map p ↦ [E(·,p)].
Define M̅ = M ∪ M(∞).
The following map
[ ℰ̅ [0,1) × ⟶ M̅∖ K; (ρ, p) ⟼ ℰ(-lnρ, p) ∈ M∖ K if ρ > 0,
[ℰ(·,p)] ∈ M(∞) if ρ = 0, ]
is thus a bijection.
We endow M̅ with the structure of a compact manifold with boundary through this latter bijection.
This identifies M with the interior of M̅.
Note that if ρ > 0, then r = -lnρ is the distance to K for g in M.
A compactly supported modification of ρ in a neighbourhood of K in M provides a smooth defining function for the boundary ∂M̅ = M(∞).
By abuse of notation, we still denote it ρ.
Let η^0 be the contact form and γ be the Carnot metric given by Theorem <ref>.
Let H_0 be the associated contact distribution, and let J_0 be the integrable almost complex structure on H_0 given by Theorem <ref>.
We see these objects as defined on ∂M̅ through the diffeomorphism E̅(0,·) {0}×→∂M̅.
Then (∂M̅,H_0,J_0) is a strictly pseudoconvex CR manifold of class 𝒞^1 by Theorem <ref>.
Theorem <ref> and Remark <ref> show that the metric g has the desired asymptotic expansion (<ref>) near the boundary ∂M̅ = ρ^-1({0}).
Let us show that H_0 and J_0 are induced by a continuous ambient almost complex structure J̅.
To that end, we show that J extends continuously to the boundary.
Let {E_0,…,E_2n} be a J-admissible frame on a cone E(_+× U), and consider the frame {-∂_ρ, ξ̅_0,…,ξ̅_2n} on E̅((0,1)× U) defined by ξ̅_0 = E̅^*(ρ^-1E_0) and ξ̅_j = E̅^*(ρ^-1/2E_j) for j∈{1,…,2n}.
Notice that -∂_ρ = e^r on M∖ K.
Proposition <ref> and Remark <ref> show that {ξ̅_0,…,ξ̅_2n} extends continuously on the boundary E̅({0}× U), with limit {ξ_0,…,ξ_2n}.
The tangent bundle of M̅ at the boundary splits as TM̅|_∂M̅ = ∂_ρ⊕ T∂M̅ =∂_ρ⊕ξ_0 ⊕ H_0.
From the very definition of a J-admissible frame, one has
J(e^r ) - e^r E_0, J(e^r E_0) + e^r = 𝒪_g(e^-(a-1)r),
J(e^r/2E_2j-1) - e^r/2E_2j, J(e^r/2E_2j) + e^r/2E_2j-1 = 𝒪_g(e^-(a-1/2)r), j∈{1,…, n}.
It follows that in the continuous frame
{-∂_ρ,ξ̅_0,…,ξ̅_2n},
the matrix of J reads
([ 0 -1         ;
   1  0         ;
        ⋱       ;
          0 -1  ;
          1  0 ])
+
([ 𝒪(ρ^a)     𝒪(ρ^a+1/2) ;
   𝒪(ρ^a-1/2) 𝒪(ρ^a)    ]),
where the top left block is of size 2× 2 and the bottom right block is of size 2n × 2n.
Hence, J extends uniquely as a continuous almost complex structure J̅ up to boundary.
In addition, J̅ satisfies
J̅(-∂_ρ) = ξ_0, J̅ξ_0 = ∂_ρ, J̅ξ_2j-1 = ξ_2j, and J̅ξ_2j = -ξ_2j-1, j∈{1,…,2n}.
It follows that J̅|_H_0 = J_0, and that H_0 = (T∂M̅)∩(J̅T∂M̅).
This concludes the proof.
Under the stronger assumption that a > 3/2, one can show that J̅ is of class 𝒞^1 up to the boundary in all directions (see Remark <ref>).
When (M,g,J) is Kähler, (that is, if ∇ J = 0), then (M̅,J̅) is a compact complex manifold with strictly pseudoconvex CR boundary.
|
http://arxiv.org/abs/2307.04945v1 | 20230711002121 | What do LLMs need to Synthesize Correct Router Configurations? | ["Rajdeep Mondal", "Alan Tang", "Ryan Beckett", "Todd Millstein", "George Varghese"] | cs.NI | ["cs.NI", "cs.PL"] |
Submitted to HotNets 2023
What do LLMs need to Synthesize Correct Router Configurations?
Rajdeep Mondal
Alan Tang
Ryan Beckett
Todd Millstein
George Varghese
October 2023
We investigate whether Large Language Models (e.g., GPT-4) can synthesize correct router configurations with reduced manual effort. We find GPT-4 works very badly by itself, producing promising draft configurations but with egregious errors in topology, syntax, and semantics. Our strategy, that we call Verified Prompt Programming, is to combine GPT-4 with verifiers, and use localized feedback from the verifier to automatically correct errors.
Verification requires a specification and actionable localized feedback to be effective. We show results for two use cases: translating from Cisco to Juniper configurations on a single router, and implementing no-transit policy on multiple routers.
While human input is still required, if we define the leverage as the ratio of the number of automated prompts to the number of human prompts, our experiments show a leverage of 10X for Juniper translation, and 6X for implementing no-transit policy, ending with
verified configurations.
§ INTRODUCTION
The limitations of his knowledge were as startling as its profundity – Hardy on Ramanujam <cit.>
While GPT-4 and other language models have shown great success in some domains (e.g., writing poems, passing the LSAT), they have been shown to have issues in other domains (e.g., math, word puzzles) <cit.>. Language models have had some success in helping users write sequential programs in systems like CoPilot <cit.> and Jigsaw <cit.>. Our paper examines GPT-4's ability to write router configuration files, traditionally written by humans, that help tune routes and forwarding decisions and are critical for network operation. Our early experiments show that GPT-4 by itself is an
“idiot-savant", capable of brilliance but also making simple errors that an operator would be fired for making.
Critics have derided large language models (LLMs) as mere “stochastic parrots” <cit.>, because they produce text (say of a program) syntactically by predicting the next word based on a statistical model derived by training on a vast corpus of text found on the Internet. Our broader goal beyond synthesizing configs is to see whether LLMs can be combined with other programs (via APIs) to come closer to a “stochastic owl” that understands program semantics.
A plausible way to introduce semantics is to pair a LLM with an automatic verifier such as a SAT solver or a model checker. But verification is not a panacea. First, a verifier cannot prove correctness without a specification. In practice,
specifications are incomplete, so not all solutions are in fact acceptable to the user.
Second, for the verifier to automatically (with minimal human aid) interact with the LLM, the verifier must provide actionable feedback. We found it was easier for the LLM to correct itself using feedback from modular verification of components of a network (individual routers <cit.> or even route maps within a router <cit.>), rather than the network as a whole.
Figure <ref> shows the traditional method of pair programming (PP), embodied in systems like GitHub CoPilot <cit.>, where a human and an AI work together to author a program. In pair programming, the AI and the human form a tuple (A, H) and the human H manually checks for correctness of the output of the AI A and then manually issues correction prompts to A as shown in the figure. Such manual initial prompting and subsequent manual correction is often called prompt engineering.
Figure <ref> shows our alternate vision. In what we call Verified Prompt Programming (VPP), the AI, the human, and a verification suite (V) form a triple (A, H, V). The verification suite checks for correctness and automatically issues localized corrections.
V may abandon automatic correction after some number of trials, and the human must still correct manually. However, our hypothesis is that human effort is reduced as the output grows “closer” to a correct program.
Notice that there is a fast inner loop between V and A, where verifier results are automatically fed back to GPT-4. Since verifier feedback is often cryptic, we use simple code that we call a humanizer that converts the feedback to natural language prompts that are given to GPT-4. When V either determines the final configuration is correct or a time bound elapses, V sends the output back to the user as part of the slow manual loop. We examine a “reduced work hypothesis": that the work in the manual loop in Figure <ref> is significantly less than the manual work in Figure <ref>.
To quantify reduced human effort we introduce a simple measure that may be useful in other VPP contexts.
Define leverage as the ratio L of the number of automated prompts in Figure <ref> to the number of human prompts. Leverage measures the effect of the verifier suite, the
potential improvement in going from (A, H) to (A, V, H), keeping the language model A and the human H the same.
The reader may think the real leverage is the improvement from H to (A, H), or from H to (A, V, H). But this depends on the capability of the human H and is hard to make uniform or repeatable. Given how error-prone
(A, H) is for configurations, we find it more natural to measure the improvement caused by VPP. If a future LLM, say GPT-6, produces near-perfect configurations, leverage will decrease as there is less need
for automatic correction. Our definition also assumes every automatic correction in Figure <ref>
would otherwise be done by a human in Figure <ref>.
The reduced work hypothesis is that the leverage L > 1 is high. Even if the leverage is low (say 1), since it is crucial that router configurations be correct, combining with a verifier seems critical. We were happy to find that in both use cases we did end with verified configurations via GPT-4: this was not obvious at the outset.
This vision and hypothesis extends beyond synthesizing configs to more general programs. Prompt programming (as opposed to prompt engineering) also reflects the use of APIs and automatically generated feedback prompts that may be more generally useful. However, network configs are a simple enough domain to experiment with. Further, there exist config verifiers (e.g., Campion <cit.> and Lightyear <cit.>) that provide actionable localized feedback.
For the rest of this paper, we examine the reduced manual work hypothesis and measure leverage for two use cases: translating a config on a single router from Cisco to Juniper syntax, and implementing a simple policy (“no transit") on a network of 7 routers. Section <ref> describes the system organization of a potential system we call . Section <ref> describes experiments with Cisco to Juniper translation, while Section <ref> describes implementing no-transit on multiple routers. Section <ref> compares our ideas to previous work and Section <ref> describes lessons learned.
§ SYSTEM ORGANIZATION
Figure <ref> is a refinement of the more general Verified Prompt Programming (VPP) vision of Figure <ref> that we call . We emphasize we have not built . While we use GPT-4, we have not been able to access the APIs, and so manually simulated the API calls with prompts to ChatGPT. Our goal is not to demonstrate a working system but instead to explore GPT-4's ability to author configurations, as in the “Sparks of AGI" paper <cit.>.
The verification suite shown in Figure <ref> consists minimally of two verifiers, a syntax verifier (we used Batfish <cit.>) and a semantics verifier (we used different ones depending on the use case). For our second use case, we used a third verifier, a topology verifier (that we wrote in Python) as we found that GPT-4 sometimes missed announcing routes to neighbors. The user provides a precise natural language description of the context (topology, routers, interfaces) and the desired task (e.g. the Cisco config and a request to translate it to Juniper). GPT-4 output is
fed first to Batfish to check for syntax errors. The system then sends GPT-4 feedback about erroneous lines, “humanized" in natural language (see Table <ref> for examples). The boxes labelled H in Figure <ref> correspond to the humanizer in Figure <ref>.
If all syntax errors are corrected (if too many syntax correction attempts occur, the system punts to the user), the output is passed to the semantics verifier. For our first use case, we use Campion <cit.> as a verifier. For our second use case we use Batfish's symbolic route map analysis as the verifier, asking it to verify local policies that together ensure the desired global policy, as in Lightyear <cit.>. Once again, the semantic verifier feedback is passed back, suitably humanized, to GPT-4. We found that GPT-4 would sometimes correct a semantic error while introducing a new syntax error, in which case we had to return to the syntax verifier. When the semantic verifier attests to a correct config or too many correction attempts transpire, the system returns to the human.
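A minimal sketch of this inner loop is given below. It is illustrative only: the function names (the syntax and semantic checks, humanize, ask_llm) and the retry budget are placeholders we introduce for the example, not an actual implementation or API of the system.

MAX_TRIES = 5  # assumed retry budget before punting back to the human

def verified_prompt_loop(task_prompt, syntax_check, semantic_check, humanize, ask_llm):
    """Drive GPT-4 with automatically generated correction prompts.

    syntax_check/semantic_check return a (possibly empty) list of findings;
    humanize turns one finding into a natural-language prompt; ask_llm sends
    a prompt to the model and returns the full updated configuration.
    """
    config = ask_llm(task_prompt)
    automated_prompts = 0
    for _ in range(MAX_TRIES):
        # Syntax is re-checked first on every pass, since fixing a semantic
        # error sometimes reintroduces a syntax error.
        findings = syntax_check(config) or semantic_check(config)
        if not findings:
            return config, automated_prompts    # verified: hand back to the user
        config = ask_llm(humanize(findings[0]))  # correct one localized error
        automated_prompts += 1
    return config, automated_prompts             # give up: the manual loop takes over

The leverage of a session is then the number of automated prompts issued by this loop divided by the number of prompts the human still has to supply.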
When the system works with multiple routers, we used another module called a “Modularizer" (Figure <ref>). For network configs, the idea is that we start with a precise machine-readable (we use JSON) description of the “modules", which in our case is the topology and the connections. The Modularizer outputs a sequence of Natural Language Prompts that describes the topology to GPT-4 (e.g., Router R1 is connected to Router R2 via interface I1 at R1 and I2 at R2). The Modularizer can also take a general specification of local policies (e.g. edge routers add a specific community on ingress) and output a specific local specification for each router for the semantic verifier. The Composer puts back the
pieces (in our case in a folder for Batfish).
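For illustration, the topology-to-prompt step of the Modularizer might look like the following sketch; the JSON schema (a list of links with a router and interface at each end) and the function name are assumptions we make for the example, not the exact format used.

import json

def topology_prompts(topology_json):
    """Turn a machine-readable topology into natural-language prompts for GPT-4.

    Assumed schema: {"links": [{"r1": "R1", "i1": "I1", "r2": "R2", "i2": "I2"}, ...]}
    """
    topo = json.loads(topology_json)
    prompts = []
    for link in topo["links"]:
        prompts.append(
            f"Router {link['r1']} is connected to router {link['r2']} via "
            f"interface {link['i1']} at {link['r1']} and interface {link['i2']} at {link['r2']}."
        )
    return prompts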
The modularizer follows the prompt engineering paradigm "Give the Model Time to
Think" <cit.>, which suggests breaking a complex prompt into simpler sub-prompts. Exploiting modularity is a way to do so for program synthesis. A second technique we find useful is what
is called single shot prompting <cit.>. We start each chat with a set of initial instruction
prompts (IIP) (Figure <ref>) loaded from a database for avoiding common mistakes. The IIP database can be built and added by experts over time. The I/O examples in Jigsaw <cit.> are an IIP, but our IIP contains instructions not examples.
§ CISCO TO JUNIPER TRANSLATION
We translate a Cisco configuration into an equivalent Juniper one using Verified Prompt Programming. Batfish <cit.> is used to identify syntax errors. Campion <cit.> is used to detect and localize semantic differences that are used to refine the result. We show examples of the issues encountered, and discuss successes and limitations of the approach.
§.§ Method
First, we provide the Cisco configuration, and the prompt: "Translate the configuration into an equivalent Juniper configuration." GPT-4 will produce a translation into Junos format that typically contains several errors and differences. We try to rectify these errors iteratively. In each iteration, we use Batfish <cit.> and Campion <cit.> to detect any errors or differences, and then use a simple script to produce a prompt for GPT-4 (the humanizer H in Figure <ref>) that tries to fix one such error. After GPT-4 attempts to resolve the issue, we ask it to print the entire configuration and check the result using the verification tools again. For our experiment we focus exclusively on behavior related to routing and forwarding, ignoring potentially important features such as NTP servers.
To design the humanizer (i.e., automatically generate a prompt informing GPT-4 of the errors present), we distinguish four classes of configuration errors:
Syntax errors: Batfish produces parse warnings identifying relevant lines that do not use valid Juniper syntax.
Structural mismatch: This is when a component, connection, or named policy is present in the original configuration but not in the translation (or is present in the translation but not the original). For example, if the original configuration defined a BGP neighbor but there is no corresponding neighbor in the translation, there would be a mismatch in the routing connections. Similarly, if there are corresponding BGP neighbor definitions in both configurations, but one configuration has an import policy defined while the other does not, that would be a mismatch in the named policies present. Campion is able to detect this, and identify the missing or extra items.
Attribute differences: This is when a numerical attribute has a different value between the two configurations. An example is OSPF link cost difference between two corresponding interfaces. Campion detects these and prints the attributes for corresponding components.
Policy behavior differences: This is when a route map or access control list has a semantic difference. Route maps are used to filter incoming or outgoing route advertisements, so a difference would mean that that there are some route advertisements that are allowed by one router but not allowed by the other. Campion is able to detect these and output the relevant policy names, prefixes, and lines for these differences.
The distinction among errors helps for two reasons. First, syntax errors and structural mismatches have to be handled earlier since they can mask attribute differences and policy behavior differences. Second, different types of errors require different humanized prompts, while errors of the same type can reuse similar prompts. Each type of error can be summarized with a formulaic prompt with some fields inserted based on the error reported by Batfish or Campion.
Table <ref> shows the formulas and examples of generated prompts. Batfish parse errors and warnings can be reused as prompts for syntax errors. Prompts for structural mismatches and attribute differences are easily generated from the relevant components and attributes. Policy behavior differences are more difficult since it is not always clear how to describe the affected input space that is treated differently. We opt for the approach of giving an example prefix.
§.§ Experience and Results
We tried translating a Cisco configuration from the Batfish examples <cit.> into Juniper format. This configuration was short enough to fit within GPT-4 text input limits, but used non-trivial features including BGP, OSPF, prefix lists, and route maps. GPT-4's synthesized Juniper router configuration had several errors. In many cases, when an automatically generated prompt, similar to those in Table <ref>, is provided to GPT-4, it will produce a response fixing the issue. In some cases, GPT-4 is unable to edit the translation correctly, either applying no change or applying an erroneous one. This often requires manual intervention via more specific prompts in order to fix. Another problem is that GPT-4 can fix one error, but introduce new errors that were not previously there. Sometimes it even reintroduces errors that were previously fixed! However, we were able to reach a point where Campion and Batfish no longer produced errors.
Leverage: The entire cycle of prompts was around 2 human prompts and 20 automated prompts, for a leverage of 10X. Some of the 20 automatic prompt correction cycles included minor cycles for syntax correction not just at the start but also
after correcting semantic errors. To be clear, we “simulated” each API call by feeding our automatically generated prompts manually to GPT-4.
Table <ref> shows errors in the translation at some point and whether GPT-4 was able to fix them using an automatically generated prompt. In more detail:
Missing BGP local-as attribute: The translated BGP neighbor declarations did not include a local AS attribute. We label this a syntax error since it produces a parse warning.
Missing/extra BGP routing policy: An import or export policy is used for a BGP neighbor in only one configuration.
Different OSPF link attributes: OSPF links have a number of attributes, and the translation sometimes contains differences in link cost or passive interface settings.
Setting wrong BGP MED value: The translation of one BGP routing policy did not update the BGP MED value. This was caused by an error in translating one of the route map clauses from the original Cisco configuration.
Different Redistribution behavior into BGP: Cisco and Juniper formats handle route redistribution into BGP differently. Juniper typically does this using the same routing policies that control importing and exporting BGP routes while Cisco configurations set a separate route map for route redistribution. In our case, Campion detected that the Juniper configuration was redistributing some routes that the Cisco configuration did not. This could be fixed by adding a "from bgp" condition to a number of locations in the policy. Unlike the previously described errors, GPT-4 was unable to fix this when given the automatically generated prompt. Instead it usually does nothing when asked to fix the error. However, it was able to fix the problem when asked more directly to add "from bgp" conditions to routing policies.
BGP prefix list issues: Another subtle issue occurred when translating prefix lists. In the original Cisco configuration, a prefix list was defined to match prefixes with length 24 or greater where the first 24 bits matched 1.2.3.4. In Cisco this is done with the command:
ip prefix-list our-networks seq 5 permit 1.2.3.0/24 ge 24
and it was applied with the definition:
route-map from_customer deny 100
match ip address prefix-list private-ips
The noteworthy part is the "ge 24" which says to match prefixes with length 24 or greater. There is no equivalent of this in defining prefix lists in Juniper, but for our use case, there are at least two methods of getting similar behavior in Juniper with different syntax. When GPT-4 is asked to translate the configuration, it often does not translate the "ge 24" part correctly, often just omitting it, so the space of prefixes matched will differ in the translation. When asked to fix this problem, it sometimes generates configurations with incorrect syntax. For example, it can output the following:
prefix-list our-networks { 1.2.3.0/24-32; }
which is not valid Juniper syntax. However, after informing it of the error, it does eventually find a correct translation.
§ GLOBAL POLICIES VIA LOCAL SYNTHESIS
Next, we used GPT-4 to generate router configs for a given network topology based on local policies for each router, inspired by Lightyear <cit.>, which does control plane verification by
verifying local invariants. We limited our scope to BGP.
For semantic correctness, we use two new modules. The first is a 'topology' verifier which checks whether the config of a particular router follows the defined topology. It checks whether GPT-4 sets up all interfaces, declares BGP neighbors and announces networks correctly. Second, we run Batfish to check local policies defined in the prompts; the outputs are used to refine the result.
§.§ Method
We begin by specifying the task to GPT-4 in an initial prompt using a couple of sentences. The intention is to influence the LLM to start `thinking' in a certain fashion. Our goal is to make the network follow the no-transit policy, under which no two ISPs should be able to reach each other. However, all ISPs should be able to reach the CUSTOMER and vice versa.
It is difficult to write a natural language description of the topology, a task prone to human error.
We wrote an automated script that generates text given the topology as input. In our experiments, we limited our scope to star networks where one router would be attached to a CUSTOMER IP, while the other routers are connected to different ISPs (Figure <ref>). All the ISP routers are directly connected to the first router. The "network generator" therefore only needs the number of routers as input. It has two outputs: 1) a textual description and 2) a JSON dictionary for the entire network topology. The textual description is used as a prompt, while the JSON dictionary is used later to check whether the generated configs match the topology.
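A minimal sketch of such a network generator is shown below; the addressing scheme, prefixes, and field names are illustrative assumptions, not the exact script we used.
def generate_star(n_routers: int):
    """Return a textual description (prompt) and a JSON-style topology dictionary."""
    lines = [f"The network has {n_routers} routers in a star topology.",
             "R1 is connected to the CUSTOMER network and to every other router.",
             "Each remaining router is connected to a distinct ISP."]
    topology = {"R1": {"interfaces": ["CUSTOMER"],
                       "bgp_neighbors": [],
                       "networks": ["192.168.1.0/24"]}}
    for i in range(2, n_routers + 1):
        link = f"10.0.{i}.0/30"
        lines.append(f"R{i} is connected to ISP{i - 1} and to R1 over link {link}.")
        topology["R1"]["interfaces"].append(link)
        topology["R1"]["bgp_neighbors"].append(f"10.0.{i}.2")
        topology[f"R{i}"] = {"interfaces": [f"ISP{i - 1}", link],
                             "bgp_neighbors": [f"10.0.{i}.1"],
                             "networks": [f"172.16.{i}.0/24"]}
    return "\n".join(lines), topology

description, topo = generate_star(6)   # six routers as in Figure <ref>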
Local versus Global Policy Prompts? We tried specifying to GPT-4 the global no-transit policy
at once. GPT-4 generated two innovative strategies: filtering routes using AS path regular expressions, and denying ISP prefixes from being advertised to other routers from the customer router. Unfortunately, we found after correcting topology and syntax errors, when we provided feedback in terms of a counterexample packet (as would be provided by a “global" network verifier like Minesweeper), GPT-4 was confused and kept oscillating between incorrect strategies. We found that specifying local policies as in Lightyear <cit.> gave us better results because it allowed us to localize verification errors to specific routers and specific route maps within those routers.
We asked GPT-4 to generate configs for each router using a new prompt each time, specifying the local policy for each router. Specifically, the policy is that R1 should add a specific community at the ingress to each ISP and then drop routes based on those communities
at the egress to each ISP. The generated errors fell into three categories:
Syntax errors: GPT-4 generates a configuration with invalid Cisco syntax including errors in which certain config lines are misplaced. Batfish produces parse warnings identifying these errors.
Topology errors: GPT-4 incorrectly declares or misses some BGP neighbors or forgets to announce certain networks. For this, we use an automated "topology verifier" that compares the config against the previously specified JSON dictionary and outputs inconsistencies (a sketch of this check appears after this list).
Semantic errors / Policy errors: GPT-4 produces configs that do not follow the intended local policy. We use Batfish "Search Route Policies" for verification in this step. In case there is a semantic error, Batfish produces an example where the local policy is not followed. This example is then fed to GPT-4 in a fresh prompt.
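The following minimal sketch illustrates the topology verifier; the field names mirror those assumed in the network generator sketch and are illustrative. In practice, the parsed view of a configuration can be obtained from Batfish answers.
def check_topology(router: str, parsed: dict, topology: dict) -> list:
    """Compare one parsed router config against the topology JSON; return inconsistencies."""
    expected, issues = topology[router], []
    for iface in expected["interfaces"]:
        if iface not in parsed.get("interfaces", []):
            issues.append(f"{router}: interface for {iface} is not configured")
    for neighbor in expected["bgp_neighbors"]:
        if neighbor not in parsed.get("bgp_neighbors", []):
            issues.append(f"{router}: BGP neighbor {neighbor} is not declared")
    for net in expected["networks"]:
        if net not in parsed.get("announced_networks", []):
            issues.append(f"{router}: network {net} is not announced")
    return issues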
Classifying into separate categories allowed us to use different tools to address each one. Table <ref> lists examples of the rectifying prompts. Once all the errors are rectified, we simulate the entire BGP communication using Batfish as a final step, in order to ensure that the global policy is satisfied, though the proof technique of Lightyear <cit.> could instead be used to ensure that the local policies imply the global one.
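Putting the pieces together, the per-router refinement loop can be sketched as follows. The checker and LLM functions are placeholders (stubs) standing in for the GPT-4 API, Batfish, and the topology verifier above, and humanize() refers to the earlier humanizer sketch; only the control flow is meant to be representative.
def gpt4(prompt: str) -> str: ...              # LLM API call (placeholder)
def syntax_errors(config: str) -> list: ...    # Batfish parse warnings (placeholder)
def topology_errors(config: str, topo: dict) -> list: ...   # topology verifier (placeholder)
def policy_errors(config: str) -> list: ...    # Batfish "Search Route Policies" (placeholder)

def synthesize_router(policy_prompt: str, topo: dict, max_rounds: int = 20) -> str:
    config = gpt4(policy_prompt)
    for _ in range(max_rounds):
        errors = (syntax_errors(config)
                  or topology_errors(config, topo)
                  or policy_errors(config))
        if not errors:
            break
        # one humanized corrective prompt per round, then re-check everything
        config = gpt4(humanize(errors[0]) + "\nPrint the entire corrected configuration.")
    return config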
§.§ Experience and Results
Since some GPT-4 errors were more common, we supplied it an IIP (the Initial Instruction Prompt) as follows:
CLI prompts: GPT-4 would often generate commands to enter on the Cisco command line interface, which is undesirable. Thus we specifically asked it to generate the .cfg files.
Wrong keywords: While generating the configs, it would often use certain keywords such as `exit', `end', `configure terminal', `ip routing', `write', `hostname' and `conf t'. It had a tendency to place some of them in the wrong locations.
Hence, we directed it not to use these keywords. Any extra required commands for setup were prepended to the final config files, before running them on Batfish.
Match Community:
When trying to match against a community, it sometimes generates syntax like:
route-map FILTER_ROUTES permit 10
match community 100:1
This is incorrect. The correct way to match against a community in a route-map is to first declare a community list that contains the community as in:
ip community-list 1 permit 100:1
and then while matching, make a call to a community list:
route-map FILTER_ROUTES permit 10
match community 1
Thus we included another IIP to define a community list and then in a route-map, match using only this list.
Adding Communities: While adding communities using a route-map, GPT-4 tends to generate syntax similar to:
route-map ADD_COMMUNITY permit 10
set community 100:1
We observed that this happens even when we explicitly ask it to `add' a community to the route. The above route-map replaces all the communities that are already present in the route with the community 100:1.
So we added an initial prompt saying that it should always use the 'additive' keyword when adding a community to the route.
These initial prompts along with the syntax rectification scheme of Table <ref> are able to eliminate common syntax errors produced by GPT-4. Despite this, we found two egregious cases where human intervention is needed:
Placing neighbor commands in the wrong location: In a config file for BGP, all network and neighbor commands must be placed under the "router bgp" block. For example, the neighbor command is used to attach a route-map to the ingress or egress of an interface. We found that in rare situations, GPT-4 defines a route-map and then associates it with an interface outside the "router bgp" block. Batfish is able to catch this syntax error. However, the output is not informative enough for GPT-4 to be able to fix the issue.
AND/OR Semantics in match statements: GPT-4 does not understand the semantic difference between placing multiple match conditions under a single route-map stanza versus placing them in different stanzas. For no-transit, we had asked GPT-4 to generate a config for R1 that would add a different community to every route incoming from R2-R6 (Figure <ref>). We also asked it to filter routes containing any such community on the egress of the interface connecting R1 to R2-R6. GPT-4 added the correct communities at the ingress, but
at the egress at R1 it incorrectly used AND semantics for filtering routes as in the following route-map for the R1-R2 interface:
ip community-list 1 permit 100:1
ip community-list 2 permit 101:1
ip community-list 3 permit 102:1
ip community-list 4 permit 103:1
ip community-list 5 permit 104:1
route-map FILTER_COMM_OUT_R2 deny 10
match community 2
match community 3
match community 4
match community 5
route-map FILTER_COMM_OUT_R2 permit 20
Community 100:1 is associated with routes incoming from R2, 101:1 with those coming from R3, and so on. We desire routes incoming from R3-R6 to be filtered out at the egress to R2. The above config will only filter out routes that have all the communities 101:1, 102:1, 103:1 and 104:1, not any one of them. When we asked Batfish whether the above route-map filters routes that have the community 101:1, it produced a counterexample, but feeding this counterexample to GPT-4 failed to rectify the issue. A human prompt was needed to ask GPT-4 to declare each match statement in a separate route-map stanza. Our attempts to help GPT-4 distinguish between AND and OR semantics using an example in the IIP also failed.
Leverage: The entire cycle took 2 human prompts and 12 automated prompts, for a leverage of 6X.
Note that the AND-OR problem required a final correction prompt.
§ PREVIOUS WORK
Jigsaw <cit.> and Copilot <cit.> use large language models for program synthesis. While they concentrate on
sequential programs, the deeper difference is that they do not pair the synthesizer with verifiers. Jigsaw <cit.> does ask users to provide test cases and tests (but does not verify) the synthesized program. Jigsaw also does some form of automatic syntax correction using AST-to-AST transformations. CoPilot <cit.> can suggest invariants
but does not attempt an axiomatic proof. Jigsaw and Copilot do not address two questions we do: how to use
a specification, and how to provide localized feedback.
The use of ChatGPT with the Kani Rust verifier <cit.> comes closest to our vision; the Kani blog post finesses the specification question (as we do for Cisco to Juniper) by focusing on program transformations (in their case optimization) for which the source program is the specification. They also do not use modularity or local specifications. More fundamentally the Kani <cit.> use case does not do prompt programming: the user always manually switches between the verifier and the LLM, precluding possible leverage.
§ CONCLUSIONS
Our experiments are very preliminary but suggest:
1. Ramanujam Effect: As with the mathematician Ramanujam, some of whose conjectures were incorrect and needed Hardy's help <cit.> for proofs, GPT-4 by itself is not ready for use without a verifier, making elementary errors that can
bring networks down.
2. Verified Prompt Programming: Using a verifier and automated corrections via a humanizer,
GPT-4 can synthesize reasonable but not completely correct configurations for simple use cases, but the leverage in
reduced human effort can be high (5X to 10X). Modular verification seems crucial.
3. Local versus Global Specifications: Modular synthesis is the dual to modular verification. The search space for the LLM is large, which increases the chance that it will not be able to correctly complete a synthesis task based on a global specification. Instead, the user needs to decide and describe the "roles" each node plays in satisfying the global spec.
Much further testing in more complex use cases is needed. Can GPT-4 add a new policy incrementally without interfering with existing verified policy? While our paper is set in the context of network configuration, the vision, definitions (e.g., leverage) and lessons (e.g., the need for actionable local feedback, modularity, humanizers and IIPs) seem more generally useful to synthesize other programs.
|
http://arxiv.org/abs/2307.04425v1 | 20230710090012 | Identification of Hemorrhage and Infarct Lesions on Brain CT Images using Deep Learning | [
"Arunkumar Govindarajan",
"Arjun Agarwal",
"Subhankar Chattoraj",
"Dennis Robert",
"Satish Golla",
"Ujjwal Upadhyay",
"Swetha Tanamala",
"Aarthi Govindarajan"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
Identification of Hemorrhage and Infarct Lesions on Brain CT Images using Deep Learning
[1]Arunkumar [email protected]
2]Arjun [email protected]
[2]Subhankar [email protected]
2]Dennis [email protected]
2]Satish [email protected]
2]Ujjwal [email protected]
2]Swetha [email protected]
1]Aarthi [email protected]
*[1]Aarthi Scans & Labs, Chennai, Tamil Nadu, India
[2]Qure.ai, Mumbai, Maharashtra, India
Head non-contrast computed tomography (NCCT) scans remain the preferred primary imaging modality due to their widespread availability and speed. However, the current standard of manual annotation of abnormal brain tissue on head-NCCT scans involves significant disadvantages, such as the lack of cutoff standardization and the failure to capture stroke-induced degeneration. The recent advancement of deep learning-based computer-aided diagnostic (CAD) models in the multidisciplinary domain has created vast opportunities in neurological medical imaging. Significant literature has been published earlier on the automated identification of brain tissue on different imaging modalities. However, determining intracranial hemorrhage (ICH) and infarct can be challenging due to variability in image texture, volume size, and scan quality. This retrospective validation study evaluated a DL-based algorithm identifying ICH and infarct from head-NCCT scans. The head-NCCT scans dataset was collected consecutively from multiple diagnostic imaging centers across India. The study exhibits the potential and limitations of such DL-based software for introduction into routine workflow in extensive healthcare facilities.
Identification of Hemorrhage and Infarct Lesions on Brain CT Images using Deep Learning
August 12, 2023
===================
§ INTRODUCTION
In cognitive neuroscience, the neuropsychological investigation of stroke patients is widely utilized to advance our knowledge of brain function. Considerable insight into the relation of brain function to anatomy has been obtained via correlation analysis between physical brain damage and impaired behavior <cit.><cit.><cit.>. Strokes can be broadly classified into two types: 1) Intracranial hemorrhage (ICH), the rupture of a blood vessel within the brain, which causes bleeding. The common factors related to the cause of ICH are advanced age, heavy alcohol usage, and high blood pressure (hypertension) <cit.>. As per some recent studies, although ICH accounts for 10–15% of all stroke-related deaths, the mortality and morbidity have not changed over the last thirty years, particularly in developing countries <cit.>. 2) Ischemic stroke, or infarct, an interruption of blood flow due to a blood clot. Infarct is generally caused by the
buildup of plaques (atherosclerosis) over time in the arteries. Globally, over 13.7 million individuals have a stroke each year, of which approximately 70%, i.e., 9.5 million, are infarcts <cit.>. Presently, mapping of the stroke lesion is regularly done using computed tomography (CT) and magnetic resonance imaging (MRI). MR (T1-weighted and T2-weighted) anatomical images are acquired as part of routine practice for stroke patients. In patients with suspected stroke but negative CT scans, MRI can also be performed. After the first few hours of onset, ischemic stroke can be identified using MRI. Additionally, the differentiation between irreparably damaged brain tissue and tissue at risk due to infarction can be made using MRI. However, CT is the preferred imaging modality over MRI in acute stroke care units and clinical trials due to the reduced exclusion criteria compared to MRI, affordability, speed, and accessibility <cit.>. On CT, hemorrhage is perceived as a bright (hyper-dense) region exhibiting sharp contrast, and infarct as a dark (hypo-dense) region, depending on the time elapsed since onset.
Manual annotation of abnormal brain tissue by trained neuroradiologists is currently the standard method for lesion identification <cit.>. However, manual annotation has several disadvantages <cit.>. 1) Lack of cutoff standardization: there is no standard protocol for an explicit cutoff, particularly around the ventricles, to differentiate lesioned and non-lesioned tissue; as a result, this approach produces large variability and lacks reproducibility across operators. 2) Degeneration identification: the stroke-induced degeneration occurring outside the lesion in chronic stroke patients is not captured in the standard manual annotation process, even though it has a significant clinical impact. The recent advancement of deep learning-based computer-aided diagnostic (CAD) models in medical imaging and signal processing can significantly assist in overcoming these challenges <cit.><cit.><cit.><cit.><cit.>. In addition, manual editing combined with automated detection of hypo- or hyper-dense regions, which remains under operator supervision, can assist in overcoming the present challenges <cit.>. More recently, a study using large CT datasets to remove the inter-subject variability in brain lesion characterization with an automated approach was proposed <cit.>. Several state-of-the-art algorithms have been proposed for lesion segmentation in MR images over the past few years, but very few have been developed to address stroke lesions on CT scans. Most of the earlier work published to validate automated solutions was directed toward identifying ICH. Since ICH appears bright on CT scans, developing an automated solution based on supervised or unsupervised learning, or on extracting morphological features from labeled images to differentiate between lesioned and non-lesioned tissue, is less challenging <cit.> <cit.>.
Infarct identification, on the other hand, is a less popular problem among researchers compared to ICH detection due to its challenging nature. To address this issue, a rule-based approach based on seeded region-growing algorithms was proposed very recently, extracting hand-crafted features such as position relative to an axis of symmetry, texture, and brightness <cit.>. However, the primary disadvantage of this study is that the seeded region-growing algorithms may not be able to define the boundaries of the stroke region distinctively.
In this study, we have evaluated an artificial intelligence (AI) based automated CAD algorithm, built on deep learning, capable of identifying ICH and infarct on Head Non-contrast Computed Tomography (Head-NCCT) scans. The solution has been validated earlier for detecting ICH on Head-NCCT scan images <cit.>. The Institutional Review Board (IRB) has approved the proposed retrospective study. We demonstrated the effectiveness and validity of the automated CAD solution in detecting ICH and infarct, and in quantifying infarct, on Head-NCCT scans. Our proposed validation will provide a rapid and efficient tool for both research and clinical application. It will assist in the broader adoption of automated CAD solutions in extensive clinical facilities.
§ MATERIAL AND METHODS
The study was a HIPAA-compliant retrospective study with Institutional Review Approval (IRB) from Royal Pune Independent Ethics Committee (RPIEC) (IRB No. RPIEC240123). Informed consent was obtained from all participants. All methods were carried out in accordance with relevant guidelines and regulations.
The primary objective was to evaluate the commercially available deep learning-based algorithm qER (Qure.ai Technologies, Mumbai, India) in terms of Area Under the Receiver Operating Characteristics Curve (AUC) in triaging Head-NCCT scans for the detection and quantification of infarcts. It was estimated that a minimum sample of 418 Head-NCCT scans (167 Head-NCCT scans with radiologist-confirmed infarcts, 251 Head-NCCT scans without infarcts, 2:3 ratio) would provide a minimum of 80% power to estimate an anticipated AUC of 80% with 7% precision assuming a Type I error rate of 5% <cit.><cit.>. The Head-NCCT scans and their signed-off original radiological reports, performed from 01-September-2021 to 31-August-2022, were acquired from diagnostic imaging centers across India. A total of 1878 Head-NCCT scans were collected. The original radiological report of these scans was subjected to a manual review by a clinical data abstractor to classify the scans into infarct and non-infarct reported scans based on the original radiological report. A stratified random sample of 500 Head-NCCT scans, stratified by the presence or absence of infarct (based on the original radiological reports), was then selected for independent ground truthing by a radiologist with more than fourteen years of experience. The inclusion criteria were Head-NCCT scans with a soft reconstruction kernel covering the complete brain and a slice thickness ≤ 6 mm. The exclusion criteria were Head-NCCT scans with obvious postoperative defects or from patients who had previously undergone brain surgery, Head-NCCT scans with artifacts such as burr holes, shunts or clips, Head-NCCT scans containing metal artifacts or excessive motion artifacts, and Head-NCCT scans containing missing or improperly ordered slices. The ground truther radiologist had access to the original head NCCT scan image but was blinded to the original radiology report. The ground truther reviewed all the Head-NCCT scans and provided segmentation boundaries for infarcts and intracranial hemorrhages. The ground truther radiologist also provided a binary response for the presence or absence of cranial fracture, midline shift, and mass effect. The ground truth output, not the original radiological report, was the reference standard for all downstream statistical analyses.
The sensitivity and specificity were estimated based on a default device threshold (available from the manufacturer based on internal testing) and on the optimum threshold derived from Youden's index. The 95% confidence intervals for sensitivity and specificity are reported based on the exact method <cit.>. The AUC and its 95% confidence interval (CI) were estimated based on the empirical method and the De Long methodology, respectively <cit.>. The segmentation provided by the ground truther radiologist was utilized for the quantification analysis of the error in the infarct volume predicted by the DL-based algorithm. Absolute errors in infarct volume estimation in milliliters (mL) and summary statistics of absolute errors are reported. The statistical analyses were performed using RStudio (RStudio version 2022.07.1, R version 4.2.1) and Python version 3.9.7.
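As an illustration of this analysis, the following Python sketch computes sensitivity and specificity at a given score threshold, exact (Clopper-Pearson) confidence intervals, and the Youden-optimal threshold; the function and variable names are placeholders, scipy is assumed to be available, and this is not the exact analysis code used in the study.
import numpy as np
from scipy import stats

def exact_ci(k, n, alpha=0.05):
    """Clopper-Pearson (exact) confidence interval for a proportion k/n."""
    lo = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

def sens_spec(scores, labels, threshold):
    pred = scores >= threshold
    tp, fn = np.sum(pred & (labels == 1)), np.sum(~pred & (labels == 1))
    tn, fp = np.sum(~pred & (labels == 0)), np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

def youden_threshold(scores, labels):
    candidates = np.unique(scores)
    j = [sum(sens_spec(scores, labels, t)) - 1 for t in candidates]  # sensitivity + specificity - 1
    return candidates[int(np.argmax(j))]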
§ EXPERIMENTAL RESULTS
§.§ Identification of ICH and Infarct
Ground truthing was completed for 428 Head-NCCT scans, while 22 were excluded based on the inclusion and exclusion criteria mentioned in section <ref>. A total of 187 Head-NCCT scans confirmed (based on ground truth) the presence of infarcts, while 241 Head-NCCT scans confirmed the absence of any infarcts. This distribution of scans with and without infarcts met the minimum sample size requirements described earlier in <ref>. In addition, 21 scans with intracranial hemorrhages (ICH) and 23 scans with cranial fractures were present in the sample. A total of 212 (49.5%) of the 428 Head-NCCT scans did not contain any infarcts, intracranial hemorrhages, cranial fracture, midline shift, or mass effect. The distribution of the Head-NCCT scans is shown in Table. <ref>.
It can be observed from Table. <ref> that the DL-based algorithm achieved an AUC of 86.8% (95% CI: 83.4 - 90.2) in detecting scans with the presence of infarcts, while the sensitivity and specificity were estimated to be 66.8% (95% CI: 59.6-73.5) and 86.7% (95% CI: 81.8-90.7), respectively, at the default threshold. The optimum operating threshold was determined using Youden's index. At this optimum threshold, it was observed that the sensitivity of the DL-based algorithm improved to 80.2% (95% CI: 73.8 - 85.7) without a substantial reduction in specificity, 80.1% (95% CI: 74.5 - 84.9). For ICH, an AUC of 94.8% (95% CI: 87.4 - 100) was achieved. There was no change in sensitivity between the default and optimum thresholds, while the specificity increased by 3% at the optimum threshold. For cranial fracture, in contrast, the sensitivity improved by 15.8% at the optimum threshold while the specificity decreased by 2.7%. In Fig. <ref>, the AUC-ROC plot for Cranial Fracture, ICH, and Infarct is given.
§.§ Quantification of Infarct Volume
The DL-based algorithm for identifying infarcts produces the infarct volume in mL. A total of 150 true-positive scans, for which both the volume predicted by the DL-based algorithm and the ground-truth volume were available, were included in this analysis. The reference standard was the radiologist annotation done for each Head-NCCT scan.
The mean absolute error (MAE) was 4.7 mL over all scans. Based on the ground-truth volume, the scans were further divided into two categories - scans with 0 - 5 mL and > 5 mL infarct volume, respectively. It can be observed from Table. <ref> that the MAE for the 0 - 5 mL and > 5 mL scans was found to be 3.2 mL and 8.557 mL, respectively. From the scatter plot of infarct volumes in Fig. <ref> (1), it can be observed that there is a positive correlation between the DL-based algorithm volume and the ground-truth annotated volume as the infarct volume increases. The Bland-Altman plots showing good agreement between the ground truther annotation and the volume predicted by the DL-based algorithm are shown in Fig. <ref> (2).
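For reference, the volume-agreement quantities reported here (MAE, stratified MAE, and Bland-Altman limits of agreement) can be computed as in the following sketch; the short arrays are placeholders standing in for the 150 true-positive scans, not the study data.
import numpy as np

pred = np.array([2.1, 4.8, 7.5, 12.0, 30.2])   # DL-predicted volumes (mL), placeholder
truth = np.array([1.5, 5.6, 6.9, 15.3, 26.0])  # annotated volumes (mL), placeholder

mae = np.mean(np.abs(pred - truth))

diff = pred - truth                   # Bland-Altman: difference vs. mean of each pair
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

small = truth <= 5                    # stratification as in Table <ref>: 0-5 mL vs. > 5 mL
mae_small = np.mean(np.abs(pred[small] - truth[small]))
mae_large = np.mean(np.abs(pred[~small] - truth[~small]))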
§.§ Visual Explanations of DL-based Algorithm
The experimental findings depict that the evaluated DL-based algorithm achieved superior performance, as represented in Table. <ref> and <ref>. In most DL-based models, the rationale behind the prediction is not revealed explicitly. Since these DL black-box models cannot be decomposed into intuitive and comprehensible modules, they are hard to interpret. Consequently, end-users develop skepticism and find the models difficult to trust. The emergence of explainable artificial intelligence (XAI) is an essential aspect of model transparency and the social right to explanation of DL inferences <cit.>,<cit.>. XAI encompasses a better understanding of incoherent output, isolates failure modes, and builds trust in intelligent systems for effective incorporation into our everyday lives <cit.>. The evaluated DL-based algorithm outputs a boundary around the infarcts, which reveals the rationale behind its superior performance. In Fig. <ref>, it can be observed that for both small and large infarct volumes on Head-NCCT scans, the model-predicted boundary clearly overlaps with the ground-truth boundary.
§ DISCUSSION
This retrospective study evaluated a deep learning algorithm for detecting infarcts in Head-NCCT scans. The algorithm had a good AUC of about 86% in detecting infarcts. After adjusting for thresholds, a balanced sensitivity of 80.2% and specificity of 80.1% were estimated for detecting infarcts. The algorithm's sensitivity in detecting infarcts in scans with no other target abnormalities was found to be 80% (136 correctly detected out of 170), which did not differ from the overall sensitivity at the optimum threshold. This demonstrates the robustness of the DL-based algorithm in identifying infarcts, with a negligible drop in sensitivity in the presence of other abnormalities. Additionally, it is to be noted that the sensitivity of Head-NCCT scans in detecting infarcts is generally considered low, especially in the case of hyperacute and acute ischemic strokes. In one study, the sensitivity of detecting acute ischemic stroke on head NCCT scans ranged from 57% to 71% with considerable inter-reader variability <cit.><cit.>. Additionally, we evaluated the performance in detecting ICH and cranial fracture, and both had excellent AUC. However, the interpretation is limited by the low sample sizes for these two abnormalities. Our results also show that threshold adjustments might be needed before using such algorithms routinely for clinical decision support.
Deep learning and big data models are often called "black boxes" and represent substantial obstacles to introducing intuitive and comprehensible modules into actual clinical practice; these models are challenging to interpret. However, the DL-based method validated in this study provides a post-hoc attention tool for the clinician to identify the lesion visually. In addition, the DL-based algorithm validated in this study encompasses a better understanding of incoherent output, isolates failure modes, and builds trust in intelligent systems for effective incorporation into routine clinical practice. Moreover, the proposed validation of the DL-based algorithm will be beneficial in resource-constrained areas with a limited number of radiologists or with access only to teleradiology facilities.
Our study has limitations. First, the differentiation of infarcts into acute and chronic was not analyzed. Second, the ground truthing for the head NCCT scans with the presence of infarcts was done by a single radiologist. Third, there were not enough scans for ICH and cranial fracture to estimate performance metrics with sufficient precision.
§ CONCLUSION
The present study evaluated a DL-based algorithm to determine the presence and absence of ICH and infarcts on head-NCCT scans. The DL-based algorithm demonstrated a high detection performance in identifying infarcts, ICH, and cranial fracture. Additionally, the DL-based algorithm exhibits a positive correlation between the predicted infarct volume and the ground-truth annotated volume. The ICH detection and infarct detection and quantification performance demonstrated in this study indicates the feasibility of introducing such DL algorithms into routine workflow in extensive healthcare facilities.
§ DATA AVAILABILITY
The datasets used or analyzed during the current study are available from the corresponding author on reasonable request.
|
http://arxiv.org/abs/2307.05197v1 | 20230711120825 | Quantum entanglement patterns in the structure of atomic nuclei within the nuclear shell model | [
"A. Pérez-Obiol",
"S. Masot-Llima",
"A. M. Romero",
"J. Menéndez",
"A. Rios",
"A. García-Sáez",
"B. Juliá-Díaz"
] | nucl-th | [
"nucl-th",
"quant-ph"
] |
[email protected]
Barcelona Supercomputing Center, 08034 Barcelona, Spain
[email protected]
Barcelona Supercomputing Center, 08034 Barcelona, Spain
[email protected]
[email protected]
[email protected]
Departament de Física Quàntica i Astrofísica (FQA), Universitat de Barcelona (UB), c. Martí i Franqués, 1, 08028, Barcelona, Spain
Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona (UB), c. Martí i Franqués, 1, 08028 Barcelona, Spain
[email protected]
Barcelona Supercomputing Center, 08034 Barcelona, Spain
Qilimanjaro Quantum Tech, 08007 Barcelona, Spain
[email protected]
Departament de Física Quàntica i Astrofísica (FQA), Universitat de Barcelona (UB), c. Martí i Franqués, 1, 08028 Barcelona, Spain
Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona (UB), c. Martí i Franqués, 1, 08028 Barcelona, Spain
Quantum entanglement offers a unique perspective into the underlying structure of strongly correlated systems such as atomic nuclei.
In this paper, we use quantum information tools to analyze the structure of light and medium-mass beryllium, oxygen, neon and calcium isotopes within the nuclear shell model.
We use different entanglement metrics, including single-orbital entanglement, mutual information, and von Neumann entropies for different equipartitions of the shell-model valence space
and identify mode-entanglement patterns related to the energy, angular momentum and isospin of the nuclear single-particle orbitals.
We observe that the single-orbital entanglement is directly related to the number of valence nucleons and the energy structure of the shell,
while the mutual information
highlights signatures of proton-proton and neutron-neutron pairing.
Proton and neutron orbitals are weakly entangled by all measures, and in fact have the lowest von Neumann entropies
among all possible equipartitions of the valence space. In contrast, orbitals with opposite angular momentum
projection have relatively large entropies.
This analysis provides a guide for designing more efficient quantum algorithms
for the noisy intermediate-scale quantum era.
Quantum entanglement patterns in the structure of atomic nuclei within the nuclear shell model
B. Juliá-Díaz
August 12, 2023
==============================================================================================
§ INTRODUCTION
Entanglement is a fundamental concept in quantum mechanics <cit.>. It characterizes correlations between particles or, in general, partitions within a system that can not be described independently of one another.
Quantum many-body systems also show signatures of entanglement, with specific features in many-fermion systems <cit.>. In addition, quantum entanglement is important from a theoretical point of view.
Entanglement properties typically undergo significant changes
near phase transitions, such as in spin and Fermi-Hubbard <cit.> systems at their critical point.
In high-energy physics, maximal entanglement has been used to constrain the coupling structure
of QED <cit.>. In contrast, entanglement suppression
has been conjectured to be a property of low-energy strong interactions <cit.>.
Quantum or classical simulations of many-body systems may be hampered if the entanglement structures couple different partitions.
A sound understanding of the entanglement features of quantum many-body systems may thus be key to more efficient simulations.
Consider, for instance, a single partition of a given fermionic system.
Low entanglement between two parts of a system may allow for simpler simulations for each
of the subsystems. If these simulations can be complemented with an effective way to integrate the residual entanglement between the partitions, such a strategy may lead to results with a minor loss in precision at a fraction of the computational cost.
Analogously, ground states in condensed matter systems typically follow an area law, meaning that entanglement scales with the boundary of the partition, rather than with its volume <cit.>.
This allows one to use techniques such as density matrix renormalization group <cit.> or tensor networks <cit.> to efficiently simulate large systems.
However, in nuclear physics, entanglement has been much less studied. This is
in part due to the complex nature of the nuclear force, but also due to the difficulty
to relate entanglement to measurable observables.
In very light nuclei, like ^4He and ^6He, the entanglement structure was
found to be highly dependent on the many-body basis <cit.>.
In nucleon-nucleon scattering, proton-neutron pairs are found to be entangled <cit.>.
Yet, nuclear shell model simulations of mid-mass isotopes indicate that protons and neutrons show very little entanglement <cit.>. Entanglement is also relevant for the dynamics of nuclear reactions <cit.>.
Interestingly, a proposal has been recently put forward, indicating that nuclear matter follows a volume law, instead of an area law <cit.>.
Entanglement plays a pivotal role in quantum-resource quantification for quantum computation, communication, and sensing.
In the context of quantum simulations, variational algorithms have
been devised and tested to reproduce ground states of quantum
many-body systems. In particular, Ref. <cit.> analyzed single- and double-orbital entanglement for ^8Be within the nuclear shell model. Entanglement measures were found to be
almost maximal, as a consequence of the strong correlations in
the p shell.
Recently, single-orbital entanglement within the nuclear shell model has been studied with an adaptive variational quantum eigensolver <cit.>
for various nuclei across the p, sd and pf shells <cit.>. In addition, Ref. <cit.> explored entanglement-based partitions using
neural networks for optimal simulations of p-shell nuclei.
In spite of this rapid progress, no thorough analysis of the entanglement structure of atomic
nuclei across different shells has been carried out yet.
In this work, we analyze mode entanglement in nuclear shell model ground states.
We provide an overall picture of the entanglement structure for the isotopic chains of Be, O, Ne and Ca, studied in the p, sd, and pf configuration spaces.
Figure <ref> shows a diagram representing the corresponding configuration spaces.
We use three different entanglement measures: single-orbital entanglement, mutual information, and von Neumann entropies, for the most natural equipartitions of
these configuration spaces,
providing compelling insights into the entanglement of the different divisions.
This article is organized as follows: in Section <ref> we provide an introductory outline of the nuclear shell model, our method of choice to find ground states of nuclei. In Section <ref>, we introduce the different measures that we use to quantify the entanglement in the nuclear configuration space. Finally, we present our results in Section <ref> and our conclusions in Section <ref>.
§ OVERVIEW OF THE NUCLEAR SHELL MODEL
The nuclear shell model is one of the most successful theories of nuclear structure <cit.>. It considers nuclei as composite systems of protons and neutrons, or nucleons, that interact with each other in a restricted configuration space, customarily called valence space. The nuclear interaction is rotationally invariant, and it is usually considered to be symmetric under proton-neutron exchange. One of the main features of the nuclear interaction is a spin-orbit term responsible for the so-called magic numbers: special combinations of protons (Z) and neutrons (N) building up particularly stable, spherical nuclei <cit.>. This justifies the main assumption of the shell model, that nuclear dynamics can be approximated by the many-body configurations built in a valence space limited by two magic numbers. The valence spaces considered in this work are presented in Fig. <ref>. Single-particle states below the valence space are fully occupied and form an inert core, whereas states above are truncated based on the large energy gaps between magic number configurations.
Given the symmetries of the nuclear interaction between particles in the valence space, the single-particle states (or single-particle orbitals) can be labelled using a set of quantum numbers {n,l,j,m,t_z}. These correspond to the principal quantum number n, the orbital angular momentum l, the total angular momentum j (resulting from the coupling of l with the spin s=1/2 of nucleons) and its third-component projection m. The third-component projection of the isospin, t_z, specifies if a nucleon is a proton or a neutron. The corresponding 2j+1 energy-degenerate single-particle states are grouped into the nl_j subshells, as shown in Fig. <ref>.
In a second quantization scheme, the effective Hamiltonian in the valence space reads
H_eff = ∑_i ε_i a_i^† a_i + 1/4 ∑_ijkl v̅_ijkl a_i^† a_j^† a_l a_k,
where ε_i is the energy of the single-particle state i, v̅_ijkl = v_ijkl - v_ijlk are antisymmetrized two-body matrix elements and a_i (a_i^†) are particle annihilation (creation) operators associated to the state i. In this work, we use standard phenomenological Hamiltonians, with components adjusted to reproduce key properties of selected nuclei <cit.>. These Hamiltonians describe very well the low-energy properties of light and medium-mass nuclei across the nuclear chart <cit.>. Effective Hamiltonians can also be derived based on an effective theory of the fundamental theory of the nuclear force, quantum chromodymamics, using so-called ab initio techniques <cit.>.
Many-body states in the valence space are described employing antisymmetrized products of single-particle states, also referred to as Slater determinants. A standard choice to build this many-body basis is to use the M-scheme, in which Slater determinants have a well-defined third component M of the total angular momentum J. Because of the properties of the SU(2) algebra of angular momentum <cit.>, M is simply the sum of the total
m components of the single-particle states occupied by the nucleons. These many-body states form a basis of the valence space, and the ground and excited states of the nucleus can be expanded as
| JM TT_z⟩ = ∑_α c_α | α, M T_z⟩,
where the c_α coefficients are obtained by solving the many-body Schrödinger equation, for instance through the diagonalization of the Hamiltonian matrix in the many basis <cit.>.
These eigenstates have good angular momentum J and isospin T
quantum numbers, with corresponding third-component projections M and T_z. State-of-the-art shell-model codes use sophisticated Lanczos methods for the Hamiltonian diagonalization, which often require classical supercomputing resources.
The nuclear shell model is a reference method for light- and medium-mass nuclei, but calculations become unattainable for heavy nuclei. As the number of valence nucleons increases, the number of many-body states in the valence space grows exponentially, quickly reaching a bottleneck where calculations are no further feasible with current classical supercomputers. Quantum information tools may help identify crucial correlations in the shell-model valence space and may facilitate systematic and well-controlled truncation protocols, that include the most relevant degrees of freedom <cit.>. While this challenge is important from a fundamental nuclear structure point of view, it is also pertinent to optimise the performance of quantum simulations in the noisy intermediate-scale quantum era.
Promising implementations of the nuclear shell model in digital quantum computers have been
proposed using variational quantum eigensolvers <cit.> and quantum Lanczos <cit.> algorithms.
§ ENTROPY AND MUTUAL INFORMATION FOR ENTANGLEMENT ASSESSMENT
Entanglement quantifies the inseparability between quantum systems. When two systems A and B, characterized by states |ψ_A⟩ and |ψ_B⟩, are entangled,
the complete state |ψ⟩ can not be written in the form
|ψ⟩ = |ψ_A⟩⊗|ψ_B⟩,
that is, as a tensor product of the individual states. If the states are separable, as opposed to entangled, the statistics and behavior of each system can be treated independently.
This is why, when there is low entanglement, classical resources can simulate quantum systems efficiently (although this is not the only case <cit.>). Therefore, finding partitions that exhibit low entanglement is of utmost importance.
In fact,
a natural split is already present in the nuclear shell model. As discussed in Sec. <ref>,
the a priori separation between the inert core, the valence space and the excluded space assumes that there is no entanglement between these spaces.
In this work, we focus on entanglement measures in the valence space, where additional insight of the entanglement structure could improve nuclear shell-model calculations.
We specifically consider bipartite entanglement, considering two generic partitions
within the system.
Although multipartite entanglement is complex and still a subject of intensive study <cit.>, the case of bipartite entanglement is well understood <cit.> and linked to quantitative metrics usually referred to as entropies.
In this context, a standard choice is the von Neumann entropy S, defined as
S(ρ) = - Tr(ρ log_2 ρ) = - ∑_i ρ_i log_2 ρ_i,
where ρ_i are the eigenvalues of the density matrix ρ, and where the logarithm basis is 2 for qubits. For a single Slater determinant |ψ⟩, the density matrix is pure, ρ=|ψ⟩⟨ψ|. Pure density matrices have a single non-zero eigenvalue, and consequently no von Neumann entropy, S(ρ)=0. When considering a partition of the whole system into subsystems A and B, the reduced density matrix of subsystem A is obtained by tracing out the degrees of freedom of subsystem B from ρ, that is ρ_A = _B(ρ). If the state of the whole system is separable, as in Eq. (<ref>), the corresponding trace results in a pure state
ρ = |ψ_A⟩⊗|ψ_B⟩⟨ψ_A|⊗⟨ψ_B|→ρ_A = |ψ_A⟩⟨ψ_A| .
As a consequence, the von Neumann entropies of the subsystems are also zero,
S(ρ_A)≡ S(A) = 0. In contrast, for a Bell-type entangled state |ψ⟩ = (|0⟩_A|0⟩_B + |1⟩_A|1⟩_B)/√(2),
we find
ρ_A= 1/2|0⟩⟨0| + 1/2|1⟩⟨1|→ S(A) = 1 .
This highlights how the von Neumann entropy quantifies our notions of entanglement for bipartitions.
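A small numerical illustration of these relations is given below: the reduced density matrix of subsystem A is obtained by reshaping a two-qubit state vector, and the entropy is evaluated from its eigenvalues. This is an illustrative numpy sketch, not part of the shell-model calculations.
import numpy as np

def entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

def rho_A(psi):            # psi: 4-component two-qubit state, index order (A, B)
    m = psi.reshape(2, 2)
    return m @ m.conj().T  # rho_A = Tr_B |psi><psi|

product = np.kron([1.0, 0.0], [0.0, 1.0])            # |0>_A |1>_B, separable
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

print(entropy(rho_A(product)))   # 0.0
print(entropy(rho_A(bell)))      # 1.0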
An illustrative example is the partition of one single-particle orbital and the rest of the system. In this case, the entropy has already been linked to the occupation number <cit.> and, more recently, to the choice of single-particle basis in the shell model <cit.>. Starting from Eq. (<ref>) for this single-particle bipartition, one can show that the
entropy is
S_i = -γ_i log_2γ_i - (1-γ_i)log_2(1-γ_i),
with γ_i = ⟨ψ |a_i^† a_i| ψ⟩ the occupation number (or occupation probability) of the single-particle orbital i in a system described by |ψ⟩. Figure <ref> shows the single-orbital entropy S_i as a function of the occupation number γ_i.
Single-orbital entropies are maximal when the occupation numbers are as likely to be filled as to be empty, γ_i=1/2.
In contrast, states that are almost fully occupied or fully unoccupied have near zero single-orbital entropy.
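Eq. (<ref>) is straightforward to evaluate; as a check, the following snippet reproduces the qualitative behavior of Fig. <ref>, with the entropy maximal at half filling. The sample occupations are of the same order as those quoted for the Ca isotopes in the results section and serve only as an illustration.
import numpy as np

def single_orbital_entropy(gamma):
    g = np.clip(gamma, 1e-12, 1 - 1e-12)
    return -g * np.log2(g) - (1 - g) * np.log2(1 - g)

for occ in (0.023, 0.477, 0.5, 0.972):
    print(occ, float(single_orbital_entropy(occ)))   # ~0.16, ~0.998, 1.0, ~0.184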
We also take into consideration other useful entanglement metrics. The conditional entropy, S(A|B), illuminates the dependence of subsystem A's degrees of freedom on subsystem B. It is calculated as
S(A|B) = S(AB) - S(B),
where S(AB) is the entropy of the joint system. Using this metric, we can describe the mutual information of two systems A and B,
S(A;B) = S(A) - S(A|B)
= S(A) + S(B) - S(AB),
which we use in this work with the simplified notation S_A,B. The mutual information is symmetric under the exchange of its arguments. It provides an insight on how much subsystems A and B are correlated when ignoring the degrees of freedom of the rest of the system. Specifically, in the context of A and B being subsystems of a larger system ABC, a low amount of mutual information unveils that, even if the state of the total system is not separable in states of subsystems A and B (and thus entangled), such entanglement is linked to C and not contained in AB.
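As a minimal numerical example of the mutual information, the following sketch evaluates S_A,B for the first two qubits of a three-qubit GHZ-like state, where all three subsystem entropies equal one and the mutual information is therefore 1; the partial traces are again done by reshaping, and the snippet is purely illustrative.
import numpy as np

def entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

psi = np.zeros(8)
psi[0] = psi[7] = 1 / np.sqrt(2)        # (|000> + |111>)/sqrt(2), qubits (A, B, C)

rho_AB = psi.reshape(4, 2) @ psi.reshape(4, 2).conj().T   # trace out qubit C
r4 = rho_AB.reshape(2, 2, 2, 2)                           # indices (a, b, a', b')
rho_A = np.einsum('abcb->ac', r4)                          # trace over B
rho_B = np.einsum('abad->bd', r4)                          # trace over A

mutual_information = entropy(rho_A) + entropy(rho_B) - entropy(rho_AB)   # = 1 here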
§.§ Fermionic systems
In the case of fermionic systems,
quantifying entanglement is especially challenging because we lack a well-defined underlying separable space, as used in the definition of Eq. (<ref>).
Fermions are identical particles which fulfil Pauli's exclusion principle, and a many-body fermionic state must be antisymmetric.
For example, two spin-1/2 fermions can couple their spins to form a singlet state, with total spin 0, in first quantization.
In second quantization, on the other hand, this corresponds to creating two modes on the vacuum state, a_↑^† a_↓^†|0⟩, using mode creation operators of spin projections up and down. With the right encoding of fermions onto qubits, this ends up as a separable state |11⟩,
1/√(2)( |↑↓⟩ - |↓↑⟩)_singlet state = a_↑^† a_↓^† |0⟩ → |11⟩.
This separable expression is in contrast with the first-quantization expression.
In other words, the singlet state in the first-quantized particle basis
is considered maximally entangled, while the corresponding qubit state
has zero entanglement.
We note that these anomalies are directly related to the indistinguishability of particles. In other words, any partition that separates distinguishable particles, such as neutrons and protons, does not exhibit this problem.
Entanglement quantification measures of bipartitions of identical particles, however, need to address this issue.
Different approaches to a proper characterization have been proposed <cit.>, favouring those in second quantization for their consistency. This motivates our choice of encoding.
In the implementation of fermionic systems on quantum circuits, the encoding between qubits and single-particle degrees of freedom is very important.
The advantages of different fermionic mappings have been studied extensively <cit.>, although usually under the scope of performance and scalability. In other words, the focus has been on
how efficiently one can encode a specific system in terms of number of qubits and circuit depth.
Because operators on different qubits commute freely, but operators on fermions do not, each encoding must balance the locality of the original system's degrees of freedom against a method to modify the system's parity each time one acts on the state.
One of the most common fermionic mappings, the Jordan-Wigner encoding <cit.>, lies at one extreme. Qubits correspond exactly to single-particle states in the fermionic system, at the cost of local operators on the fermions becoming completely delocalized on the qubits.
In our analysis of entanglement in the nuclear shell model, we use the Jordan-Wigner encoding because it becomes advantageous on two fronts. Firstly, it allows us to simplify the treatment of fermionic entanglement by using second quantization. Having a fixed particle number avoids the need for more complex figures of merit <cit.>. Secondly, it provides a direct connection between specific qubits and single-particle orbitals, as indicated by the labels in Fig. <ref>.
In actual circuit simulations, the Jordan-Wigner encoding may increase the circuit depth compared to other encodings, due to the non-locality of the encoded fermionic operators. This disadvantage, however, may be offset if one finds
low-entangled partitions, which allow for more efficient simulations.
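For concreteness, a common form of the Jordan-Wigner encoding can be written down and checked numerically as below. The convention used here (a string of Z operators followed by (X + iY)/2, with |1⟩ denoting an occupied orbital) is one standard choice and serves only as an illustration; the numerical check verifies the canonical anticommutation relations.
import numpy as np
from functools import reduce

I2, X = np.eye(2), np.array([[0., 1.], [1., 0.]])
Y, Z = np.array([[0., -1j], [1j, 0.]]), np.diag([1., -1.])

def annihilation(j, n):
    """Jordan-Wigner image of a_j on n qubits: Z's on qubits 0..j-1, then (X + iY)/2."""
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I2] * (n - j - 1)
    return reduce(np.kron, ops)

n = 3
a = [annihilation(j, n) for j in range(n)]
for i in range(n):
    for j in range(n):
        anti = a[i] @ a[j].conj().T + a[j].conj().T @ a[i]
        assert np.allclose(anti, np.eye(2 ** n) * (i == j))   # {a_i, a_j^dag} = delta_ij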
§.§ Maximal entropy states
Entanglement measures can only be used to identify relevant quantum features if there is a notion of maximal entanglement. By construction, the nuclear shell model constrains the number of allowed many-body states to those included in the valence space, and therefore the maximal entropy. This means that it is not sufficient to focus on the dimension of the Hilbert space obtained after the fermionic encoding.
For example, let us consider ^8Be, with two valence neutrons and protons in the p shell (see Fig. <ref>). The ^8Be ground state has J=M=0. Since there are 12 single-particle states, we need 12 qubits in the Jordan-Wigner mapping to
encode all the possible fermionic excitations. The dimension of the Hilbert space in this computational basis is thus 2^12. After an arbitrary equipartition, the resulting spaces of the subsystems would have a dimension of 2^6. However, to reach the maximal entropy S_max between two partitions A and B, one must actually be able to build
a state of the form
|Ψ⟩_shell = ∑_i^2^S_max c_i |ψ_i⟩_A ⊗|ϕ_i⟩_B ,
according to the Schmidt decomposition <cit.>. This, however, is not possible for S_max=6 in ^8Be, due to proton and neutron number conservation and the constraint M=0.
Therefore, the dimension of this truncated space after a partition is only an upper bound for the von Neumann entropy, and is in fact unreachable. A better bound would be the dimension of the many-body basis when considering the modes in the partition.
We can
find all possible product states by running through each possible M_Z∈{ -2,-1,0,1,2} value for protons, and pairing them with any of the neutron states with opposite M_N=-M_Z. In the proton-neutron partition, there are 15 possible ways to arrange the 2 protons in ^8Be, and we can pair each of these with the neutron state that mirrors the occupations over the sign of M to form a state of the form in Eq. (<ref>). Constructing a state with higher entropy is not possible because there are no more elements of the basis, so S=log_2(15)=3.9<6 is the maximum limit.
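The counting argument for ^8Be can be reproduced with a few lines of code: enumerating the two-proton configurations of the six p-shell orbitals gives the 15 product states and the bound log_2(15) ≈ 3.9, well below the naive value of 6.
from itertools import combinations
from math import log2

proton_states = list(combinations(range(6), 2))   # two protons in the six p-shell orbitals
print(len(proton_states))                          # 15 product states
print(log2(len(proton_states)))                    # ~3.9, the bound quoted above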
An additional caveat must be considered when looking at general bipartitions. Let us consider the proton-neutron partition again, in a nucleus with more neutrons than protons below the half-filling of the valence space, as for example ^10Be or ^22Ne. Clearly, the neutron many-body basis has a bigger dimension and conditions which of the two partitions limits the entropy, S. For general bipartitions, however, there is no guarantee that
the smallest dimension of the two bipartitions limits the maximum entropy. Let us illustrate this with the bipartition of orbitals with opposite m for ^8Be, which seems to maintain the symmetry across protons and neutrons. We can find combinations where two states on the m>0 partition ([1,7,10] and [4,7,10], following the numbering of the p shell in Fig. <ref>) can only be paired with one basis element in the other one ([3]), meaning that they can not contribute to S as two separate elements of Eq. (<ref>).
In principle, straightforward algorithmic efforts to check all combinations for a given bipartition are unreachable by classical computation due to their exponential scaling. A more naive approach is doing the pairing only for a few examples. Even if this is non-exhaustive, it already provides a lower bound for the maximal entropy at a small computational cost. This is enough to decide whether the entropy of that partition is small in relative terms - it can only become smaller by saturating the bound - and we can do so for all partitions.
In addition, we systematically check that for the most characteristic proton-neutron partition
these bounds are actually satisfied.
In the following section, we highlight two characteristic partitions based on physical intuition. First, we look into a proton-neutron partition. Moreover, we also discuss bipartitions formed by states with opposite values of m. In both cases, we can compute the corresponding entropy bounds and compare whether shell-model simulations are close to saturating them.
§ RESULTS
We study the entanglement properties of selected beryllium, oxygen, neon and calcium nuclei, all of which have
an even number of nucleons. The ground states of all
these nuclei have J=M=0, as defined in Eq. (<ref>). We use this symmetry to build the many-body basis, including only Slater determinants with M=0.
We employ the Cohen-Kurath interaction in the p shell <cit.>, USDB in the sd shell <cit.> and KB3G in the pf shell <cit.>.
§.§ Single-orbital entanglement
We start our discussion quantifying the single-particle
entanglement in different isotopes.
The single-orbital entanglement entropy S_i, defined in Eq. (<ref>), is a direct reflection of the single-orbital occupation number. In turn, this is intertwined with the subshell energy structure and the number of valence nucleons.
The entanglement between two sets of modes depends in the first place on the single-orbital entanglement, which becomes the dominant factor whenever there is a large energy difference between subshells, i.e., at the so-called subshell closures. For instance, in the pf shell, N=28 is a magic number. As discussed above, completely occupied or empty states are directly linked to near-zero single-particle entropies.
Consequently, isotopes with a number of neutrons equal to the number of orbitals in the lowest-energy subshells may have less entanglement entropy, while those
with half-filled subshells have much more potential to be entangled. Let us provide an illustrative example using Ca isotopes.
In ^44Ca, with 4 valence neutrons in the pf shell, the lowest and degenerate orbitals of the 0f_7/2 subshell have an occupation number γ_0f_7/2=0.477 and, in consequence, almost maximal single-orbital entanglement S_0f_7/2=0.998. In contrast, the modes in the remaining subshells,
1p_3/2, 1p_1/2 and 0f_5/2,
are mostly empty, with occupations γ_1p_3/2=0.023, γ_1p_1/2=0.013, γ_0f_5/2=0.022. All these modes have low single-particle entropies, S_i<0.2.
These entanglement properties are in stark contrast to those of ^50Ca, which has 6 more neutrons. Here, the orbital occupations for each subshell are γ_0f_7/2=0.972, γ_1p_3/2=0.465, γ_1p_1/2=0.091, and γ_0f_5/2=0.031. The 0f_7/2 single-particle entropy is now substantially lower than in ^44Ca, with S_0f_7/2=0.184, whereas the 1p_3/2 states are almost maximally entangled, with S_1p_3/2=0.996.
That is, the 1p_3/2 subshell shows the largest entanglement, as expected from a naive filling of the pf shell. Actually, N=32 is also a magic number in Ca <cit.>.
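These values follow directly from the occupation numbers: for a single fermionic mode with occupation γ, the reduced density matrix is diag(1-γ, γ), so S_i = -γ log_2 γ - (1-γ) log_2(1-γ). A minimal sketch reproducing the numbers quoted above:

from math import log2

def single_orbital_entropy(gamma):
    # von Neumann entropy (in bits) of one mode with occupation gamma
    if gamma in (0.0, 1.0):
        return 0.0
    return -(gamma * log2(gamma) + (1.0 - gamma) * log2(1.0 - gamma))

# occupations quoted above for 44Ca and 50Ca
for gamma in (0.477, 0.023, 0.972, 0.465):
    print(f"gamma = {gamma:.3f}  ->  S_i = {single_orbital_entropy(gamma):.3f}")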
We observe similar patterns for all the nuclei studied in this work. Table <ref> lists the corresponding single-particle entropies of different states for all isotopes. All the results point to maximal single-particle entropies appearing in mid-subshell isotopes.
Single-orbital entropy sets a bound for how much orbitals in a particular subshell can contribute to multi-orbital entanglement. Let us stress that while S_i provides a measure of how much an orbital is entangled with the rest of the modes, it does not specify with which part of the nucleus it is entangled, nor does it distinguish between single-particle states in each subshell with different angular-momentum projections, m.
§.§ Mutual information
A more general picture of the entanglement structure of the nucleus is given by the mutual information matrix, S_ij. Figure <ref> shows the mutual information between all pairs of single-particle orbitals, (i,j), for ^8,10,12Be.
The leftmost panel illustrates the structure of the mutual information matrix by explicitly labelling each orbital with the convention shown in Fig. <ref>.
We organise the orbitals in neutron and proton blocks,
with black solid lines
separating proton-proton (bottom-left), proton-neutron (top-left), neutron-proton (bottom-right),
and neutron-neutron (top-right) correlations.
Proton-proton and neutron-neutron mutual information is colored in red and blue, respectively,
while the proton-neutron and neutron-proton sectors are shown in a purple colour scale.
The scale is the same for all isotopes and blocks within each isotope, with darker shades implying larger S_ij values.
The subshell structure of each proton-proton and neutron-neutron block is illustrated by black dashed lines which separate subshells. In the p shell, these correspond to the 0p_3/2 and 0p_1/2 subshells.
Finally, within each subshell,
the orbitals are sorted by the third component of angular momentum, m,
following the notation of Fig. <ref>.
The leftmost panel of Fig. <ref> corresponds to ^8Be, with 2 protons and 2 neutrons in
each of the 6-orbital valence spaces.
^8Be shows a relatively low mutual information in all orbitals, although the like-particle mutual information is more prominent than the corresponding neutron-proton values.
The central panel focuses on ^10Be, which has the largest neutron-neutron entanglement among the three isotopes.
In the rightmost panel, for ^12Be, neutrons completely fill the valence space and proton-neutron, neutron-proton,
and neutron-neutron entanglement is trivially zero.
Proton-proton entanglement grows with the neutron excess, though,
and the proton-proton sector of ^12Be shows the largest
mutual information values of all three isotopes. In contrast, proton-neutron entanglement is relatively low or zero for all three isotopes in comparison with the like-particle entanglement.
The mutual information results for both ^10Be and
^12Be show a prominent feature that is shared by many of the other nuclei we study. Specifically, we find that the mutual information is largest for orbitals with opposite angular-momentum
projection m.
These orbitals correspond to the diagonals in each subshell within the proton-proton and neutron-neutron sectors. We find that these diagonals are notably darker than the rest of the matrix, indicating larger entanglement among these specific partitions.
The patterns that we have identified so far, namely the relation of S_ij with occupation numbers;
the relatively large mutual information among orbitals with opposite m;
and the increasing proton-proton entanglement with neutron excess, are
even more evident in neon isotopes.
Figure <ref> shows the mutual information for
^20-28Ne,
using the same structure explained in the first panel of Fig. <ref>.
The values of S_ij for each isotope can again be roughly understood in terms of a naive filling of the three subshells in the sd shell, 0d_5/2, 1s_1/2, and 0d_3/2,
with 6, 2 and 4 single-particle orbitals, respectively.
For ^20Ne (leftmost panel), the largest neutron-neutron correlations appear in the lowest subshell, while for ^28Ne (rightmost panel),
with the two lowest subshells mostly full, the largest
neutron-neutron mutual information is among the 0d_3/2 states.
Just as in beryllium, proton-proton entanglement in neon also
increases notably with neutron number. Indeed, Figure <ref> shows that the bottom-left blocks, corresponding to proton 0d_5/2 states, become darker as the number of valence neutrons increases.
Similarly to what was observed in beryllium isotopes, proton-neutron correlations in neon are almost negligible in comparison with like-particle correlations.
Within each subshell, neon isotopes present the largest correlation among orbitals with opposite m, for both the proton-proton and neutron-neutron sectors.
The only exception is the 0d_3/2 neutron subshell in ^28Ne (top right panel), where all orbitals present relatively similar and large entanglement.
We continue our analysis by focusing on two additional isotopic chains.
Figure <ref> shows the mutual information
for ^18-26O (top row) and ^42-50Ca (bottom row). These nuclei contain only valence neutrons in the sd and pf shells, respectively.
In these cases, the entanglement of the opposite-m partitions is even more clear than for beryllium and neon, as shown by the strong diagonals appearing in each subshell block. These diagonals clearly stand out above the rest of the correlations.
As discussed earlier, the entanglement in each subshell depends strongly on the
number of valence neutrons.
For the lightest oxygen isotopes, ^18O and ^20O, with 2 and 4 valence neutrons,
the 6 orbitals in the
0d_5/2 shell are roughly half filled and present large orbital-orbital entanglement.
Likewise, the 0f_7/2 orbitals in
^42Ca, ^44Ca, and ^46Ca, show substantial mutual information.
In contrast, ^24O has the 0d_5/2 and 1s_1/2 subshells mostly filled, and the remaining valence orbitals are mostly empty. Consequently, the mutual information across all orbitals is small.
Equivalently, ^48Ca, with 8 valence neutrons, has a mostly full 0f_7/2 subshell. ^24O and ^48Ca thus present
low mutual information in all subshells, as expected from the single-orbital entanglement values S_i in Table <ref>. In addition, we observe that ^48Ca shows the lowest mutual information computed in this work, in agreement with the nuclear-structure viewpoint as this nucleus is doubly magic.
The heaviest isotopes studied in these two isotopic chains,
^26O and ^50Ca, correspond again
to half-full subshells in a naive shell model ordering.
For ^26O, this is the 0d_3/2 subshell (top-right block), whereas for ^50Ca
it is the 1p_3/2 subshell (second antidiagonal block).
Similarly to ^28Ne, these two isotopes present large mutual information across the whole half-filled j=3/2 subshell (see the darker blocks in the two rightmost panels of Fig. <ref>).
Overall, the mutual information shows two important features.
First, entanglement is largest between orbitals with opposite m.
This is to be expected from nucleon-nucleon pairing correlations, as the interaction enhances the formation of isovector nucleon pairs which are coupled to total J_12=0, or equivalently, m_1+m_2=0 <cit.>. Previous studies including pairing correlations in quantum simulations have been performed on the Agassi model <cit.>.
Second, the entanglement between proton and neutron orbitals is notably low in comparison with like-particle orbitals, as previously observed in Ref. <cit.>.
Furthermore, as the number of excess neutrons increases, proton-neutron entanglement
diminishes while protons become more entangled among themselves.
We only observe subtle hints of proton-neutron entanglement in cases with nearly the same number of protons and neutrons <cit.>, such as ^8Be or ^20Ne.
§.§ Equipartition entanglement
The mutual information studied in section <ref> provides a global picture of the entanglement structure of different nuclei.
This analysis, however, is restricted to local, orbital-orbital correlations.
To understand whether these features translate into low or high entanglement among
all proton and neutron orbitals (S_pn), or between all m<0 and all m>0 modes (S_m), we additionally compute
the von Neumann entropies for these two specific equipartitions.
Table <ref> collects the values of S_pn and S_m for all nuclei studied in this work.
We compare these entanglement measures to their potential maximum values determined by the Fock subspace, as discussed in Sec. <ref>, and show the relative entropies S_pn/S_pn^(max) and S_m/S_m^(max)
in parentheses.
We find S_pn<2 for all beryllium and neon isotopes,
which corresponds to less than half of the maximum bound for S_pn.
The entanglement between all proton and neutron orbitals is indeed low, both in absolute and relative value, compared to the corresponding maximum. Importantly, this proton-neutron entanglement measure decreases with neutron excess.
In contrast, the values of S_m are relatively
large for all nuclei. In particular, for light nuclei, S_m is close to saturating the bound.
This is to be expected from the mutual information values of Figs. <ref>,
<ref>, and <ref>. The isotopic dependence of S_m is richer than that of S_pn. In particular, it reflects the corresponding subshell closures of isotopes like ^22O, ^24O and ^48Ca.
It is also interesting to quantify how the entanglement of the proton-neutron partition compares to the entanglement of all the other partitions.
To this end, we compute the entropies for all possible equipartitions for ^8Be and ^10Be,
consisting of 12 single-particle orbitals in the p shell.
This implies a total of
(1/2)·binom(12, 6) = 462
equipartitions for these isotopes.
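The counting of equipartitions, and the random sampling used later for the larger valence spaces, amount to a couple of lines (a sketch):

from math import comb
import random

print(comb(12, 6) // 2)     # 462: equipartitions of 12 orbitals (p shell; sd-shell neutrons)
print(comb(24, 12) // 2)    # 1352078 ~ 1.4e6: equipartitions of the 24 sd-shell orbitals

# one uniformly random equipartition of 24 orbitals (as used for the 1% sample below)
half = frozenset(random.sample(range(24), 12))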
Figure <ref> shows a histogram representing the distribution of all the von Neumann entropies associated to all these partitions.
There are several remarkable properties in this plot that happen to be relatively robust across all the other isotopic chains.
The equipartition histogram is asymmetric, akin to a skewed normal distribution, with a sharp decay past the maximum.
We show the bin corresponding to S_pn in a different colour (blue), to highlight the fact that this is the lowest
of all possible equipartition entropies in both nuclei.
We also emphasize the partition of m<0 and m>0 orbitals, using a red histogram bar in the two panels of Fig. <ref>.
For ^8Be, as discussed in Sec. <ref>, the maximum possible entropy is S ≈ 3.9.
The von Neumann entropy for the opposite m partition falls, for the two isotopes, in the bar at the very right of the histogram. This indicates that the opposite m partition presents
almost maximal entanglement.
Finally, we find a significant isotopic dependence on the von Neumann entropy distribution of equipartitions. We find a general shift when going from
^8Be (top panel)
to ^10Be (bottom panel).
We note that this difference is unique to
equipartition entanglement. It is not, for instance,
observed in the mutual information plots of Fig. <ref>,
where S_ij is larger for ^10Be than for ^8Be.
In fact, if we compute the average values of S_ij,
with i≠ j, we obtain ⟨ S⟩_ij=0.029 for ^8Be and ⟨ S⟩_ij=0.043 for ^10Be.
These are in contrast to the mean entropies obtained from the average of the data in
Fig. <ref>. These are reported
in column 4 of Table <ref>.
We indeed find that the average entropy decreases
from a value of S=3.84 in ^8Be
to S=3.04 in ^10Be.
We conclude that
a nucleus can have more entanglement localized in specific orbitals than another one, and yet have an overall smaller multi-orbital entanglement.
Figure <ref> shows the corresponding equipartition entropy distributions for the oxygen isotopic chain, from ^18O to ^26O.
In this case the valence spaces consist of only neutron orbitals, so there is no proton-neutron partition.
There is a total of (1/2)·binom(12, 6) = 462 available equipartitions.
The oxygen entropy distributions present more structure than those of beryllium.
For ^18O (top panel), the distribution has some gaps, and the largest entropy bin is also the most populated.
^20O has the largest equipartition entanglement, as measured by the mean value reported in Table <ref>. It also has the broadest distribution, as quantified by means of the standard deviation, shown in column 5 of Table <ref>. The largest standard deviation across the oxygen isotopic chain is indeed the one associated with ^20O.
As the neutron number increases past ^20O, the distribution changes in shape and structure. The mean entropy decreases for ^22O and is at its lowest in ^24O. This is consistent with the top panels of Fig. <ref>,
and expected from the 0d_5/2 and 1s_1/2 subshell closures.
Moreover, the distributions for these isotopes have a significantly lower standard deviation.
Beyond the subshell closure, ^26O shows a broader distribution, with a two-peak structure and an overall larger mean.
In oxygen, the particular equipartitions corresponding to m<0 and m>0 orbitals show, again, an almost maximal von Neumann entropy. This can be clearly identified in the plots, where the red bars tend to appear at the right of the histograms.
To study whether the proton-neutron partition provides the lowest entanglement
among all equipartitions in the sd shell, we show in Fig. <ref> the von Neumann entropy distributions for ^20-28Ne.
Neon isotopes have a total of 24 orbitals, resulting in a total of
(1/2)·binom(24, 12) ≈ 1.4·10^6
equipartitions.
We take a random sample of 1% of all these equipartitions to generate the results in the figure.
As in Fig. <ref>, the proton-neutron
entropy, marked in blue,
appears well separated from the rest of the distribution in all neon isotopes.
This highlights again the uniqueness of this partition.
The overall shape of the distribution for neon isotopes is more reminiscent of a Gaussian, although with a sharp cutoff at high entropies.
The mean and the standard deviation of the distribution increases when going from ^20Ne to ^22Ne, just as it did with the oxygen isotopes (see Table <ref>). Beyond this point, the mean of the distribution steadily decreases with neutron number, even past the 0d_5/2 subshell closure.
In fact, the distributions for ^20Ne (top panel) and ^28Ne (bottom panel) barely overlap.
In each of these isotopes, the equipartition between opposite m orbitals belongs to a histogram (marked in red) that
falls roughly at the peak of the distribution and closely follows the corresponding means.
This is in stark contrast to the entropy of this very same partition for beryllium and oxygen, where the same equipartition sat close to or at the bin with highest entropy.
For brevity, we do not report the distributions of calcium isotopes. These follow a shape similar to that of the oxygen isotopes. They also present gaps in the spectrum, and a shift in their mean entropy in correspondence with the bottom panels of Fig. <ref>. In this case, the maximum mean entropy peaks at ^44Ca, with S=2.87. This isotope also presents the broadest distribution, with σ_s=0.47. On the other hand, ^48Ca has the lowest mean entropy and standard deviation, with
S=0.91 and
σ_s=0.06.
Let us finally discuss the overall behaviour of the means and standard deviations of these von Neumann entropy
distributions.
These are presented in the fourth and fifth
columns of Table <ref>, including also values for the calcium isotopic chain.
These statistical metrics provide a quantitative measure of non-local entanglement for each nucleus.
In general, we find that these numbers correlate with the results we discuss in previous subsections, and with general nuclear structure wisdom.
Beryllium isotopes, in the p shell, show a decrease of von Neumann mean entropy and standard deviation with neutron excess.
For semimagic oxygen and calcium, with closed-shell protons, the mean von Neumann equipartition entropy is largest in mid-subshell isotopes like ^20O and ^44Ca. As the neutron number increases past this point, the von Neumann entropy decreases as it reaches the corresponding subshell closure isotopes, ^22O, ^24O and ^48Ca, being minimum in the last two nuclei. This is not only true for the central values, but also for the standard deviations, which peak around the midshell maximum and are the smallest in the corresponding subshell closures.
These subshell structures are more difficult to ascertain in the neon isotopic chain. This is naively expected from a nuclear structure point of view, since correlations smear the corresponding neutron and proton single-particle structures.
§ CONCLUSIONS
In this work, we analyze entanglement features in the nuclear shell model, with focus on Be, O, Ne and Ca isotopes. We use different metrics to quantify the importance of entanglement, including single-orbital entropies, orbital-orbital mutual information, and the von Neumann entropies between two equipartitions of the valence space.
In all cases, we find that the entanglement properties are sensitive to the nuclear structure and depend, in some cases strongly, on the (valence) neutron and proton numbers.
Nonetheless, different entanglement metrics reflect different correlation features within the system.
Single-orbital entanglement depends
strongly on the energy, angular momentum, and isospin of the corresponding orbitals.
It is mainly a reflection of the evolution of the single-particle occupation numbers, which is relatively well understood based on nuclear structure insights.
Orbitals with either very small or very large occupation numbers, however, can only have a limited contribution to many-orbital entanglement, as computed
with the mutual information or equipartition entropies. This is consistent with the discussion in section <ref> on how the allowed many-body states limit the construction of states following Eq. <ref>.
In general, we find that mutual information gives a good overall picture of the entanglement
structure. Mutual information displays two key explicit features across the p, sd and pf shells. First, there is an extremely low proton-neutron entanglement, compared to like-particle entanglement. Second, the proton-proton and neutron-neutron pairs with the largest mutual information are those with the same single-particle energy, but opposite third-component of the total angular momentum, m.
These two features are not unique to the mutual information metric, but turn out to be relatively generic. We see these reflected, for instance, in the distribution of von Neumann entropies corresponding to all the possible equipartitions in the system. In all cases studied so far, we find that the proton-neutron partition presents the lowest entanglement.
Moreover, we find that, for all available measures, the proton-neutron entanglement decreases with neutron excess.
This indicates that, in order to simulate separately two halves of the valence space, the optimal choice
is to split this space in terms of the isospin projection, t_z.
This is in agreement with and extends previous findings <cit.>.
Opposite-m partitions, in contrast, are close to the maximum allowed entropies. For most of the isotopes studied here, we find that the entropy of the opposite-m partition exceeds 50% of the maximum bound imposed by the dimension of the Fock space.
These results showcase future possible avenues of work.
First, on the nuclear structure side, these very same techniques could be employed for odd nuclei, whose nuclear structure is not so much driven by nuclear pairing compared to even-even systems. It would also be interesting to analyze the nuclear structure of the same nuclei studied in this work within the no-core shell model, and within shell-model calculations based on ab-initio effective Hamiltonians. It would be particularly interesting to see if, and how, entanglement measures can identify the appearance of a core and a valence space. In addition, a comparison to the findings of the present work would illuminate the structure of the additional nuclear correlations captured by ab-initio frameworks.
Second, on the entanglement quantification front,
one may use other entanglement measures, like n-tangles, to give a further insight into the topic. This is particularly relevant in relation to multipartite entanglement in fermionic systems <cit.>.
Finally, it would be interesting to exploit our findings in practical circuit simulations, a task we aim to undertake in the near future. In particular, we plan to exploit low entanglement partitions to build independent quantum circuits that allow for accurate, yet less resource-intensive, results.
Such concrete circuit proposals may also lead to new performance comparisons between different fermionic encodings, as some partitions may only be unambiguously possible in specific encodings. More interestingly, they may pave the way for more efficient circuit designs to study atomic nuclei across the nuclear chart with quantum simulations.
This work is financially supported by
the Ministry of Economic Affairs and Digital Transformation of the Spanish Government through the QUANTUM ENIA project call - Quantum Spain project,
by the European Union through the Recovery, Transformation and Resilience Plan - NextGenerationEU within the framework of the Digital Spain 2026 Agenda,
by grants PID2020-118758GB-I00 and PID2020-114626GB-I00
funded by MCIN/AEI/10.13039/501100011033;
by the “Ramón y Cajal” grants RYC-2017-22781 and RYC2018-026072 funded by MCIN/AEI/10.13039/501100011033 and FSE “El FSE invierte en tu futuro”;
by the “Unit of Excellence María de Maeztu 2020-2023” award to the Institute of Cosmos Sciences, Grant CEX2019-000918-M funded by MCIN/AEI/10.13039/501100011033;
and by the Generalitat de Catalunya, grant 2021SGR01095.
|
http://arxiv.org/abs/2307.07267v1 | 20230714104634 | Random Wheeler Automata | [
"Ruben Becker",
"Davide Cenzato",
"Sung-Hwan Kim",
"Bojana Kodric",
"Riccardo Maso",
"Nicola Prezza"
] | cs.DS | [
"cs.DS"
] |
Random Wheeler Automata
Ruben Becker, Davide Cenzato, Sung-Hwan Kim, Bojana Kodric, Riccardo Maso, Nicola Prezza
July 14, 2023
====================================================================================
Wheeler automata were introduced in 2017 as a tool to generalize existing indexing and compression techniques based on the Burrows-Wheeler transform.
Intuitively, an automaton is said to be Wheeler if there exists a total order on its states reflecting the natural co-lexicographic order of the strings labeling the automaton's paths; this property makes it possible to represent the automaton's topology in a constant number of bits per transition, as well as efficiently solving pattern matching queries on its accepted regular language.
After their introduction, Wheeler automata have been the subject of a prolific line of research, both from the algorithmic and language-theoretic points of view. A recurring issue faced in these studies is the lack of large datasets of Wheeler automata on which the developed algorithms and theories could be tested.
One possible way to overcome this issue is to generate random Wheeler automata.
Motivated by this observation of practical nature, in this paper we initiate the theoretical study of random Wheeler automata, focusing our attention on the deterministic case (Wheeler DFAs — WDFAs). We start by naturally extending the Erdős-Rényi random graph model to WDFAs, and proceed by providing an algorithm generating uniform WDFAs according to this model.
Our algorithm generates a uniform WDFA with
n states, m transitions, and alphabet of cardinality σ
in O(m) expected time (O(mlog m) worst-case time w.h.p.) and constant working space for all alphabets of size σ≤ m/ln m.
The output WDFA is streamed directly to the output.
As a by-product, we also give formulas for the number of distinct WDFAs and obtain that nσ + (n - σ) logσ bits are necessary and sufficient to encode a WDFA with n states and alphabet of size σ, up to an additive Θ(n) term.
We present an implementation of our algorithm and show that it is extremely fast in practice, with a throughput of over 8 million transitions per second.
§ INTRODUCTION
Wheeler automata were introduced by Gagie et al. in <cit.> in an attempt to unify existing indexing and compression techniques based on the Burrows-Wheeler transform <cit.>. An automaton is said to be Wheeler if there exists a total order of its states such that (i) states reached by transitions bearing different labels are sorted according to the underlying total alphabet's order, and (ii) states reached by transitions bearing the same label are sorted according to their predecessors (i.e. the order propagates forward, following pairs of equally-labeled transitions).
Equivalently, these axioms imply that states are sorted according to the co-lexicographic order of the strings labeling the automaton's paths.
Since their introduction, Wheeler automata have been the subject of a prolific line of research, both from the algorithmic <cit.> and language-theoretic <cit.> points of view.
The reason for the success of Wheeler automata lies in the fact that their total state order enables simultaneously to index the automaton for pattern matching queries and to represent the automaton's topology using just O(1) bits per transition (as opposed to the general case, requiring a logarithmic number of bits per transition).
A recurring issue faced in research works on Wheeler automata is the lack of datasets of (large) Wheeler automata on which the developed algorithms and theories could be tested. As customary in these cases, a viable solution to this issue is to randomly generate the desired combinatorial structure, following a suitable distribution. The most natural distribution, the uniform one, represents a good choice in several contexts and can be used as a starting point to shed light on the combinatorial objects under consideration; the case of random graphs generated using the Erdős-Rényi random graph model <cit.> is an illuminating example.
In the case of Wheeler automata, we are aware of only one work addressing their random generation: the WGT suite <cit.>. This random generator, however, does not guarantee a uniform distribution over the set of all Wheeler automata.
§.§ Our contributions
Motivated by the lack of formal results in this area, in this paper we initiate the theoretical study of random Wheeler automata,
focusing our attention on the algorithmic generation of uniform deterministic Wheeler DFAs (WDFAs).
We start by extending the Erdős-Rényi random graph model
to WDFAs: our uniform distribution is defined over the set 𝒟_n, m, σ of all Wheeler DFAs over the effective alphabet (i.e. all labels appear on some edge) [σ] = {1,…, σ},
with n states [n], m transitions, and Wheeler order 1 < 2 < … < n.
We observe that, since any WDFA can be encoded using O(nσ) bits <cit.>, the cardinality of 𝒟_n, m, σ is at most 2^O(nσ). On the other hand, the number of DFAs with n states over alphabet of size σ is 2^Θ(nσlog n) <cit.>. As a result, a simple rejection sampling strategy that uniformly generates DFAs until finding a WDFA (checking the Wheeler property takes linear time on DFAs <cit.>) would take expected exponential time to terminate.
To improve over this naive solution, we start by defining a new combinatorial characterization of WDFAs: in Section <ref>, we establish a bijection that associates every element of 𝒟_n, m, σ to a pair formed by a binary matrix and a binary vector.
This allows us to design an algorithm to uniformly sample WDFAs, based on the above-mentioned representation.
Remarkably, our sampler uses constant working space and streams the sampled WDFA directly to output:
There is an algorithm to generate a uniform WDFA from 𝒟_n, m, σ in O(m) expected time (O(mlog m) worst-case time with high probability) using O(1) words of working space, for all alphabets of size σ≤ m/ln m. The output WDFA is directly streamed to the output as a set of labeled edges.
As a by-product of our combinatorial characterization of WDFAs, in Theorem <ref>
we give an exact formula for the number |𝒟_n, m, σ| of distinct WDFAs with n nodes and m edges labeled from alphabet [σ] and in Theorem <ref> we give a tight asymptotic formula for the number |𝒟_n, σ| of distinct WDFAs with n nodes and any number of edges labeled from [σ], obtaining that nσ + (n - σ) logσ bits are necessary and sufficient to encode WDFAs from such a family up to an additive Θ(n) term.
We conclude by presenting an implementation of our algorithm, publicly available at <https://github.com/regindex/Wheeler-DFA-generation>, and showing that it is very fast in practice while using a negligible (constant) amount of working space.
§ PRELIMINARIES AND PROBLEM STATEMENT
ln x and log x indicate the natural logarithm and the logarithm in base 2 of x, respectively.
For an integer k∈ℕ^+, we let [k] denote the set of all integers from 1 to k. For a bit-vector x∈{0,1}^k, we denote with ‖x‖ = ∑_i∈ [k] x_i the L_1-norm of x, i.e., the number of set bits in x. For an integer ℓ≤ k, we denote with x[1:ℓ] the bit-vector (x_1, …, x_ℓ) consisting only of the first ℓ bits of x.
For a bit-matrix A∈{0, 1}^ℓ× k and a column index j∈ [k], we denote the j'th column of A by A_j
and the element at row i and column j as A_i,j. We let ‖A‖ = ∑_i∈ [ℓ], j∈ [k] A_i,j be the L_1,1-norm of A, which again counts the number of set bits in A.
For a bit-vector x∈{0,1}^k, we write rank(x, i) for the number of occurrences of 1 in x[1:i]. For completeness, we let rank(x, 0) = 0. We generalize this function also to matrices as follows. For a bit-matrix A∈{0, 1}^ℓ× k, we let rank(A, (i, j)) = ∑_r∈ [j-1] rank(A_r, ℓ) + rank(A_j, i). We sometimes write bit-vectors from {0,1}^k in string form, i.e., as a sequence of k bits.
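As an illustration, the two rank operations translate literally into code (a sketch; Python lists are 0-based, whereas the notation above is 1-based, and the matrix is passed as its list of columns):

def rank_vec(x, i):
    # number of 1-bits in x[1:i] (i may be 0)
    return sum(x[:i])

def rank_mat(A, i, j):
    # A = [A_1, ..., A_k], each column a 0/1 list of length l:
    # all 1-bits of columns 1..j-1, plus the 1-bits of column j up to row i
    l = len(A[0])
    return sum(rank_vec(A[r], l) for r in range(j - 1)) + rank_vec(A[j - 1], i)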
In this paper, we are concerned with deterministic finite automata.
A Determinisitic Finite Automaton (DFA) D is a triple (Q, Σ, δ) where Q=[n] is a finite set of n states with 1∈ Q being the source state, Σ=[σ] is the finite alphabet of size σ, and δ:Q×Σ→ Q is a transition function containing m transitions.
We omit to specify the final states of DFAs since they do not play a role in the context of our problem.
We use the shorthand δ_j(v) for δ(v, j). Furthermore, we write δ^out(v):={δ_j(v): j∈Σ} for the set of all out-neighbors of a state v∈ Q and δ^in(v):={u∈ Q: ∃ j∈Σ with v=δ_j(u)} for the set of all in-neighbors of v. We assume DFAs to have non-zero in-degree for exactly the non-source states, i.e., δ^in(v)≠∅ if and only if v>1; this choice simplifies our exposition and is not restrictive from the point of view of the languages accepted by such DFAs. We do not require the transition function δ to be complete; this choice is motivated by the fact that requiring completeness restricts the class of Wheeler DFAs <cit.>. Furthermore, we do not require DFAs to be connected; also this choice is customary as it allows, for instance, to use our WDFA sampler to empirically study properties such as connectivity phase transition thresholds.
We say that the alphabet Σ is effective if and only if (∀ j∈Σ)(∃ u,v∈ Q)(δ_j(u)=v), i.e. if every character of Σ labels at least one transition.
We assume that the alphabet Σ=[σ] is totally ordered according to the standard order among integers.
Wheeler DFAs constitute a special class of DFAs that can be stored compactly and indexed efficiently due to an underlying order on the states: the Wheeler order (see Definition <ref>). As said in Definition <ref>, in this paper, the states Q of an automaton D are represented by the integer set [n] for some positive integer n; note that in the following definition, we use the order on integers < to denote the Wheeler order on the states.
A Wheeler DFA (WDFA)
is a DFA D such that < is a Wheeler order, i.e. for a,a'∈Σ, u,v,u',v'∈ Q:
* If u' = δ_a(u), v' = δ_a'(v), and a ≺ a', then u' < v'.
* If u' = δ_a(u)≠δ_a(v) = v' and u < v, then u' < v'.
We note that the source axiom present in <cit.>, which requires that the source state is first in the order, vanishes in our case as the ordering < on the integers directly implies that the source state is ordered first. Notice that property (<ref>) in Definition <ref> implies that a WDFA is input-consistent, i.e., all in-going transitions to a given state have the same label.
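For concreteness, whether the integer order 1 < 2 < … < n is a Wheeler order can be checked naively in O(m^2) time by testing the two axioms on every pair of transitions (a sketch; the input is assumed to be deterministic, and transitions are triples (u, a, v) meaning δ_a(u) = v):

def is_wheeler(transitions):
    for (u, a, v) in transitions:
        for (u2, a2, v2) in transitions:
            if a < a2 and not v < v2:                          # axiom (1)
                return False
            if a == a2 and v != v2 and u < u2 and not v < v2:  # axiom (2)
                return False
    return True

# e.g. delta_1(1)=2, delta_1(3)=2, delta_2(2)=3 satisfies both axioms
print(is_wheeler([(1, 1, 2), (3, 1, 2), (2, 2, 3)]))           # True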
With 𝒟_n, m, σ we denote the set of all Wheeler DFAs with effective alphabet Σ=[σ], n states Q=[n], m transitions, and Wheeler order 1 < 2 < … < n.
Clearly, 𝒟_n, m, σ is a subset of the set 𝒜_n, m, σ of all finite (possibly non-deterministic) automata over the ordered alphabet [σ] with n states [n] and m transitions.
In this paper we investigate the following algorithmic problem:
For given n, m, and σ, generate an element from 𝒟_n, m, σ uniformly at random.
Note that, since in Definition <ref> we require 1 < 2 < … < n to be the Wheeler order, Problem <ref> is equivalent to that of uniformly generating pairs formed by a Wheeler DFA D and a valid Wheeler order for the states Q=[n] of D, not necessarily equal to the integer order 1<2<… < n.
Throughout the whole paper, we assume that n-1 ≤ m ≤ nσ and σ≤ n - 1 (due to input consistency), as otherwise 𝒟_n, m, σ=∅ and the problem is trivial.
§ AN ALGORITHM FOR UNIFORMLY GENERATING WDFAS
Our strategy towards solving Problem <ref> efficiently is to associate every element D from 𝒟_n, m, σ to exactly one pair (O, I) of elements from 𝒪_n, σ, m×ℐ_m, n (see Definition <ref> below) via a function r: 𝒟_n, m, σ→𝒪_n, σ, m×ℐ_m, n (“r” stands for representation). Formally, the two sets appearing in the co-domain of r are given in the following definition.
Let
𝒪_n, σ, m := { O ∈ {0,1}^n×σ : ‖O‖ = m and ‖O_j‖ ≥ 1 for all j∈ [σ] }, and
ℐ_m, n := { I ∈ {0,1}^m : ‖I‖ = n - 1 }.
The intuition behind the two sets 𝒪_n, σ, m and ℐ_m, n is straightforward: their elements encode the outgoing labels and the in-degrees of the automaton's states, respectively. In order to describe more precisely this intuition, let us fix an automaton D = (Q, δ, Σ) ∈𝒟_n, m, σ and consider its image r(D) = (O, I)∈𝒪_n, σ, m×ℐ_m, n (see Figures <ref> and <ref> for an illustration):
* The matrix O is an encoding of the labels of the out-transitions of D. A 1-bit in position O_u, j means that there is an out-going transition from state u labeled j.
Formally,
O_u, j :=
1 if ∃ v: v=δ_j(u)
0 otherwise.
* The vector I is a concise encoding of the in-degrees of all states. It is defined as
I := ( 1, 0, …, 0, 1, 0, …, 0, …, 1, 0, …, 0 ),
where the consecutive blocks 1, 0, …, 0 have lengths |δ^in(2)|, |δ^in(3)|, …, |δ^in(n)|,
i.e., for all states i other than the source (that has no in-transitions), the vector contains exactly one 1-bit followed by |δ^in(i)| - 1 0-bits.
Let us proceed with two remarks.
As ‖O‖=m there are m transitions in total. As ‖O_j‖≥ 1 for all j∈ [σ], the alphabet is effective, i.e., every character labels at least one transition.
The vector I does not encode the letter on which a transition is in-going to a given state. Notice however that as D is a WDFA all these transitions have to be labeled with the same letter and we can reconstruct this letter for a given I once we know the total number of transitions labeled with each letter. This is because property (<ref>) of Definition <ref> guarantees that the node order is such that the source state (that has no in-going transitions) is ordered first followed by nodes whose in-transitions are labeled with character 1, followed by nodes with in-transitions labeled with character 2, etc. The information on how many transitions are labeled with each character is carried by the matrix O for which r(D)=(O, I).
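To make the encoding concrete, here is a small sketch computing r(D) = (O, I) from a transition function (the names are ours; O is stored column-wise as a list of σ columns):

def encode(n, sigma, delta):
    # delta: dict mapping (u, a) -> v, with states in [n] and letters in [sigma]
    O = [[0] * n for _ in range(sigma)]
    indeg = [0] * (n + 1)
    for (u, a), v in delta.items():
        O[a - 1][u - 1] = 1          # out-transition of u labeled a
        indeg[v] += 1
    I = []
    for v in range(2, n + 1):        # one 1-bit followed by indeg(v)-1 0-bits
        I += [1] + [0] * (indeg[v] - 1)
    return O, I

# toy WDFA: delta_1(1)=2, delta_1(3)=2, delta_2(2)=3
O, I = encode(3, 2, {(1, 1): 2, (3, 1): 2, (2, 2): 3})
# O = [[1, 0, 1], [0, 1, 0]],  I = [1, 0, 1]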
Let (O, I) be a pair from the image of r, i.e, r(D)=(O, I) for some D. Then it will always be the case that I is contained in a subset ℐ_O of ℐ_m,n that can be defined as follows.
For a matrix O∈𝒪_n, σ, m, let
ℐ_O := { I∈ℐ_m,n : I_{1 + ∑_{k=1}^{j-1} ‖O_k‖} = 1 for all j∈ [σ] }.
Using our running example of Figure <ref>, the bits I_{1 + ∑_{k=1}^{j-1} ‖O_k‖} that we force to be equal to 1 are those highlighted in bold, i.e. I_1 and I_3:
noting that bits in I correspond to edges,
those bits correspond to the leftmost edge labeled with a given character j (for any j∈Σ).
This leads us to define the following subset of 𝒪_n, σ, m×ℐ_m,n:
ℛ_n, m, σ := {(O, I): O∈𝒪_n, σ, m and I∈ℐ_O}.
Based on the above definition, we can prove:
For any D ∈𝒟_n, m, σ, r(D) ∈ℛ_n, m, σ.
Note that the integers ∑_{k=1}^{j-1} ‖O_k‖ for j∈ [σ] correspond to the number of edges labeled with letters 1, …, j - 1, hence the positions 1 + ∑_{k=1}^{j-1} ‖O_k‖ correspond to a change of letter in the sorted (by destination node) list of edges. Recalling that WDFAs are input-consistent (i.e., all in-transitions of a given node carry the same label) and that nodes are ordered by their in-transition letters, positions 1 + ∑_{k=1}^{j-1} ‖O_k‖ for j∈ [σ] in I must necessarily correspond to the first edge of a node, hence they must contain a set bit.
The co-domain of the function r can thus be restricted, and the function's signature can be redefined, as follows: r:𝒟_n, m, σ→ℛ_n, m, σ.
After describing this association of a WDFA D∈𝒟_n, m, σ to a (unique) pair r(D)=(O, I) ∈ℛ_n, m, σ,
we will argue that
function r is indeed a bijection from 𝒟_n, m, σ to ℛ_n, m, σ. It will follow that one way of generating elements from 𝒟_n, m, σ is to generate elements from ℛ_n, m, σ: this will lead us to an efficient algorithm to uniformly sample WDFAs from 𝒟_n, m, σ, as well to a formula for the cardinality of 𝒟_n, m, σ.
§.§ The Basic WDFA Sampler
Our overall approach is to (1) uniformly sample a matrix O from 𝒪_n, σ, m using Algorithm <ref>, then (2) uniformly sample a vector I from ℐ_O using Algorithm <ref> with input O, and finally (3) build a WDFA D using O and I as input via Algorithm <ref>.
We summarize this procedure in Algorithm <ref>.
A crucial point in our correctness analysis (Section <ref>) will be to show that uniformly sampling from 𝒪_n, σ, m and ℐ_O does indeed lead to a uniform WDFA from 𝒟_n, m, σ (besides the bijectivity of r, intuitively, this is because |ℐ_O|=|ℐ_O'| for any O,O'∈𝒪_n, σ, m).
As source of randomness, our algorithm uses a black-box shuffler algorithm: given a bit-vector B∈{0,1}^*, function 𝚜𝚑𝚞𝚏𝚏𝚕𝚎(B) returns a random permutation of B.
To improve readability, in this subsection we start by describing a preliminary simplified version of our algorithm which does not assume any particular representation for the matrix-bit-vector pair (O,I) ∈ℛ_n, m, σ, nor a particular shuffling algorithm (for now, we only require the shuffling algorithm to permute uniformly its input).
By employing a particular sequential shuffler, in Subsection <ref> we then show that
we can generate a sparse representation of O and I on-the-fly, thereby achieving constant working space and linear expected running time.
Out-transition Matrix.
In order to sample the matrix O from 𝒪_n, σ, m, in addition to the function 𝚜𝚑𝚞𝚏𝚏𝚕𝚎 we assume a function reshape_{k, ℓ} that takes a vector x of dimension k·ℓ and outputs a matrix A of dimension k×ℓ whose j'th column A_j is the portion (x_{(j-1)·k + 1}, …, x_{j·k}) of x. The algorithm to uniformly generate O from 𝒪_n, σ, m then simply samples a bit vector of length nσ with exactly m 1-bits, shuffles it uniformly, reshapes it to be a matrix of dimension n×σ and repeats these steps until a matrix is found with at least one 1-bit in each column (rejection sampling).
Looking at the running example of Figures <ref> and <ref>, the shuffler is called as 𝚜𝚑𝚞𝚏𝚏𝚕𝚎(1^6 0^4). In this particular example, this bit-sequence is permuted as 0100110111 by function 𝚜𝚑𝚞𝚏𝚏𝚕𝚎. Function reshape_{n, σ} converts this bit-sequence into the matrix O depicted in Figure <ref>, left.
In-transition Vector.
In order to generate the vector I from ℐ_m, n, we proceed as follows. The algorithm takes O as input and generates a uniform random element from the set ℐ_O
by first creating a “mask” that is a vector of the correct length m and contains σ 1-bits at the points 1 + ∑_{k=1}^{j-1} ‖O_k‖ for j∈[σ]. These are the points in I where the character of the corresponding transition changes and hence, by the input-consistency condition, also the state has to change. The remaining m - σ positions in the mask are filled with the wildcard character #. We then give this mask vector as the first argument to a function that replaces the m - σ positions that contain the wildcard character # with the characters in the second argument (in order). Formally, the function takes two vectors as arguments a and b with the condition that a contains |b| times the # character and |a| - |b| times a 1-bit. The function then returns a vector c that satisfies c_i = 1 whenever a_i = 1 and c_i = b_{i - rank(a, i)} otherwise, i.e., when a_i=#.
Going back to our running example of Figures <ref> and <ref>, we have mask = 1#1### (that is, all bits but the bold ones in the right part of Figure <ref> are masked with a wildcard). The shuffler is called as 𝚜𝚑𝚞𝚏𝚏𝚕𝚎(1100) and, in this particular example, returns the shuffled bit-vector 0101. Finally, the replacement function described above is called on the pair (1#1###, 0101) and returns the bit-vector I=101101 depicted in the right part of Figure <ref>.
Building the WDFA.
After sampling O and I, the remaining step is to build the output DFA D. This is formalized in Algorithm <ref>. By iterating over all non-zero elements in O, we construct the transition function δ, as follows. The i'th non-zero entry in O corresponds to an in-transition at state rank(I, i) + 1 (we keep a counter named v corresponding to this rank). The origin state of this transition is the row in which we find the i'th 1 in O when reading the matrix column by column. The column itself corresponds to the letter with which this transition is labeled.
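A direct, unoptimized transcription of these three steps in Python may look as follows (function and variable names are illustrative and are not taken from the pseudocode; the output is the transition list of a WDFA from 𝒟_n, m, σ):

import random

def sample_wdfa(n, m, sigma):
    assert sigma <= n - 1 <= m <= n * sigma
    # Step 1: out-transition matrix O (stored as sigma columns of length n)
    while True:
        bits = [1] * m + [0] * (n * sigma - m)
        random.shuffle(bits)                                  # uniform permutation
        O = [bits[j * n:(j + 1) * n] for j in range(sigma)]   # reshape_{n, sigma}
        if all(sum(col) >= 1 for col in O):                   # every letter must be used
            break
    # Step 2: in-transition vector I = forced 1-bits plus shuffled wildcard bits
    forced, acc = set(), 0
    for j in range(sigma):                                    # 0-based forced positions
        forced.add(acc)
        acc += sum(O[j])
    free = [1] * (n - 1 - sigma) + [0] * (m - n + 1)
    random.shuffle(free)
    I, k = [], 0
    for pos in range(m):
        if pos in forced:
            I.append(1)
        else:
            I.append(free[k]); k += 1
    # Step 3: rebuild the WDFA from (O, I), scanning O column by column
    edges, v, i = [], 1, 0
    for j in range(sigma):
        for u in range(n):
            if O[j][u]:
                if I[i]:
                    v += 1                                    # next destination state
                edges.append(((u + 1, j + 1), v))
                i += 1
    return edges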
§.§ Constant-Space WDFA Sampler
Notice that our Algorithm <ref> accesses the matrix O and the bit-vector I in a sequential fashion: O is accessed column-wise
and I from its first to last position. Based on this observation, we now show how our WDFA sampler can be modified to use constant working space. The high-level idea is to generate on-the-fly the positions of non-zero entries of O and I in increasing order.
In order to achieve this, we employ the
sequential shuffler described in
<cit.>. Given two integers N and n, function 𝚒𝚗𝚒𝚝_𝚜𝚎𝚚𝚞𝚎𝚗𝚝𝚒𝚊𝚕_𝚜𝚑𝚞𝚏𝚏𝚕𝚎𝚛(N, n) returns an iterator S that can be used (with a stack-like interface) to extract n uniform integers without replacement from the set [N], in ascending order and using a constant number of words of working space (that is, the random integers are generated on-the-fly upon request, from the smallest to the largest). More specifically, function S.𝚙𝚘𝚙() returns the next sampled integer, while S.𝚎𝚖𝚙𝚝𝚢() returns true if and only if all n integers have been extracted.
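As a stand-in for this data structure, the same streaming interface can be emulated by classical selection sampling, which also uses O(1) working space but spends O(N) total time scanning the candidates instead of O(1) time per extracted item (a sketch; the name sequential_shuffler is ours):

import random

def sequential_shuffler(N, n):
    # yields n uniform integers without replacement from [N], in increasing order
    remaining = n
    for t in range(1, N + 1):
        if remaining == 0:
            return
        # take t with probability (#still needed) / (#candidates left)
        if random.random() < remaining / (N - t + 1):
            yield t
            remaining -= 1

Here advancing the generator plays the role of 𝚙𝚘𝚙(), and its exhaustion plays the role of 𝚎𝚖𝚙𝚝𝚢().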
The pseudocode of our constant-space algorithm and its detailed description can be found in Appendix <ref>, Algorithm <ref>. Below, we proceed with an example to introduce the reader to the overall idea behind the algorithm.
To understand how the sequential shuffler is used in Algorithm <ref>, refer again to the running example of Figures <ref> and <ref>. In Algorithm <ref> at Line <ref>,
the sequential shuffler S_O is initialized as S_O := 𝚒𝚗𝚒𝚝_𝚜𝚎𝚚𝚞𝚎𝚗𝚝𝚒𝚊𝚕_𝚜𝚑𝚞𝚏𝚏𝚕𝚎𝚛(nσ = 10, m=6), i.e. the iterator S_O returns 6 uniform integers without replacement from the set {1,2,…, 10}. In this particular example,
function () called on iterator S_O returns the following integers, in this order: 2,5,6,8,9,10. Using the formula at Line <ref> of Algorithm <ref>, these integers are converted to the matrix coordinates (2,1), (5,1), (1,2), (3,2), (4,2), (5,2), i.e. precisely the nonzero coordinates of matrix O in Figure <ref>, sorted first by column and then by row.
Using the same running example, the sequential shuffler S_I is initialized in Line <ref> of Algorithm <ref> as S_I := 𝚒𝚗𝚒𝚝_𝚜𝚎𝚚𝚞𝚎𝚗𝚝𝚒𝚊𝚕_𝚜𝚑𝚞𝚏𝚏𝚕𝚎𝚛(m-σ = 4,n-σ - 1 = 2), i.e. the iterator S_I returns two uniform integers without replacement from the set {1,2,3,4}. In this particular example,
function () called on iterator S_I returns the following integers, in this order: 2,4. Using the notation of the previous subsection, this sequence has the following interpretation: the 2-nd and 4-th occurrences of # of our mask 1#1### used in Algorithm <ref> have to be replaced with a bit 1, while the others with a bit 0. After this replacement, the mask becomes 101101, i.e. precisely bit-vector I of Figure <ref>.
At this point, the remaining components of Algorithm <ref> are devoted to simulate
Algorithm <ref>, using as input the two sequences of random pairs/integers extracted from S_O and S_I, respectively, as described above. As a matter of fact, the loops in lines 5-7 of Algorithm <ref> correspond precisely to extracting the pairs (u,j) from S_O, and the check at line 8 of Algorithm <ref>, together with the increment of i at Line 10, corresponds to extracting the integers i' from S_I. The rejection sampling mechanism (loop at Lines 1-3 of Algorithm <ref>) is simulated in Algorithm <ref>
by re-starting the algorithm whenever the column j of the current pair (u,j) is either larger by more than one unit than the column prev_j of the previously-extracted pair (i.e. ‖O_{prev_j+1}‖=0, Line <ref>), or if the last pair extracted from S_O is such that j is not the σ-th column (i.e. ‖O_σ‖=0, Line <ref>).
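The whole procedure can be sketched compactly as follows (again with illustrative names; the sequential_shuffler stand-in from the previous sketch replaces the constant-time data structure of the cited work, and each transition δ_j(u)=v is streamed to a file that is simply recreated upon rejection):

def stream_wdfa(n, m, sigma, path):
    while True:
        out = open(path, "w")                  # rejection = discard and restart
        S_O = sequential_shuffler(n * sigma, m)
        S_I = sequential_shuffler(m - sigma, n - sigma - 1)
        next_one = next(S_I, None)             # rank i' of the next wildcard set to 1
        v, i, prev_j, reject = 1, 0, 0, False
        for t in S_O:                          # positions of the 1-bits of O, sorted
            j = (t - 1) // n + 1               # column = letter (column-major order)
            u = (t - 1) % n + 1                # row = source state
            if j > prev_j + 1:                 # a column of O is empty: reject
                reject = True
                break
            if j == prev_j + 1:                # forced 1-bit of I: new letter
                v += 1
                prev_j = j
            else:                              # wildcard position of I
                i += 1
                if i == next_one:
                    v += 1
                    next_one = next(S_I, None)
            out.write(f"{u} {j} {v}\n")        # stream transition delta_j(u) = v
        out.close()
        if not reject and prev_j == sigma:
            return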
§ ANALYSIS
§.§ Correctness, Completeness and Uniformity
Since Algorithm <ref> is functionally equivalent to Algorithm <ref> (the only relevant difference between the two being the data structures employed to represent the matrix O and the bit-vector I), for ease of explanation in this section we focus on analyzing the
correctness (the algorithm generates only elements from 𝒟_n, m, σ), completeness (any element from 𝒟_n, m, σ can be generated by the algorithm) and uniformity (all D∈𝒟_n, m, σ have the same probability to be generated by the algorithm) of Algorithm <ref>.
These properties then automatically hold on Algorithm <ref> as well.
We start with a simple lemma. The lemma says the following: Assume that r(D)=(O, I) and O contains a 1 in position (u,j), meaning that there is a transition leaving state u, labeled with letter j. Then this out-transition corresponds to the i'th set bit of O (reading O column by column), where i = rank(O, (u, j)), and hence the entry in I corresponding to this transition can be found at I[i]. The state to which this transition is in-going is exactly the number of 1s in I up to this point, i.e., rank(I, i), plus one (the offset is due to the source having no in-transitions).
Let D∈𝒟_n, m, σ and let r(D)=(O, I). If O_u, j = 1 and rank(O, (u, j))=i, then δ(u, j)=rank(I, i) + 1.
First, notice that since O_u,j=1, it is clear that there is an outgoing transition from u labeled j. Furthermore, since rank(O, (u, j))=i, we know that this transition corresponds to the i-th entry in I. Now, by the definition of I, it follows that the destination state of the considered transition is v=rank(I,i)+1.
Algorithm <ref> is a deterministic algorithm and thus describes a function, say f, from the set of its possible inputs to the set of its possible outputs. The set of its possible inputs, i.e., the domain of f, is exactly ℛ_n, m, σ. The algorithm's output is certainly a finite automaton, i.e., the co-domain of f is 𝒜_n, m, σ. We will in fact show that the range of f is exactly 𝒟_n, m, σ. We will do so by showing that f is actually an inverse of r, more precisely we show that (1) r is surjective and (2) f is a left-inverse of r (and thus r is injective).
Surjectivity of r.
We start with proving that r is surjective.
It holds that r:𝒟_n, m, σ→ℛ_n, m, σ is surjective.
Fix an element (O, I)∈ℛ_n, m, σ, i.e., an O∈𝒪_n, σ, m and I∈ℐ_O. We now construct an automaton D=(Q, Σ, δ) and then show that r(D)=(O, I). We let Q=[n], Σ=[σ], and
δ = {((u, j), v): O_u, j = 1 and v = rank(I, rank(O, (u, j))) + 1}.
Let r(D)=(O', I') and let us proceed by showing that O=O' and I=I'. Recall the definition of r, see Equations (<ref>) and (<ref>). It is immediate that O'=O given the definition of O' and δ. In order to show that I'=I, first note that I'=∏_i=2^n10^|δ^in(i)|. Then, consider the following relation between I and δ^in(i) for any state i∈[n], which uses the definition of δ and Lemma <ref>:
|δ^in(i)|
= |{(u, j)∈ [n]×[σ] : O_u,j = 1 and rank(I, rank(O, (u, j))) + 1 = i}|
= |{k∈ [m] : k = rank(O, (u, j)) for some (u,j)∈ [n]×[σ] with O_u,j=1 and rank(I, k) + 1 = i}|
= max{k∈ [m] : rank(I, k)=i-1} - max{k∈ [m] : rank(I, k)=i-2}.
Now, recall that I∈ℐ_O, hence the first bit in I is 1. Using the previous equality, it now follows that the second 1-bit in I is at position |δ^in(2)| + 1. By using this equality another n-3 times, we obtain that the first ∑_{i=2}^{n-1} |δ^in(i)| + 1 positions of I are equal to (1 0^{|δ^in(2)|-1} 1 0^{|δ^in(3)|-1} … 1 0^{|δ^in(n-1)|-1}) 1 and thus agree with I'. It remains to observe that this portion of I already contains n-1 bits that are equal to 1 and thus the remaining bits have to be zero-bits as I∈ℐ_m,n. Hence, I = I' and this completes the proof.
Injectivity of r via Left-inverse f.
In order to establish that f is the inverse of r, it remains to prove that f is a left-inverse of r (which implies that r is injective).
The function f is a left-inverse of r, i.e., f(r(D))=D for any D∈𝒟_n, m, σ.
Let D=(Q, Σ, δ)∈𝒟_n, m, σ and let (O, I) = r(D). We have to show that D'=f(O, I), i.e., the automaton D'=(Q', Σ', δ') output by Algorithm <ref> on input (n, m, σ, O, I) is equal to D. Notice that clearly Q=Q'=[n] and Σ=Σ'=[σ]. It remains to show that δ=δ'. It is clear that Algorithm <ref> adds m transitions to δ', one in each of the m=‖O‖ iterations. It thus remains to prove that each such transition ((u, j), v) added in some iteration i is contained in δ. Firstly, as O_u, j=1 it is clear that D has an outgoing transition at state u with letter j; second, it is clear that the algorithm maintains the property that v=rank(I, i) + 1 and thus, due to Lemma <ref>, it holds that δ(u, j)=v, so this transition is also contained in δ.
We can thus denote the function f with r^-1.
Function r:𝒟_n, m, σ→ℛ_n, m, σ is bijective.
The above lemma has several consequences. First, it shows that the output of Algorithm <ref> is always a WDFA.
Second, as the function r is bijective, this
means that generating uniform pairs from the range of r results in a uniform distribution of WDFAs from 𝒟_n, m, σ.
Algorithm <ref> on input n, m, σ generates uniformly distributed WDFAs from 𝒟_n, m, σ.
In the light of r being a bijection and Algorithm <ref> implementing the function r^-1, it remains to argue that the two sampling statements of Algorithm <ref>, namely the statement that generates O on input (n, m, σ) and the statement that generates I on input O, in fact produce uniformly distributed pairs from the domain of r^-1, i.e., from ℛ_n, m, σ. It is clear that the former results in a uniformly distributed element O from 𝒪_n, σ, m and that the latter results in a uniformly distributed element I from ℐ_O. It thus remains to observe that |ℐ_O| is identical for all O∈𝒪_n, σ, m, namely |ℐ_O| = binom(m-σ, n-σ-1) for all O∈𝒪_n, σ, m. This completes the proof.
§.§ Run-time and Space
We now analyze the number of iterations of Algorithm <ref>, that is, the expected number of rejections before extracting a bit-matrix O with ‖O_j‖ ≥ 1 for all j∈[σ]. Algorithm <ref> is clearly equivalent to Algorithm <ref> also under this aspect, since at Lines <ref> and <ref> we reject and re-start the algorithm whenever we generate a column O_j without non-zero entries.
lemmaanalysis
Assume that m≥σln (e ·σ). The expected number of iterations of Algorithm <ref> (equivalently, rejections of Algorithm <ref>) is at most 1.6. Furthermore, the algorithm terminates after O(log m) iterations with probability at least 1 - m^-c for any constant c>0.
We defer the proof to Appendix <ref>. Now assume that σ≤ m / ln m. This implies that e ·σ≤ m (for m larger than a constant), which together with the initial assumption implies that σln (e ·σ) ≤σln m ≤ m. This is exactly the condition in
Lemma <ref>.
Hence, if σ≤ m/ln m then the expected number of rejections of Algorithm <ref> is O(1) (or O(log m) with high probability).
Our main Theorem <ref> follows from the fact that the sequential shuffler of <cit.> uses constant space, its functions 𝚙𝚘𝚙() and 𝚎𝚖𝚙𝚝𝚢() run in constant time, and the while loop at Line <ref> of Algorithm <ref> runs for at most m iterations (less only in case of rejection) every time Algorithm <ref> is executed.
§ COUNTING WHEELER DFAS
In this section, we use the WDFA characterization of Section <ref> to give an exact formula for the number |𝒟_n, m, σ| of WDFAs with n nodes and m edges on effective alphabet [σ] with Wheeler order 1<2<… < n (proofs of this section are deferred to Appendix <ref>). From our previous results, all we need to do is to compute the cardinalities of 𝒪_n,m,σ and ℐ_O. By an inclusion-exclusion argument, we obtain:
lemmacardinality
|𝒪_n,m,σ| = ∑_j=0^σ (-1)^j binom(σ, j) · binom(n(σ-j), m).
From Algorithm <ref>, it is moreover immediate to see that |ℐ_O| = binom(m-σ, n-σ-1) for all O ∈𝒪_n,m,σ (see also the proof of Lemma <ref>).
Since function r:𝒟_n, m, σ→ℛ_n, m, σ is bijective (Corollary <ref>), we finally obtain an exact formula for the cardinality of 𝒟_n, m, σ:
The number |𝒟_n, m, σ| of WDFAs with set of nodes [n] and m transitions labeled from the effective alphabet [σ], for which 1<2<… < n is a Wheeler order is
|𝒟_n, m, σ| = binom(m-σ, n-σ-1) · ∑_j=0^σ (-1)^j binom(σ, j) · binom(n(σ-j), m).
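The formula can be evaluated exactly with integer arithmetic; a minimal sketch (the function name is ours):

from math import comb

def num_wdfas(n, m, sigma):
    if not (sigma <= n - 1 <= m <= n * sigma):
        return 0
    inner = sum((-1) ** j * comb(sigma, j) * comb(n * (sigma - j), m)
                for j in range(sigma + 1))
    return comb(m - sigma, n - sigma - 1) * inner

print(num_wdfas(5, 6, 2))   # binom(4,2) * 210 = 1260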
Using similar techniques, in the case where σ is not arbitrarily close to n, i.e., σ ≤ (1-ε)·n for some constant ε > 0, we moreover obtain a tight formula for the logarithm of the cardinality of 𝒟_n,σ=⋃_m𝒟_n,m,σ, the set of all Wheeler DFAs with n states over effective alphabet [σ] and Wheeler order 1<2<…<n:
theoremWDFAbounds
The following bounds hold:
* log |𝒟_n, σ| ≥ nσ + (n - σ) logσ - (n + logσ), for any n and σ≤ n-1, and
* log |𝒟_n, σ|
≤ nσ + (n - σ) logσ + O(n), for any n ≥ 2/ε and σ ≤ (1 - ε)·n, where ε is any desired constant such that ε ∈ (0, 1/2].
Note that log |𝒟_n,σ| is the information-theoretic worst-case number of bits necessary (and sufficient) to encode a WDFA from 𝒟_n,σ. Our Theorem <ref> states that, up to an additive Θ(n) number of bits, this value is nσ + (n - σ) logσ bits. As a matter of fact, our encoding r(D)=(O,I) of Section <ref>, suitably represented using succinct bitvectors <cit.>, achieves this bound up to additive lower-order terms and supports efficient navigation of the transition relation. We will discuss such a data structure in an extended version of this paper.
§ IMPLEMENTATION
We implemented our uniform WDFA sampler and made it available at <https://github.com/regindex/Wheeler-DFA-generation>. We tested our implementation by generating WDFAs on a broad range of parameters, using 56
different combinations of n (number of states), m (number of transitions), and σ (alphabet's cardinality): n ∈{10^6 · 2^i: i=0,… ,6 }, m ∈{n · 2^i - 1: i=0,… ,7 }
and σ = 128.
In order to understand the impact that streaming to disk has on the running time of our algorithm, we tested two versions of our code: in the first case, we streamed the resulting Wheeler DFA to disk (SSD), while in the second we streamed it to a pre-allocated vector residing in internal memory.
Clearly, constant working space is achieved only in the first case.
Our experiments were run on a server with Intel(R) Xeon(R) W-2245 CPU @ 3.90GHz with 8 cores, 128 gigabytes of RAM, 512 gigabytes of SSD, running Ubuntu 18.04 LTS 64-bit. Working space was measured as the maximum resident set size (RSS).
Figure <ref> shows the running time of both variants (left: streaming to SSD; right: streaming to RAM). Both versions exhibit a linear running time behavior, albeit with a different multiplicative constant. The algorithm storing the WDFA in internal memory is between 1.2 and 1.7 times faster than the version streaming the WDFA to the disk (the relatively small difference is due to the fact that we used an SSD). In particular, we measured a throughput of at least 5,466,897 and 7,525,794 edges per second for the two variants, respectively.
It is also worth mentioning that in our experiments we never observed a rejection: this can be explained with the fact that we used an alphabet size σ≪ m (thereby making it extremely likely to generate bit-matrices O containing at least one set bit in each column).
As far as space usage is concerned, the version streaming the WDFA to disk always used about 4 MB of internal memory, independently from the input size (this memory is always required to load the libraries).
This confirms the constant space usage of our algorithm, also experimentally.
As expected, the space usage of the version that streams the WDFA to internal memory is linear with the input's size. Nevertheless, both algorithms are extremely fast in practice: in these experiments, the largest automaton consisting of 64 million states and more than 8 billion edges was generated in about 15 and 10 minutes with the first and second variant, respectively.
§ DEFERRED MATERIAL FOR SECTION <REF>
§.§ Algorithm <ref> description
We now describe Algorithm <ref>.
To understand how the algorithm works, it is useful to refer to the mask employed in Algorithm <ref>:
Algorithm <ref>
iterates, using variable i, over the ranks (i.e. i-th occurrence) of characters # (wildcards) in the mask.
Variable i', on the other hand, stores the rank of the next wildcard # that is replaced with a set bit by the shuffler; the values of i' are extracted from the shuffler S_I.
The idea is that, whenever i=i', we are looking at a bit set in bit-vector I (which here is not stored explicitly, unlike in Algorithm <ref>) and thus we have to move to the next destination state v. This procedure simulates exactly what is happening in Algorithm <ref> at Lines 8-10.
The iteration (column-wise) over all non-zero entries of matrix O is simulated by the extraction of values from the shuffler I_O (one value per iteration of the while loop at Line <ref>): each such value t extracted at Line <ref> is converted to a pair (u,j) at Line <ref>. Variables j and prev_j store the columns of the current and previously-extracted non-zero entries of O, respectively. If j>prev_j+1, then it means that column number prev_j+1 has been skipped by the shuffler, i.e. O_prev_j+1 does not contain non-zero entries. In this case, we reject and start the sampler from scratch (Line <ref>; note that we need to clear the output stream — for example, erase the output file — before re-initializing the algorithm). If, on the other hand, j=prev_j+1 (Line <ref>), then the current non-zero entry of O belongs to the next column with respect to the previously-extracted non-zero entry; this means that the character labeling incoming transitions changes and we need therefore to move to the next destination node by increasing v:=v+1 (Line <ref>). Note that in this case we do not increment i, since the new destination node v is the first having incoming label j and thus it does not correspond to a character # in the mask.
Variable i gets incremented only if j=prev_j: this happens at Line <ref>.
The other case in which we need to move to the next destination node (v:=v+1) is when j = prev_j and i=i' (Line <ref>). In such a case, in addition to incrementing v we also need to extract from the shuffler S_I the rank i' of the next mask character # that is replaced with a set bit (Line <ref>). After all these operations, we write the current transition ((u,j),v) to the output stream (Line <ref>).
The last two lines of Algorithm <ref> check if the last visited column of matrix O is indeed O_σ. If not, O_σ=0 and we need to reject and re-start the algorithm.
§ DEFERRED MATERIAL FOR SECTION <REF>
§.§ Deferred Proofs
Consider a fixed iteration of the repeat loop and a fixed column j∈ [σ] of the matrix O. Our goal is to bound the probability that O_j=0. Let us denote this event with E_j. Consider the following urn experiment, equivalent to our setting. An urn contains n white and (σ - 1)· n black balls, we draw m balls without replacement (in our case, the n white balls correspond to the n cells of O_j). Let F_i, for i∈ [m] denote the event that we do not draw a white ball in the i'th draw. It holds that [E_j]=[⋂_i∈ [m]F_i], the probability that we did not draw a white ball in any of the m trials (i.e. this is the probability that none of the cells of O_j is selected and gets a 1-bit, after inserting m 1-bits in O at random positions without replacement).
This probability can be expressed as
[⋂_i∈ [m]F_i]
= ∏_i∈ [m][F_i | ⋂_k = 1^i - 1 F_k ]
= ∏_i∈ [m](σ - 1) n - (i - 1)/σ n - (i - 1)
= ∏_i∈ [m]( 1 - n/nσ - (i - 1))
≤( 1 - 1/σ)^m≤( 1 - 1/σ)^σln (e·σ)≤1/(e·σ).
Now using a union bound over all σ columns, we get that the probability that there exists j∈[σ] such that O_j=0 is upper bounded by 1/e. It follows that the expected number of repetitions until it holds that O_j≠0 for all j∈[σ] is at most 1 / (1 - 1 / e) ≤ 1.6. This shows the bound on the expected number of iterations. Now fix some constant c>0 and assume that the algorithm's repeat loop runs for T = cln m iterations. We then get that the probability that in each of these loop iterations, there exists a column j∈[σ] such that O_j=0, is at most (1/e)^cln m = m^-c. Hence, the algorithm terminates after at most cln m iterations with probability at least 1 - m^-c.
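The bound can also be checked empirically by simulating the rejection test directly: sample m of the nσ cells without replacement and verify that every column receives at least one set bit. A minimal Python sketch, with parameter values chosen purely for illustration:

import random

def rejects(n: int, m: int, sigma: int) -> bool:
    # One iteration of the repeat loop: True if some column of O stays empty.
    cells = random.sample(range(n * sigma), m)     # m distinct 1-positions in O
    hit_columns = {c % sigma for c in cells}       # column index of each 1-bit (row-major layout)
    return len(hit_columns) < sigma

def mean_repetitions(n=50, m=300, sigma=8, trials=10_000) -> float:
    total = 0
    for _ in range(trials):
        reps = 1
        while rejects(n, m, sigma):
            reps += 1
        total += reps
    return total / trials   # stays below ~1.6 whenever m >= sigma * ln(e * sigma)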
§ DEFERRED MATERIAL FOR SECTION <REF>
§.§ Deferred Proofs
Elements of 𝒪_n,m,σ are n×σ binary matrices with non-zero columns. For j∈[σ], let us denote by α_j ⊆{0,1}^n×σ
the set of all n×σ binary matrices O
such that O_j = 0.
We have that
|𝒪_n,m,σ|=\binom{nσ}{m} - |⋃_j=1^σα_j|.
Using the inclusion-exclusion principle, we obtain
|⋃_j=1^σα_j|
= ∑_j=1^σ|α_j| - ∑_1≤ i<j≤σ|α_i∩α_j|
+ ∑_1≤ i<j<k≤σ| α_i∩α_j∩α_k |
- … + (-1)^σ-1| α_1∩α_2∩…∩α_σ|
= ∑_j=1^σ (-1)^j-1\binom{σ}{j}\binom{n(σ - j)}{m}.
Therefore
|𝒪_n,m,σ|=\binom{nσ}{m}-∑_j=1^σ (-1)^j-1\binom{σ}{j}\binom{n(σ - j)}{m} = ∑_j=0^σ (-1)^j \binom{σ}{j}\binom{n(σ -j)}{m}
and our claim follows.
We can re-formulate Theorem <ref>
by lifting the requirement that the alphabet Σ is effective. In order to do so, it is sufficient to fix as effective alphabet any nonempty subset of Σ. We immediately obtain:
Let 𝒟̂_n, m, σ denote the set of all WDFAs with set of nodes [n] and m transitions labeled from a fixed totally-ordered alphabet Σ of cardinality σ, for which 1<2<… < n is a Wheeler order. Then:
|𝒟̂_n, m, σ| =
∑_k=1^σ\binom{σ}{k}\binom{m-k}{n-k-1}∑_j=0^k (-1)^j \binom{k}{j}\binom{n(k-j)}{m}.
(1 — Lower bound): Fix m with n-1≤ m≤ nσ and consider the set 𝒟'_n,m,σ⊆𝒟_n,m,σ of all WDFAs over the size-σ alphabet with n states and m transitions and the source state having σ outgoing transitions. Let us denote ν_m:= |𝒟'_n,m,σ|.
With our representation, these WDFAs have the only restriction that the first row of the out-matrix contains only 1s.
Obviously, such an out-matrix has always at least one 1-bit in each column.
Using our representation, we can thus obtain that the number of such WDFAs is
ν_m
=\binom{nσ-σ}{m-σ}\binom{m-σ}{n-σ-1}
=(nσ-σ)!/[(m-σ)!(nσ-m)!] · (m-σ)!/[(n-σ-1)!(m-n+1)!]
= (nσ-σ)!/[(nσ-m)!(m-n+1)!(n-1-σ)!].
Now consider the following three-variate polynomial
f(x, y, z) = (x+y+z)^nσ-σ
= ∑_(i, j, k)∈ [n σ - σ]^3: i + j + k = n σ - σα_i, j, k· x^i y^j z^k.
Clearly, α_nσ-m, m-(n-1), (n-1)-σ=ν_m by the multinomial theorem. Now consider the univariate polynomial f(1, 1, z) = (2+z)^nσ-σ = ∑_k = 0^nσ - σβ_k z^k and observe that
β_n - 1 - σ = ∑_(i, j)∈ [nσ -σ]^2 : i + j = nσ - n + 1α_i, j, n - 1 - σ
= ∑_ℓ = 0^nσ - n + 1α_ℓ, nσ - n + 1 - ℓ, n - 1 - σ
= ∑_m = n - 1^nσα_nσ-m, m-(n-1), (n-1)-σ
= ∑_m = n - 1^nσν_m
On the other hand
β_n - 1 - σ
= \binom{nσ-σ}{n-1-σ} 2^(nσ-σ)-(n-1-σ)
= \binom{nσ-σ}{n-1-σ} 2^nσ-n+1.
We thus obtain (by taking the base-2-logarithm)
log |𝒟_n,σ|
≥log∑_m=n-1^nσν_m
≥ nσ - n + log\binom{nσ-σ}{n-1-σ}
≥ nσ - n +log(( (nσ-σ)/(n-1-σ))^n-1-σ)
≥ nσ - n + (n - 1 - σ) ·logσ
= nσ + (n - σ) ·logσ - (n + logσ)
(2 — Upper bound): Using our representation of WDFAs by bit-matrices and bit-vectors and the fact that |ℐ_O|≤\binom{m - σ}{n - σ - 1}, we get
|𝒟_n,σ|
= ∑_m = n - 1^nσ |𝒟_n,m,σ|
≤∑_m = n - 1^nσ |𝒪_n,m,σ| ·\binom{m - σ}{n - σ - 1}
≤\binom{nσ - σ}{n - σ - 1}·∑_m = n - 1^nσ |𝒪_n,m,σ|
≤\binom{nσ - σ}{n - σ - 1}· 2^nσ≤(e (nσ - σ)/(n - σ - 1))^n - σ· 2^nσ.
Taking logarithms, we thus get
log |𝒟_n,σ|
≤ n σ + (n - σ)·log(e (nσ - σ)/(n - σ - 1))
≤ n σ + (n - σ)·log(e σ n/(n - σ - 1))
Using the assumption that σ≤ (1 - ϵ) n for some constant ϵ≤ 1/2, we obtain that the logarithm above can be upper bounded by log (e σ· n/(ϵ n - 1)). Now assuming that n≥ 1/ϵ, it holds that the function f(n)=n/(ϵ n - 1) is monotonically decreasing and hence f(n)≤ f(2/ϵ)=2/ϵ=O(1) for all n≥2/ϵ. Altogether,
log |𝒟_n,σ|
≤ n σ + (n - σ) · (logσ + O(1))
= n σ + (n - σ) ·logσ + O(n).
|
http://arxiv.org/abs/2307.03999v1 | 20230708154620 | Transport properties in gapped graphene through magnetic barrier in a laser field | ["Rachid El Aitouni", "Miloud Mekkaoui", "Ahmed Jellal", "Michael Schreiber"] | cond-mat.mes-hall | ["cond-mat.mes-hall"] |
Laboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida, Morocco
Laboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida, Morocco
[email protected]
Laboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida, Morocco
Canadian Quantum Research Center,
204-3002 32 Ave Vernon, BC V1T 2L7, Canada
Institut für Physik, Technische Universität, D-09107 Chemnitz, Germany
We study the transport properties of Dirac fermions in gapped graphene through a magnetic barrier irradiated by a laser field oscillating in time. We use Floquet theory and the solution of Weber's differential equation to determine the energy spectrum corresponding to the three regions composing the system. The boundary conditions and the transfer matrix approach are employed to explicitly determine the transmission probabilities for multi-energy bands and the associated conductance. As an illustration, we focus only on the first three bands: the central band T_0 (zero photon exchange) and the two first side bands T_±1 (photon emission or absorption). It is found that the laser field activates the transmission process through photon exchange. Furthermore, we show that varying the incident angle and the energy gap strongly affects the transmission process. The conductance increases when the number of electrons that cross the barrier increases, namely when there is significant transmission.
78.67.Wj, 05.40.-a, 05.60.-k, 72.80.Vp
Keywords: Graphene, laser field, magnetic field, energy gap, transmission, Klein effect, conductance.
§ INTRODUCTION
Graphene is a two-dimensional carbon-based material that is one atom thick, and has atoms structured in a hexagonal shape like a honeycomb <cit.>. Graphene has incredible properties such as a very high mobility <cit.>, electrons moving with a speed 300 times lower than the speed of light, a good conductivity (minimal in the vicinity of the Dirac points, i.e., always the fermions pass), being flexible <cit.> and being very hard <cit.>.
Due to these properties, graphene is becoming the most used material in the technological industries <cit.>.
It is theoretically studied in the framework of the tight-binding model <cit.> and as a result, the energy spectrum shows a linear dispersion relation. In addition, the energy bands are in contact at six points <cit.>, called Dirac points K (K'), and form cones around them. It is surprising that electrons can pass from the valance band to the conduction band easily without any effect. This lack of excitation energy constitutes, in fact, an obstacle and a challenge for the fabrication of devices based on graphene. Consequently, to control the passage of electrons, an energy gap should be created between the two bands. Several studies have been reported on the subject to overcome such situations, for instance, either by deforming graphene to generate pseudo-magnetic fields that play the role of a real magnetic field <cit.> or by stacking one layer of graphene on the other <cit.>.
On the other hand, fermions confined in graphene under barriers, at normal incidence, can cross them even if their energy is less than the barrier heights, an effect known as the Klein paradox <cit.>.
For an oscillating potential over time, the energy spectrum acquires sub-bands, generating several transmission modes, and each mode corresponds to an energy band <cit.>.
Furthermore, an applied magnetic field to graphene generates a quantized energy spectrum known as Landau levels <cit.>. Combining these with the oscillating potential gives rise to a current density in x- and y-directions <cit.>. When the graphene is irradiated by a time-varying laser field,
subbands emerge in the energy spectrum, and then the barrier exchanges photons with the fermions, generating infinite transmission modes <cit.>. As a consequence, the laser field suppresses the Klein effect, which makes it possible to control the passage of fermions.
We investigate how Dirac fermions can cross a gapped graphene subjected to a magnetic barrier and irradiated by a laser field. Within the framework of Floquet theory <cit.> and by using the solution of Weber's differential equation <cit.>, we will be able to determine the eigenspinors corresponding to each region composing the system. These will be matched at boundaries and mapped in matrix form by applying the matrix transfer approach to finally get the transmission coefficients for all energy bands. Now, with the help of the current density, we derive the transmission probabilities for all modes.
The conductance is also calculated by integrating the total transmission over all incident angles.
Since it is not easy to treat all modes numerically, we limit our study to the first three bands, which are the central band (l=0) and the two first side bands (l =±1). We show that increasing the barrier width, or the incidence energy, decreases the transmissions, which implies that the number of electrons that cross the barrier decreases, consequently, the conductance decreases. On the other hand, when the intensity of the laser field increases, we observe that the transmissions decrease, but they increase as long as its frequency increases. When the barrier width increases, it is found that the resonance peaks appear, and their number increases. Another set of results shows that the transmissions are almost zero when the incidence energy is less than the energy gap, and the Klein paradox is still present.
This paper is organized as follows. In Sec. <ref>, we present the Hamiltonian describing our system and we will solve the eigenvalue equations to determine the wave functions in the three regions. We use the boundary conditions and the matrix formalism to express the transmission probabilities of each band, and we calculate the integral of this total transmission which makes it possible to determine the conductance at zero temperature in Sec. <ref>. We discuss our numerical results in Sec. <ref>. Finally, we conclude our work.
§ THEORETICAL MODEL
We study the behavior of Dirac fermions in a graphene sheet divided into three regions. Regions 1 and 3 contain only pristine graphene, whereas the gapped region 2 of width d is subjected to a perpendicular magnetic field and irradiated by a laser field, as shown in Fig. <ref>.
The present system can be described the following Hamiltonian
H= v_F σ⃗·[p⃗-e/c(A⃗_L(t)+A⃗_B(x))]+Δσ_z
where σ_x,y,z are Pauli matrices, v_F≈ c/300 is the Fermi velocity , p⃗=-iħ(∂/∂ x,∂/∂ y) the momentum operator, e the electronic
charge. The vector potential
A⃗_L(t) of the laser field in the dipole approximation <cit.> is generated by an electric field of amplitude F and frequency ω defined as E(t)=Fsin(ω t), which is given by
A⃗_L(x,y,t)=(0,A_0cos(ω t),0)
with the laser field amplitude A_0=F/ω. For the magnetic field, the vector potential A⃗_B( x) is chosen in the Landau gauge B(0,x,0) and the continuity allows us to write
A⃗_B(x)= {[ 0, x<0; Bx, 0<x<d; Bd, x>d. ].
To determine the eigenspinors Ψ(x,y,t)=(Ψ_1, Ψ_2)^T in the three regions, we solve the eigenvalue equation, with T standing for transpose. In region 2 (0<x<d), we get
ΔΨ_1(x,y,t) + v_F[p_x-i(p_y-eF/ωcos(ω t)-eBx)]Ψ_2(x,y,t)=iħ∂/∂ tΨ_1(x,y,t)
v_F[p_x+i(p_y-eF/ωcos(ω t)-eBx)]Ψ_1(x,y,t)-ΔΨ_2(x,y,t)=iħ∂/∂ tΨ_2(x,y,t)
To proceed further, note that in the framework of the Floquet approximation <cit.>, the oscillation of the laser field over time produces several energy modes in the eigenspinors. As a result, we have
Ψ(x,y,t)=ψ(x,y,t)e^-iEt/ħ
where E is the Floquet quasi-energy, ψ(x,y,t) is a time periodic function satisfying ψ(x,y,t+t_0)=ψ(x,y,t) and t_0 is the time period of the laser field. On the other hand, if the Hamiltonian is invariant along the y-direction, then we write Ψ(x,y,t)=e^ik_yye^-iEt/ħφ(t)(ϕ_1(x),ϕ_2(x))^T, and therefore (<ref>,<ref>) become
v_F[-i∂/∂ x-i(k_y-F/ωcos(ω t)-Bx)]ϕ_2(x)φ(t)e^ik_yye^-iEt = (i∂/∂ t-Δ)ϕ_1(x)φ(t)e^ik_yye^-iEt
v_F[-i∂/∂ x+i(k_y-F/ωcos(ω t)-Bx)]ϕ_1(x)φ(t)e^ik_yye^-iEt = (i∂/∂ t+Δ)ϕ_2(x)φ(t)e^ik_yye^-iEt
in the system unit (ħ=e=c=1). It is straightforward to find
∂φ(t)/∂ t=-iF/ωcos(ω t) φ(t)
and therefore the temporal component is
φ(t)=e^-iαsin(ω t), with α=F/ω^2.
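This periodic phase is what produces the Floquet side bands; its expansion into Bessel harmonics, i.e., the Jacobi–Anger identity invoked next, can be checked numerically. A small Python/SciPy sketch with an arbitrary illustrative value of α:

import numpy as np
from scipy.special import jv   # Bessel function of the first kind J_m

alpha, omega, N = 1.3, 2.0, 30                      # illustrative values only
t = np.linspace(0.0, 2 * np.pi / omega, 200)

exact = np.exp(-1j * alpha * np.sin(omega * t))
series = sum(jv(m, alpha) * np.exp(-1j * m * omega * t) for m in range(-N, N + 1))

print(np.max(np.abs(exact - series)))               # ~1e-15: the truncation at |m| <= N is harmless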
Now, we use the Jacobi–Anger identity e^-iαsin(ω t)=∑_-∞^+∞J_m(α)e^-imω t to write (<ref>,<ref>) as
∂ϕ_2(x)/∂ x-[x/ℓ_B^2-k_y+mϖ]ϕ_2(x)-i (ε+mϖ-δ)ϕ_1(x)=0
∂ϕ_1(x)/∂ x+[x/ℓ_B^2-k_y+mϖ]ϕ_1(x)-i (ε+mϖ+δ)ϕ_2(x)=0
where ℓ_B=1/√(B), ϖ=ω/v_F, F̃=F/v_F, ε=E/v_F and δ=Δ/v_F. From
(<ref>,<ref>), we obtain two new decoupled equations
∂^2ϕ_1(x)/∂ ^2 x+[1/ℓ_B^2-(x/ℓ_B^2-k_y+mϖ)^2+(ε+mϖ)^2-δ^2]ϕ_1(x) = 0
∂^2ϕ_2(x)/∂ ^2 x+[-1/ℓ_B^2-(x/ℓ_B^2-k_y+mϖ)^2+(ε+mϖ)^2-δ^2]ϕ_2(x) = 0.
These can be expressed in terms of the Weber differential equations <cit.> by making the change of variable X_m=√(2)(x/ℓ_B-k_yℓ_B+mϖℓ_B) and setting v_m=(εℓ_B+mϖℓ_B)^2-(δℓ_B)^2/2, to get
d^2ϕ_1,2(X_m)/dX_m^2+[±1/2-X^2_m/4 +v_m]ϕ_1,2(X_m)=0
having the following solutions
ϕ_1(X_m) = A_mD_v_m(X_m)+B_mD_v_m(-X_m)
ϕ_2(X_m) = -i√(2 )/εℓ_B+mϖℓ_B+δℓ_B[ A_mD_v_m+1(X_m)-B_mD_v_m+1(-X_m)]
where A_m, B_m are constant coefficients corresponding to mth side-band, and D_v_m is the parabolic cylinder function. Consequently, the eigenspinors in region 2 take the form
Ψ_2(x,y,t)=e^ik_yy∑_l=-∞^+∞[A_l[ Ξ^+_l(x); η^+_l(x) ]
+B_l[ Ξ^-_l(x); η^-_l(x) ]]∑_m=-∞^+∞J_m(α)e^-i(ε+(l+m)ω)t
and we have defined
Ξ^±(x) = D_v_m(± X_m)
η^±(x) = ∓i√(2)/εℓ_B+m ϖℓ_B+δℓ_B D_v_m+1(± X_m).
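For numerical work, the components Ξ^± and η^± can be evaluated with SciPy's parabolic cylinder routine pbdv, which returns D_v(x) together with its derivative. The sketch below is only schematic; all argument values are placeholders and are not tied to any figure of this paper:

import numpy as np
from scipy.special import pbdv   # parabolic cylinder function D_v(x): (value, derivative)

def region2_components(x, m, eps, k_y, ell_B, varpi, delta):
    # X_m and v_m as defined above, for side band m
    X_m = np.sqrt(2.0) * (x / ell_B - k_y * ell_B + m * varpi * ell_B)
    v_m = ((eps * ell_B + m * varpi * ell_B) ** 2 - (delta * ell_B) ** 2) / 2.0
    pref = -1j * np.sqrt(2.0) / (eps * ell_B + m * varpi * ell_B + delta * ell_B)
    xi_plus = pbdv(v_m, X_m)[0]                    # Xi^+ = D_{v_m}(+X_m)
    xi_minus = pbdv(v_m, -X_m)[0]                  # Xi^- = D_{v_m}(-X_m)
    eta_plus = pref * pbdv(v_m + 1.0, X_m)[0]      # eta^+ (carries the -i prefactor)
    eta_minus = -pref * pbdv(v_m + 1.0, -X_m)[0]   # eta^- (opposite sign)
    return xi_plus, xi_minus, eta_plus, eta_minus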
In the region 1 (x<0) we have only pristine graphene, and then we can easily obtain the associated eigenspinors and eigenvalues <cit.>
Ψ_1(x,y,t)=e^ik_yy∑_m,l=-∞^+∞[δ_l,0[ 1; Λ_l ]e^ik_lx+r_l[ 1; -Λ^*_l ]e^-ik_lx]δ_m,le^-iv_F(ε+mϖ)t
ε+lϖ=s_l√(k^2_l+k^2_y)
where r_l is the amplitude of the reflected wave corresponding to band l, δ_m,l=J_m-l(α=0), s_l=sgn(v_Fε+lv_Fϖ),
ϕ_l=tan^-1k_y/k_l,
k_l=εcosϕ_l,
k_y=εsinϕ_l and
Λ_l=s_lk_l+ik_y/√(k^2_l+k^2_y)=s_le^iϕ_l.
We can establish
the relation between the incident angles
ϕ_l=arcsin(εsin(ϕ_0)/(ε+lϖ)).
In region 3 (x>d), the emergent angle ϕ'_l is different than the incident one ϕ_0 because of the continuity of the vector potential. The solution is <cit.>
Ψ_3(x,y,t)=e^ik_yy∑_m,l=-∞^+∞[t_l[ 1; Λ'_l ]e^ik'_lx+b_l[ 1; -Λ'^*_l ]e^-ik'_lx]δ_m,le^-iv_F(ε+mϖ)t
ε+lϖ =s_l√(k_l^'2+(k_y- d/ℓ_B^2)^2)
where t_l is the transmission amplitude of the transmitted wave corresponding to the band l, b_l is a null coefficient (there is no left-moving wave in region 3),
ϕ'_l=tan^-1ky- d/ℓ_B^2/k'_l,
k'_l=(ε+lϖ)cosϕ'_l,
k_y=(ε+lϖ)sinϕ'_l+d/ℓ_B^2
and
Λ'_l=s_lk'_l+i(k_y-d/ℓ_B^2)/√(k_l^'2+(k_y-d/ℓ_B^2)^2)=s_le^iϕ'_l.
From the conservation of the momentum k_y, we get the relation
ϕ'_l=arcsin((εsinϕ_0- d/ℓ_B^2)/(ε+lϖ)).
As we will see, the above results can be used to study the transport properties of gapped graphene scattered by a magnetic barrier and irradiated by a laser field. We obtain the transmissions associated with several energy bands and the corresponding conductance.
§ TRANSMISSION PROBABILITIES
We use the continuity of the eigenspinors at x=0 and x =d to
determine the transmission probabilities for the present system. This corresponds to the processes
Ψ_1(0,y,t)=Ψ_2(0,y,t) and Ψ_2(d,y,t)=Ψ_3(d,y,t),
which yields
δ_m,0+r_m=∑_l=-∞^+∞(A_lΞ^+_l(0)+B_lΞ^-_l(0))J_m-l(α)
δ_m,0Λ_m-r_mΛ_m^*=∑_l=-∞^+∞(A_lη^+_l(0)+B_lη^-_l(0))J_m-l(α)
t_me^ik'_md+b_me^-ik'_md=∑_l=-∞^+∞(A_lΞ^+_l(d)+B_lΞ^-_l(d))J_m-l(α)
t_mΛ^'_me^ik'_md-b_mΛ_m^'*e^-ik'_md=∑_l=-∞^+∞(A_lη^+_l(d)+B_lη^-_l(d))J_m-l(α).
We have four equations, but each one has an infinite number of modes, and to solve the problem, we use the transfer matrix approach. As a result, we get
[ Υ_1; Υ'_1 ]
=[ ℕ_1,1 ℕ_1,2; ℕ_2,1 ℕ_2,2 ][ Υ_2; Υ'_2 ]=ℕ[ Υ_2; Υ'_2 ]
with
ℕ=[ 𝕀 𝕀; Γ^+ Γ^-; ]^-1[ 𝕏^+_0 𝕏^-_0; ℝ^+_0 ℝ^-_0 ][ 𝕏^+_d 𝕏^-_d; ℝ^+_d ℝ^-_d ]^-1[ 𝕀 𝕀; Γ'^+ Γ'^-; ][ 𝕂^+ 𝕆; 𝕆 𝕂^-; ]
and
Γ^±=±δ_m,lΛ_l^±1, Γ'^±=±δ_m,lΛ_l^'±1, 𝕏^±_z=Ξ_l^±(z)J_m-l(α), ℝ^±_z=η_l^±(z)J_m-l(α), 𝕂^±=e^± ik'_ldδ_m,l
where 𝕆 is the zero matrix, 𝕀 is the unit matrix and z={0,d}.
In this case, we take into account Dirac fermions traveling from left to right with energy E, and from (<ref>), we obtain
Υ_2=ℕ^-1_1,1Υ_1
with Υ_1 = δ_0,l the vector of Kronecker coefficients (incident wave only in the central band) and Υ_2=t_l the vector of transmission amplitudes.
Because m and l range from -∞ to +∞, the aforementioned transfer matrix is of infinite order and is challenging to solve. Due to this, we replace the infinite series with a finite set of terms ranging from -N to N, provided that N≥F/ω^2 <cit.>, resulting in
t_-N+k=ℕ'[k+1,N+1]
where ℕ'=ℕ^-1_11, k=0, 1, 2,⋯ N.
To simplify, we limit our studies only to the central band and the first two side bands l=0,± 1 of energy E± hω having the following transmission coefficients
t_-1=ℕ'[1,2],
t_0=ℕ'[2,2],
t_1=ℕ'[3,2].
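Once the truncated transfer matrix has been assembled from the boundary-condition blocks, extracting the transmission amplitudes is a small linear-algebra step. The sketch below assumes a routine build_transfer_matrix (not spelled out here) that returns the 2(2N+1)×2(2N+1) matrix ℕ:

import numpy as np

def transmission_amplitudes(N_matrix: np.ndarray, N: int) -> np.ndarray:
    # Upsilon_2 = (N_{1,1})^{-1} Upsilon_1 with Upsilon_1 = delta_{0,l}
    size = 2 * N + 1
    N11 = N_matrix[:size, :size]             # upper-left block of the transfer matrix
    incident = np.zeros(size, dtype=complex)
    incident[N] = 1.0                        # l = 0 sits in the middle of the band index
    return np.linalg.solve(N11, incident)    # t_{-N}, ..., t_0, ..., t_{+N}

# t = transmission_amplitudes(build_transfer_matrix(params), N)
# t[N - 1], t[N], t[N + 1] are t_{-1}, t_0 and t_{+1}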
On the other hand, the current density is determined from the continuity equation, its expression given by J=e v_F Ψ^*σ_xΨ , therefore the expression of the incident, reflected and transmitted current density given by
J_inc,0=ev_F(Λ_0+Λ^*_0)
J_tra,l=ev_Ft^*_lt_l(Λ'_l+Λ'^*_l)
J_ref,l=ev_Fr^*_lr_l(Λ_l+Λ^*_l)
The relation between the current density and the transmission probability is expressed as T_l=J_tra,l/J_inc,0. Then, after some algebra, we get
T_l=cosϕ'_l/cosϕ_0|t_l|^2
and the total transmission probability is given by summing up over all modes
T=∑_lT_l.
By definition, the conductance at zero temperature is the average of the flux of the fermions on the half Fermi surface <cit.>, on the other hand it is the integration of the total transmission T over k_y <cit.>, given by
G=G_0/2π∫_-k_y^max^k_y^maxT dk_y
where G_0 is the conductance unit.
Using the relation between transverse wave vector k_y and the incident angle ϕ_0 to express G as
G=G_0/2π∫_-π/2^π/2T cosϕ_0dϕ_0.
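Numerically, this angular integral can be carried out with any standard quadrature once the total transmission is available as a function of the incident angle; total_transmission below stands in for the transfer-matrix computation described above:

import numpy as np

def conductance(total_transmission, G0: float = 1.0, n_angles: int = 401) -> float:
    # G = (G0 / 2pi) * int_{-pi/2}^{pi/2} T(phi_0) cos(phi_0) dphi_0
    phi = np.linspace(-np.pi / 2, np.pi / 2, n_angles)
    T = np.array([total_transmission(p) for p in phi])   # sum over bands at each angle
    return G0 / (2.0 * np.pi) * np.trapz(T * np.cos(phi), phi)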
To investigate and underline the basic features of the present system, we numerically analyze the transport properties based on the transmission channels and associated conductance in the following chapter.
§ RESULTS AND DISCUSSION
We numerically study the transmission probabilities of Dirac fermions in gapped graphene through a magnetic barrier in a laser field. Recall that the oscillation of the barrier over time generates several energy bands, which give rise to transmission channels. Due to the difficulty of analyzing all modes, we will limit ourselves to the first three bands, where the central band T_0 corresponds to zero photon exchange and the first two side bands T_±1 to absorption or emission of photons.
Fig. <ref> shows the transmission probability as a function of the energy εℓ_B for different incident angles. There is transmission if the condition ε >(d/ℓ_B^2-l ϖ)/(1+sinϕ_0)
is satisfied, in other words, this quantity plays the role of an effective mass <cit.>. For normal incidence, as depicted in Fig. <ref>, transmission is zero for ε<δ. Due to this condition, resonance peaks appear with decreasing amplitudes along the εℓ_B-axis, that is to say the disappearance of the Fabry-Pérot resonance, which is in agreement with previous results <cit.>. The transmission process with zero photon exchange, T_0, is dominating, and therefore, the majority of the electrons cross the barrier without photon exchange.
Fig. <ref> shows the behavior of T_0 for different incident angles. As a result, in Fig. <ref> it increases
sharply away from the normal incidence. On the other hand, for transmission with photon exchange, as shown in Figs. <ref>, <ref>, there is a decrease at large energy.
We can conclude that the behavior of T_0 changes if we move away from the normal incidence and that
the photon exchange process is suppressed.
Fig. <ref> displays the transmission probability as a function of εℓ_B under a suitable choice of physical parameters. Transmissions appear when condition ε >δ is satisfied. As clearly seen in Fig. <ref>, we observe the dominance of T_0 compared to those corresponding to the first two side bands, and it is almost equal to the total transmission as
found
in <cit.>. Now for different values of F̃ℓ_B^2, we plot T_0 in Fig. <ref>. We see that T_0 decreases with the increase of F̃ℓ_B^2, because the increase in laser field suppresses T_0 as we have already seen <cit.>.
Fig. <ref> displays the effect of field frequency on transmission: increasing the frequency increases T_0.
Fig. <ref> is drawn for different values of barrier width d/ℓ_B. If this increases, resonance peaks appear and their number increases, and the
oscillations get closer. A similar result is obtained in our previous work <cit.>.
Fig. <ref> presents the transmission probabilities as a function of the energy gap δℓ_B.
We show in Fig. <ref> the total transmission probability (magenta line) and those with or without photon exchange.
We distinguish two interesting cases: first, for δℓ_B<6, the Klein effect is very clear and transmission with photon exchange is almost zero, that means that the majority of electrons cross the barrier without photon exchange. Second, for δℓ_B > 6, the transmissions decrease in an oscillatory way until they become zero when δℓ_B is close to εℓ_B=15.
Fig. <ref> displays the total transmission for different values of F̃ℓ_B, and we see that the increase of F̃ℓ_B suppresses the transmission, as has been found in <cit.>. The Klein effect is clear for very small values of F̃ℓ_B and δℓ_B. For F̃ℓ_B=0.3, the Klein effect is observed only for δℓ_B<6, then the transmission decreases in an oscillatory way until the oscillations vanish. If we increase F̃ℓ_B the transmission keeps the same shape with decreasing amplitude, which is in agreement with the results of <cit.>.
Fig. <ref> is similar to the previous one, but here we vary ϖℓ_B. As a result, for ϖℓ_B=1 the Klein effect always exists up to ϖℓ_B=5, then the transmission decreases in an oscillatory way towards zero near εℓ_B. On the other hand, there will be total reflection if the incident energy is lower than the energy gap.
If the frequency decreases, the transmission retains the same shape, but the amplitude decreases. Fig. <ref> shows the effect of the barrier width on the total transmission. We observe that resonance peaks appear when the width increases. For very small widths, the Klein effect is found up to δℓ_B ≈ 6, and then the transmission decreases towards zero. Increasing the width increases the number of oscillations and their amplitudes, as already seen in <cit.>. We summarize that increasing the amplitude of the field suppresses transmission inside the barrier. On the other hand, increasing the frequency increases the transmission, and increasing the width increases the number of oscillations and their amplitude.
Fig. <ref> shows the transmission probabilities as a function of the barrier width d/ℓ_B. In Fig. <ref> we observe that all the transmissions have sinusoidal behavior. The total transmission oscillates in the vicinity of one (Klein paradox). T_0 is predominant and its oscillation amplitude decreases when the width increases. The transmissions with photon exchange also oscillate, but with phase shift, which increases along the d/ℓ_B-axis. For certain values of d/ℓ_B, the transmissions with or without photon exchange are equal.
Fig. <ref> displays transmission with photon emission for different values of the transverse wave vector k_yℓ_B. There is always a sinusoidal behavior with increasing amplitude along the d/ℓ_B-axis. When k_yℓ_B increases, the width of the oscillations decreases.
In Fig. <ref>, we show the effect of the laser field frequency on transmission. We notice that the amplitude and period of oscillations decrease as the frequency increases. Thus, the increase in frequency suppresses the transmissions with photon exchanges.
We vary the intensity of the laser field F̃ℓ_B^2 in Fig. <ref> and observe that the transmission is oscillating with the same period. We notice that the increase in F̃ℓ_B^2 causes an increase in transmission with photon exchange and decreases that of the central band.
In Fig. <ref>, we plot the conductance as a function of the energy εℓ_B. Choosing different values of width d/ℓ_B, Fig. <ref> reveals that the conductance varies almost exponentially for lower values of d/ℓ_B, and oscillates when d/ℓ_B increases.
Fig. <ref> shows the effect of intensity F̃ℓ_B^2 of the laser field on conductance. We observe that conductance increases as F̃ℓ_B^2 increases, but it vanishes when ε→δ.
Fig. <ref> is plotted for different values of frequency ϖℓ_B. We notice that the conductance tends to zero when εℓ_B is close to δℓ_B and the oscillations increase as ϖℓ_B increases.
In Fig. <ref>, we vary δℓ_B to observe that the conductance is always almost zero when ε tends towards δ.
Finally, to increase the conductance, it is necessary to increase the number of electrons crossing the barrier, thereby increasing the transmission. As we have seen, the transmission increases when the incident energy increases or the barrier width decreases, as well as when the intensity of the laser field decreases or its frequency increases.
In Figure <ref>, the conductance is represented as a function of the energy gap δℓ_B. By choosing three values of incident energy in Fig. <ref>, we show that the conductance is maximum at the beginning, then decreases in an oscillatory way towards zero near the value δ =ε. The amplitude increases when incident energy increases as well, exhibiting a behavior similar to transmission as we have seen before.
Fig. <ref> shows the effect of width d/ℓ_B on the conductance. There are always resonance peaks that appear around δℓ_B=3, the number of oscillations increases with the increase of d/ℓ_B. In Figs. <ref> and <ref>, we visualize the effect of the laser field parameters on the conductance. They show that the amplitude of the conductance increases with the increase in frequency, and decreases when the amplitude increases.
§ CONCLUSION
We studied the effect of a gapped magnetic barrier irradiated by a laser field generated by an electric field of amplitude F and frequency ω on Dirac fermions in graphene. We started with the solution of the eigenvalue equations to determine the spinors in the three regions of the gapped sheet. We used the Floquet theory, and the solution of Weber's differential equation to determine the eigenspinors corresponding to each region as combinations of parabolic cylindrical functions. Then we employed the boundary conditions, which give four equations, each equation has infinite modes. To solve them, we used the transfer matrix approach to obtain a matrix of infinite order that is difficult to solve. For simplicity, we focused only on the three first bands, the central band corresponds to l=0 and the two first side bands correspond to l=±1. Lastly, we calculated the integral of the total transmission probability to obtain conduction at zero temperature.
When a barrier oscillates in time, it generates several energy bands, namely photon exchange between the barrier and the Dirac fermions. Here we found that the transmission process with zero photon exchange is much more important than the process with photon exchange. Klein's paradox is still present, but we can suppress it. As we know, the original Klein effect is only observed for normal incidence (ϕ_0=0), but in this work, this effect is observed for non-normal incidences. When the barrier width is increased, the transmission decreases until it disappears for a critical width; the same thing happens for the conductance. On the other hand, the transmission increases when the incident energy increases. However, to have transmission, it is necessary to satisfy the condition that binds the incident energy to the other barrier parameters: ε >(d/ℓ_B^2-l ϖ)/(1+sinϕ_0). As we know, the conductance exists only if we have a non-zero transmission, which always requires this last condition to be satisfied.
Novoselov2004
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Science 306, 666 (2004).
Novoselov2005
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature 438, 197 (2005).
mobil2
S. Morozov, K. Novoselov, M. Katsnelson, F. Schedin, D. Elias, J. Jaszczak, and A. Geim, Phys. Rev. Lett. 100, 016602
(2008).
mobil
K. I. Bolotin, K. J. Sikes, Z. Jiang, M. Klima, G. Fudenberg, J. Hone, P. Kim, and H. L. Stormer, Solid State Commun.
146, 351 (2008).
flix
C. Lee, X. Wei, J. W. Kysar, and J. Hone, Science 321, 385 (2008).
Beenakker2008
C. W. Beenakker, Rev. Mod. Phys. 80, 1337 (2008).
Bhattacharjee2006
S. Bhattacharjee and K. Sengupta, Phys. Rev. Lett. 97, 217001 (2006).
Bunch2005
J. S. Bunch, Y. Yaish, M. Brink, K. Bolotin, and P. L. McEuen,
Nano Lett. 5, 2887 (2005).
Berger2004
C. Berger, Z. M. Song, T. B. Li, X. B. Li, A. Y. Ogbazghi, R. Feng,
Z. T. Dai, A. N. Marchenkov, E. H. Conrad, P. N. First, and W. A. de Heer, J.
Phys. Chem. B 108, 19912 (2004).
Tight
S. Reich, J. Maultzsch, C. Thomsen, and P. Ordejon, Phys. Rev. B 66, 035412 (2002).
Castro2009
A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
propr
N. M. R. Peres, J. Phys.: Condens. Matter 21, 323201 (2009).
def1
F. Guinea, M. I. Katsnelson, and A. K. Geim, Nat. Phys. 6, 30 (2010).
def4
G.-X. Ni, Y. Zheng, S. Bae, H. R. Kim, A. Pachoud, Y. S. Kim, C.-L. Tan, D. Im, J.-H. Ahn, B. H. Hong, and B. Ozyilmaz, ACS Nano 6, 1158 (2012).
scatring
S. Latil and L. Henrard, Phys. Rev. Lett. 97, 036803 (2006).
Morozov2005
S. V. Morozov, K. S. Novoselov, F. Schedin, D. Jiang, A. A. Firsov, and A. K. Geim, Phys. Rev. B 72, 201401 (2005).
klien2
M. I. Katsnelson, K. S. Novoselov, and A. K. Geim, Nat. Phys. 2, 620 (2006).
jellal2014
A. Jellal, M. Mekkaoui, E. B. Choubabi, and H. Bahlouli, Eur. Phys. J. B 87, 123 (2014).
conmagnetic
A. De Martino, L. Dell’Anna, and R. Egger, Phys. Rev. Lett. 98, 066802 (2007).
Landau
F. Xu and L. Zhang, Chin. Phys. B 28, 117403 (2019).
Magnetic2011
M. O. Goerbig, Rev. Mod. Phys. 83, 1193 (2011).
confinementmagnetic
N. Myoung and G. Ihm, Physica E 42, 70 (2009).
Elaitouni2022
R. El Aitouni and A. Jellal, Phys. Lett. A 447, 128288 (2022).
biswas2013
R. Biswas and C. Sinha, Appl. Phys. 114, 183706 (2013).
biswas2012
C. Sinha and R. Biswas, Appl. Phys. Lett. 100, 183107 (2012).
laser2
M. Ahsan Zeb, K. Sabeeh, and M. Tahir, Phys. Rev. B 78, 165420 (2008).
rachid2022
R. El Aitouni, M. Mekkaoui, A. Jellal, Ann. Phys. (Berlin) 535, 2200630 (2023).
floquetappr
Z. Gu, H. A. Fertig, D. P. Arovas, and A. Auerbach, Phys. Rev. Lett. 107, 216601 (2011).
grad
I. S. Gradshteyn, I. M. Ryzhik, Table of Integrals, Series, and Products (Academic Press, Inc. New York, 1980).
approx
R. Loudon, The Quantum Theory of Light (3rd ed, Oxford University Press, New York, 2000).
math
F. W. J. Olver, J. Res. Nat. Bur. Standards Sect. B 63, 131 (1959).
conduct1
X. Chen and J. W. Tao, Appl. Phys. Lett. 94, 262102 (2009).
conduct2
M. R. Masir, P. Vasilopoulos, and F. M. Peeters, Phys.
Rev. B 79, 035409 (2009).
Biswas2021
R. Biswas and C. Sinha, Sci. Rep. 11, 2881 (2021).
biswas2016
R. Biswas, S. Maitty, and C. Sinha, Physica E. 84, 235 (2016).
Mekkoui2021
M. Mekkaoui, A. Jellal, and H. Bahlouli, Solid State Communi.
358, 114981 (2022).
Sergy2011
S. E. Savel’ev and A. S. Alexandrov, Phys. Rev. B 84, 035428 (2011).
MEKKAOUI2018
M. Mekkaoui, R. El Kinani, and A. Jellal, Mater. Res. Expr. 6, 085013 (2019).
Makkoui2015
H. Chnafa, M. Mekkaoui, A. Jellal, and A. Bahaoui, Physica E 148, 115645 (2023).
|
http://arxiv.org/abs/2307.04749v1 | 20230710175457 | Divide, Evaluate, and Refine: Evaluating and Improving Text-to-Image Alignment with Iterative VQA Feedback | ["Jaskirat Singh", "Liang Zheng"] | cs.CV | ["cs.CV", "cs.AI", "cs.LG", "cs.MM", "stat.ML"] |
[Figure: We propose a training-free decompositional framework which helps both better evaluate (Sec. <ref>) and gradually improve (Sec. <ref>) text-to-image alignment using iterative VQA feedback.]
The field of text-conditioned image generation has made unparalleled progress with the recent advent of latent diffusion models. While remarkable, as the complexity of given text input increases, the
state-of-the-art diffusion models may still fail in generating images which accurately convey the semantics of the given prompt.
Furthermore, it has been observed that such misalignments are often left undetected by pretrained multi-modal models such as CLIP.
To address these problems, in this paper we explore a
simple yet effective
decompositional approach towards both evaluation and improvement of text-to-image alignment. In particular, we first introduce a
Decompositional-Alignment-Score which given a complex prompt decomposes it into a set of disjoint assertions.
The alignment of each assertion with generated images is then measured using a VQA model. Finally, alignment scores for different assertions are combined aposteriori to give the final text-to-image alignment score. Experimental analysis reveals that the proposed alignment metric shows significantly higher correlation
with human ratings as opposed to traditional CLIP, BLIP scores.
Furthermore, we also find that the assertion level alignment scores provide a useful feedback which can then be used in a simple iterative procedure to gradually increase the expressivity of different assertions in the final image outputs.
Human user studies indicate that the proposed approach surpasses previous state-of-the-art by 8.7% in overall text-to-image alignment accuracy.
§ INTRODUCTION
The field of text-to-image generation has made significant advancements with the recent advent of large-scale language-image (LLI) models
<cit.>. In particular, text-conditioned latent diffusion models have shown unparalleled success in generating creative imagery corresponding to a diverse range of free-form textual descriptions. However, while remarkable, it has been observed <cit.> that as the complexity of the input text increases, the generated images do not always accurately align with the semantic meaning of the textual prompt.
To facilitate the reliable use of current text-to-image generation models for practical applications,
it is essential to answer two key questions: 1) Can we detect such fine-grain misalignments between the input text and the generated output in a robust manner? and 2) Once detected, can we improve the text-to-image alignment for failure cases?
While several metrics for evaluating text-to-image alignment (, CLIP <cit.>, BLIP <cit.>, BLIP2 <cit.>) exist,
it has been observed <cit.> that a high score with these metrics can be achieved even if the image does not fully correspond with input prompt. For instance, in Fig. <ref>, an output image
(containing only pink trees)
shows high CLIP/BLIP scores with the text “pink trees and yellow car” even if yellow car is not present. Evaluating text-to-image matching using
the
image-text-matching (ITM) head of BLIP models has also been
recently
explored <cit.>. However, the generated scores also show a similar tendency to favor the main subject of input prompt. Furthermore, even if such misalignments are detected, it is not clear how such information can be used for improving the quality of generated image outputs in a reliable manner.
To address these problems, in this paper we explore a simple yet effective decompositional approach towards both evaluation and improvement of fine-grain text-to-image alignment. In particular, we propose a Decompositional-Alignment-Score (DA-Score) which
given a complex text prompt, first decomposes it into a set of disjoint assertions about the content of the prompt. The alignment of each of these assertions with the generated image is then measured using a VQA model <cit.>. Finally, the alignment scores for diffferent assertions are combined to give an overall text-to-image alignment score.
Our experiments reveal that the proposed evaluation score shows significantly higher correlation with human ratings over prior evaluation metrics (, CLIP, BLIP, BLIP2) (Sec. <ref>).
Furthermore, we also find that the assertion-level alignment scores provide a useful and explainable feedback for
determining which parts of the input prompt are not being accurately described in the output image. We show that this feedback can then be used to gradually improve the alignment of the generated images with the input text prompt. To this end, we propose a simple iterative refinement procedure (Fig. <ref>), wherein at each iteration the expressivity of the least-aligned assertion is improved by increasing the weightage/cross-attention strength (refer Sec. <ref>) of corresponding prompt tokens during the reverse diffusion process.
Through both qualitative and quantitative analysis,
we find that the proposed iterative refinement process allows for generation of better aligned image outputs over prior works <cit.> while
on average
showing comparable inference times (Sec. <ref>).
§ RELATED WORK
Text to Image Generation Models. Text conditional image synthesis is a topic of keen interest in the vision community. For instance, <cit.> use GANs to perform text guided image generation. Similarly, <cit.> explore the use of autoregressive models for zero-shot text to image generation.
Recently, diffusion-based-models <cit.> have emerged as a powerful class of methods for performing text-conditional image synthesis over diverse range of target domains.
While remarkable, generating images which align perfectly with the input text-prompt remains a challenging problem <cit.>. To enforce, heavier reliance of generated outputs on the provided text, classifier-free guidance methods <cit.> have been proposed. Similarly, use of an additional guidance input to improve controllability of text-to-image generation have recently been extensively explored <cit.>. However, even with their application, the generated images are often observed to exhibit fine-grain misalignments such as
missing secondary objects <cit.> with the input text prompt.
Evaluating Image-Text Alignment. Various protocols for evaluating text-image alignment in a reference-free manner have been proposed <cit.>. Most prior works <cit.> typically use the cosine similarity between the text and image embedding from large-scale multi-modal models <cit.> such as CLIP <cit.>, BLIP <cit.>, BLIP-2 <cit.> for evaluating the alignment scores. Recently, <cit.> also show the application of BLIP/BLIP-2 models for image-text matching using image retrieval. However,
as shown in Fig. <ref>,
these scores can
give very high scores even if the generated images do not full align with the input text prompt. Furthermore, unlike our approach image-text alignment is often represented through a single scalar value which does not provide an explainable measure which can be used to identify/improve weaknesses of the image generation process.
Improving Image-Text Alignment. Recently several works <cit.> have been proposed to explore the problem of improving image-text alignment in a training free manner. Liu <cit.> propose to modify the reverse diffusion process by composing denoising vectors for different image components.
However, it has been observed <cit.> that it struggles while generating photorealistic compositions of diverse objects. Feng <cit.> use scene graphs to split the input sentence into several noun phrases and then assign a designed attention map to the output of the cross-attention operation.
In another recent work, Chefer <cit.> extend the idea of cross-attention map modification to minimize missing objects but instead do so by modifying the noise latents during the reverse diffusion process. While effective at reducing missing objects, we find that the performance / quality of output images can suffer as the number of subjects in the input prompt increases (refer Sec. <ref>).
Besides training-free methods, recent contemporary work <cit.> has also explored the possibility of improving image-text alignment using human feedback to finetune existing latent diffusion models. However this often requires the collection of large-scale human evaluation scores and finetuning the diffusion model across a range of diverse data modalities which can be expensive. In contrast, we explore a training free approach for improvement of fine-grain text-to-image alignment.
§ OUR METHOD
Given the image generation output ℐ corresponding to a text prompt 𝒫, we wish to develop a mechanism for evaluation and improvement of fine-grain text-to-image alignment.
The core idea of our approach is to take a decompositional strategy for both these tasks.
To this end, we first generate a set of disjoint assertions regarding the content of the input prompt. The alignment of the output image ℐ with each of these assertions is then calculated using a VQA model. Finally, we use the assertion-based-alignment scores as feedback to improve the
expressiveness
of the assertion with the least alignment score. This process can then be performed in an iterative manner to gradually improve the quality of generated outputs
until a desired value for the overall alignment score is attained.
In the next sections, we discuss each of these steps in detail. In Sec. <ref> we first discuss the process for evaluating decompositional-alignment scores. We then discuss the iterative refinement process for improving text-to-image alignment in Sec. <ref>. Fig. <ref> provides an overview for the overall approach.
§.§ Evaluating Text-to-Image Alignment
Prompt Decomposition Model.
Given an input prompt 𝒫, we first decompose its textual information into a set of disjoint assertions (and corresponding questions) which exhaustively cover the contents of the input prompt. Instead of relying on human-inputs as in <cit.>[Prior works on improving image-text alignment often rely on human-user inputs for expressing contents of the input prompt into its simpler constituents. For instance, Feng <cit.> require the user to describe the prompt as a conjunction/disjunction of simpler statements. Similarly, Chefer <cit.> require the user to provide a set of entities / subjects in the prompt, over which their optimization should be performed.],
we leverage the in-context learning capability <cit.> of large-language models <cit.> for predicting such decompositions in an autonomous manner. In particular, given an input prompt 𝒫 and large-language model ℳ, the prompt decomposition is performed using in-context learning as,
𝐱 = {x_0,x_1, … x_n} = ℳ(𝐱|𝒫, D_exemplar,𝒯),
where 𝐱 is the model output, n is the number of decompositions, D_exemplar is the in-context learning dataset consisting of 4-5 human-generated examples for prompt decomposition, and 𝒯 is the task description. Please refer to the supp. material for further details on exemplar-dataset and task-description design.
The model output 𝐱 is predicted to contain tuples x_i = {a_i, p_i}, where each tuple is formatted to contain assertions a_i
and the sub-part p_i of the original prompt 𝒫 corresponding to the generated assertion. For instance, given 𝒫: `a cat and a dog' the prompt decomposition can be written as,
ℳ(𝐱|𝒫: `a cat and a dog', D_exemplar,𝒯) = [ {`there is a cat',`a cat'} , {`there is a dog',`a dog'}].
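A minimal sketch of this in-context decomposition step is given below. The llm_complete callable is a placeholder for whichever large-language model is used, and the assumption that the model returns a Python-style list of tuples is ours rather than part of the method specification:

import ast

TASK = ("Decompose the prompt into disjoint assertions about its content, "
        "returning a list of (assertion, sub-prompt) tuples.")

EXEMPLARS = [
    ("a cat and a dog", [("there is a cat", "a cat"), ("there is a dog", "a dog")]),
    # ... a handful of further human-written examples (D_exemplar)
]

def decompose(prompt: str, llm_complete) -> list:
    # Build the in-context query: task description, exemplars, then the new prompt.
    context = TASK + "\n"
    for p, tuples in EXEMPLARS:
        context += f"Prompt: {p}\nDecomposition: {tuples}\n"
    context += f"Prompt: {prompt}\nDecomposition:"
    return ast.literal_eval(llm_complete(context).strip())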
Computing Assertion-based Alignment Scores.
We next compute the alignment of the generated image ℐ with each of the disjoint assertions using a Visual-Question-Answering (VQA) model <cit.>. In particular, given image ℐ, assertions a_i, i=1,… n, their rephrasing in question format a^q_i and VQA-model 𝒱, the assertion-level alignment scores u_i(ℐ, a_i) are computed as,
u_i(ℐ, a_i) = exp(α_i/τ)/(exp(α_i/τ) + exp(β_i/τ)), where α_i = 𝒱 (`yes'|ℐ, a^q_i), β_i = 𝒱 (`no'|ℐ, a^q_i),
where α_i, β_i refer to the logit-scores of VQA-model 𝒱 for input tuple (image ℐ, question a^q_i) corresponding to output tokens `yes',`no' respectively. Hyperparameter τ controls the temperature of the softmax operation and controls the confidence of the alignment predictions.
Combining Alignment Scores. Finally, the assertion level alignment-scores u_i(ℐ, a_i) are combined to give the overall text-to-image alignment score Ω(ℐ,𝒫) between image ℐ and prompt 𝒫 as,
Ω(ℐ,𝒫) = ∑_i λ_i(𝒫,a_i) u_i(ℐ, a_i)/∑_i λ_i(𝒫,a_i),
where weights λ_i(𝒫,a_i) refer to the importance of assertion a_i in capturing the overall content of the input prompt 𝒫, and allows the user to control the relative importance of different assertions in generating the final image output[For simplicity reasons, we mainly use λ_i=1 ∀ i in the main paper. Further analysis on variable λ_i to account for variable information content or visual verifiability of an assertion are provided in supp. material.]. Please refer Fig. <ref> for the overall implementation.
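The two equations above translate into a few lines of code. In the sketch below, vqa_logits is a placeholder for any VQA backend that returns the `yes'/`no' logits for an (image, question) pair:

import numpy as np

def da_score(image, questions, vqa_logits, weights=None, tau=1.0):
    # Per-assertion scores u_i (softmax over yes/no logits) and their weighted mean.
    weights = np.ones(len(questions)) if weights is None else np.asarray(weights, dtype=float)
    u = []
    for q in questions:
        a, b = vqa_logits(image, q)                     # logits for 'yes' and 'no'
        u.append(np.exp(a / tau) / (np.exp(a / tau) + np.exp(b / tau)))
    u = np.asarray(u)
    overall = float(np.sum(weights * u) / np.sum(weights))
    return overall, u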
§.§ Improving Text to Image Alignment
In addition to predicting the overall text-to-image alignment score, we find that the assertion-level alignment scores u_i(ℐ, a_i) also provide a useful and explainable way for determining which parts of the input prompt 𝒫 are not being accurately described in the output image ℐ. This feedback can then be used in an iterative manner to improve the expressivity of the assertion with the least alignment score u_i(ℐ, a_i), until a desired threshold for the overall text-image alignment score Ω(ℐ,𝒫) is obtained.
Parameterized Diffusion Model. We first modify the image generation process of standard diffusion models in order to control the expressiveness of different assertions a_i in parametric manner. In particular, we modify the reverse diffusion process to also receive inputs weights w_i, where each w_i controls the relative importance of assertion a_i during the image generation process.
In this paper, we mainly consider the following two methods for obtaining such parametric control.
Prompt Weighting. Instead of computing the CLIP <cit.> features from original prompt 𝒫
we use prompt-weighting <cit.> to modify the input CLIP embeddings to the diffusion model as,
CLIP(𝒫) = 𝒲(𝒫, {CLIP(p_i),w_i}_i=1^n))
where 𝒲 refers to the prompt-weighting function from <cit.>, p_i refers to the sub-prompt (Sec. <ref>) corresponding to assertion a_i, and weights w_i
control the relative weight of different sub-prompts p_i in computing the overall CLIP embedding for prompt 𝒫.
Cross-Attention Control.
Similar to <cit.>, we also explore the idea of modifying the noise latents z_t during the reverse diffusion process, to increase the cross-attention strength of the main noun-subject for each sub-assertion a_i. However, instead of only applying the gradient update for the least dominant subject <cit.>, we modify the loss for the latent update in parametric form as,
z_t = z_t - α∇_z_tℒ(z_t, {w_i}_i=1^n), ℒ(z_t, {w_i}_i=1^n) = ∑_i w_i (1- max G(𝒜^t_i)),
where α is the step-size, 𝒜^t_i refer to the attention map corresponding to the main noun-subject in assertion a_i, G is a smoothing function and weights w_i control the extent to which the expression of different noun-subjects in the prompt (for each assertion) will be increased in the next iteration.
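A schematic PyTorch version of this latent update is sketched below. It assumes the cross-attention maps were produced by a forward pass in which z_t requires gradients; the attention maps and the smoothing function are assumed to come from elsewhere and are not part of any released implementation:

import torch

def update_latents(z_t, subject_attention_maps, weights, alpha=20.0, smooth=None):
    # L = sum_i w_i * (1 - max G(A_i)); one gradient step on the noise latents.
    # z_t must require gradients and the attention maps must depend on it.
    smooth = smooth or (lambda A: A)   # e.g. a Gaussian blur in the full method
    loss = sum(w * (1.0 - smooth(A).max())
               for w, A in zip(weights, subject_attention_maps))
    grad = torch.autograd.grad(loss, z_t)[0]
    return z_t - alpha * grad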
Iterative Refinement. Given the above parametric formulation for controlling the expression of different assertions, we next propose a simple yet effective iterative refinement approach towards improving text-to-image alignment. In particular, at any iteration k ∈ [1,5] during the refinement process, we first compute both the overall text-image alignment score Ω(ℐ_k,𝒫) and the assertion-level alignment scores u_i(ℐ_k, a_i).
The image generation output ℐ_k+1 for the next iteration is then computed as,
ℐ_k+1 = 𝒟(𝒫,{w^k+1_i}_i=1^n);
w_i^k+1 =
w_i^k + Δ, if i = argmin_j u_j(ℐ_k, a_j)
w_i^k otherwise,
where 𝒟 refers to the parametrized diffusion model and Δ is a hyper-parameter. This iterative process is then performed until a desirable threshold for the overall alignment score Ω(ℐ_k,𝒫) is reached. The image generation output ℐ^⋆ at the end of the refinement process is then computed as,
ℐ^⋆ = argmax_ℐ_kΩ(ℐ_k,𝒫).
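Putting the pieces together, the refinement loop itself is short. In the sketch below, generate(prompt, weights) stands for the parameterized diffusion model and score_fn for the DA-Score computation above; both are placeholders, and the default hyperparameter values are illustrative only:

def eval_and_refine(prompt, n_assertions, generate, score_fn,
                    n_iters=5, delta=0.3, threshold=0.9):
    # Iteratively raise the weight of the least-aligned assertion (Eq. above).
    weights = [1.0] * n_assertions
    best_img, best_score = None, -1.0
    for _ in range(n_iters):
        img = generate(prompt, weights)
        overall, per_assertion = score_fn(img)      # DA-Score and per-assertion u_i
        if overall > best_score:
            best_img, best_score = img, overall
        if overall >= threshold:
            break
        weights[int(per_assertion.argmin())] += delta
    return best_img, best_score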
§ EXPERIMENTS
Dataset.
Since there are no openly available datasets addressing semantic challenges in text-based image generation with human annotations, we introduce a new benchmark dataset Decomposable-Captions-4k for method comparison. The dataset consists of a total of 24,960 human annotations on images generated using all methods <cit.> (including ours) across a diverse set of 4,160 input prompts. Each image is given a rating between 1 and 5 (where 1 represents that `image is irrelevant to the prompt' and 5 represents that `image is an accurate match for the prompt').
Furthermore, unlike prior works <cit.> which predominantly analyse the performance on relatively simple prompts with two subjects (object a and object b), we construct a systematically diverse pool of input prompts for better understanding text-to-image alignment across varying complexities in the text prompt. In particular, the prompts for the dataset are designed to encapsulate two axes of complexity: number of subjects and realism. The number of subjects refers to the number of main objects described in the input prompt and varies from 2 (e.g., a cat with a ball) to 5 (e.g., a woman walking her dog on a leash by the beach during sunset). Similarly, the realism of a prompt is defined as the degree to which different concepts naturally co-occur together and varies as easy, medium, hard and very hard. easy typically refers to prompts where concepts naturally co-occur together (e.g., a dog in a park) while very hard refers to prompts where the concept combination is very rare (e.g., a dog playing a piano). Further details regarding the dataset are provided in the supplementary material.
§.§ Evaluating Text-to-Image Alignment
Baselines. We compare the performance of the Decompositional-Alignment Score with prior works on evaluating text-to-image alignment in a reference-free manner. In particular, we show comparisons with CLIP <cit.>, BLIP <cit.> and BLIP2 <cit.> scores where the text-to-image alignment score is computed using the cosine similarity between the corresponding image and text embeddings. We also include comparisons with BLIP-ITM and BLIP2-ITM which directly predict a binary image-text matching score (between 0 and 1) for input prompt and output image.
Finally, we report results on the recently proposed text-to-text (T2T) similarity metric <cit.> which computes image-text similarity as the average cosine similarity between input prompt and captions generated (using BLIP) from the input image.
Quantitative Results. Fig. <ref> shows the correlation between human annotations and predicted text-to-image alignment scores across different metrics on the Decomposable-Captions dataset. We observe that the DA-Score shows a significantly higher correlation with human evaluation ratings as opposed to prior works across varying number of subjects N ∈ [2,5] in the input prompt. We also note that while the recently proposed T2T similarity score <cit.> shows comparable correlation with ours for N=2,
its performance significantly drops as the number of subjects in the input prompt increases.
§.§ Improving Text-to-Image Alignment
In this section, we compare the performance of our iterative refinement approach with prior works on improving text-to-image alignment in a training-free manner. In particular, we show comparisons with 1) Stable Diffusion <cit.>, 2) Composable Diffusion <cit.> 3) StructureDiffusion <cit.> and 4) Attend-and-Excite <cit.>.
All images are generated using the same seed across all methods.
Qualitative Results. Results are shown in Fig. <ref>. As shown, we observe that Composable Diffusion <cit.> struggles to generate photorealistic combinations of objects, especially as the number of subjects in the prompt increases.
StructureDiffusion <cit.> helps in addressing some missing objects, e.g., the telescope in example-1, but the generated images tend to be semantically similar to those produced by the original Stable Diffusion model, and thus it does not significantly improve text-to-image alignment.
Attend-and-Excite <cit.> shows much better performance in addressing missing objects (e.g., the telescope in example-1 and the umbrella in example-4). However, as summarized in Fig. <ref>, we observe that it suffers from 3 main challenges: 1) Object Relationship (Fig. <ref>a): we observe that despite containing the desired objects, generated images may sometimes fail to convey the relationship between them. For example, in row-1 of Fig. <ref>, while the output images show both a lion and a guitar, the lion does not seem to be playing the guitar. In contrast, Eval-and-Refine is able to capture both the presence of and the relation between objects in a better manner. 2) Overlapping Entities (Fig. <ref>b):
For images with overlapping entities (e.g., person and spacesuit), we observe that Attend-and-Excite <cit.> typically spends most of its gradient updates balancing between the overlapping entities, as both entities (person and spacesuit) occupy the same cross-attention region. This can lead to outputs where a) other important aspects (e.g., the lake in Col-3) or b) one of the two entities (e.g., the spacesuit) are ignored.
3) Prompt Complexity (Fig. <ref>c): Finally, we note that since Attend-and-Excite <cit.> is limited to applying the cross-attention update w.r.t the least dominant subject, as the complexity of the input prompt 𝒫 increases, it may miss some objects (e.g., umbrella, beach, sunny day) during the generation process. In contrast, the iterative nature of our approach allows it to keep refining the output image ℐ until a desirable threshold for the overall image-text alignment score Ω(ℐ,𝒫) is reached.
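To make the evaluate-and-refine procedure described above concrete, the following Python sketch outlines one possible structure of the loop. It is not the authors' implementation: generate_fn, score_fn and reweight_fn are hypothetical callables standing in for the diffusion sampler, the VQA-based assertion scoring, and the prompt re-weighting / cross-attention update step, and the threshold value and the simple averaging used for the overall score are illustrative assumptions only.

def evaluate_and_refine(prompt, assertions, generate_fn, score_fn, reweight_fn,
                        threshold=0.8, max_iters=5):
    # start with uniform per-assertion prompt weights
    weights = {a: 1.0 for a in assertions}
    image, overall = None, 0.0
    for _ in range(max_iters):
        image = generate_fn(prompt, weights)                  # diffusion sampling
        scores = {a: score_fn(image, a) for a in assertions}  # per-assertion VQA scores
        overall = sum(scores.values()) / len(scores)          # overall alignment score
        if overall >= threshold:                              # good enough: stop refining
            break
        weights = reweight_fn(weights, scores)                # emphasize weakest assertions
    return image, overall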
Quantitative Results. In addition to qualitative experiments, we also evaluate the efficacy of our approach using human evaluations. In this regard, we report three metrics: 1) normalized human score, which refers to the average human rating (normalized between 0 and 1) for images generated on the Decomposable-Captions-4k dataset;
2) accuracy, indicating the percentage of generated images which are considered an accurate match (rating: 5) for the input text prompt by a human subject;
3) pairwise-preference, where human subjects are shown a pair of images generated using our method and a prior work, and are asked to classify each image-pair as a win, loss or tie (win meaning our method is preferred).
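For reference, these three quantities can be computed from raw annotation data as in the following sketch (our own illustrative code, assuming ratings in {1,...,5} and pairwise outcomes recorded as "win"/"loss"/"tie"; the released evaluation scripts may differ).

def normalized_human_score(ratings):
    # map ratings 1..5 linearly onto 0..1 and average
    return sum((r - 1) / 4 for r in ratings) / len(ratings)

def accuracy(ratings):
    # fraction of images judged an accurate match (rating 5)
    return sum(r == 5 for r in ratings) / len(ratings)

def pairwise_preference(outcomes):
    # fraction of wins / losses / ties for our method versus a baseline
    n = len(outcomes)
    return {k: outcomes.count(k) / n for k in ("win", "loss", "tie")}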
For our approach we consider two variants: 1) Ours (PW), which performs iterative refinement using only prompt-weighting, and 2) Ours (PW + CA), where iterative refinement is performed using both prompt weighting and cross-attention updates (Sec. <ref>). Pairwise preference scores are reported using Ours (PW + CA) when comparing with prior works.
Results are shown in Fig. <ref> and Tab. <ref>.
We observe that while the text-to-image alignment accuracy for all methods decreases with increased difficulty of the input text prompts (Fig. <ref>), our approach with only prompt-weighting is able to consistently perform on par with or better than Attend-and-Excite <cit.>. The further introduction of cross-attention updates (Sec. <ref>) allows our approach to exhibit even better performance, outperforming Attend-and-Excite <cit.> by 8.67% in terms of the overall alignment accuracy of the generated images. These improvements are also reflected in the pairwise comparisons, where human subjects tend to prefer our approach over prior works <cit.>.
Inference time comparison.
Tab. <ref> shows a comparison of the average inference time (per image) for our approach with prior works <cit.>. We observe that despite the use of an iterative process, the overall inference time of our approach is comparable with prior works. This occurs because prior works themselves often include additional steps. For instance, Composable-Diffusion <cit.> requires the computation of separate denoising latents for each statement in the conjunction/disjunction operation, thereby increasing the overall inference time almost linearly with the number of subjects. Similarly, Attend-and-Excite <cit.> includes additional gradient descent steps for modifying cross-attention maps.
Moreover, such an increase is incurred even if the baseline Stable-Diffusion <cit.> model already generates accurate images. In contrast, the proposed iterative refinement approach is able to adaptively adjust the number of iterations required for the generation process by monitoring the proposed DA-Score to evaluate whether the generation outputs are already good enough.
§ CONCLUSION
In this paper, we explore a simple yet effective decompositional approach for both evaluation and improvement of text-to-image alignment with latent diffusion models. To this end, we first propose a Decompositional-Alignment Score which, given a complex prompt, breaks it down into a set of disjoint assertions. The alignment of each of these assertions with the generated image is then measured using a VQA model. The assertion-based alignment scores are finally combined to give an overall text-to-image alignment score. Experimental results show that the proposed metric shows significantly higher correlation with human subject ratings over traditional CLIP and BLIP based image-text matching scores. Finally, we propose a simple iterative refinement approach which uses the decompositional-alignment scores as feedback to gradually improve the quality of the generated images. Despite its simplicity, we find that the proposed approach is able to surpass the previous state-of-the-art on text-to-image alignment accuracy while on average using only marginally higher inference times.
We hope that our research can open new avenues for robust deployment of text-to-image models for practical applications.
|
http://arxiv.org/abs/2307.04938v2 | 20230710232809 | Periodicity staircase in a Fe/Gd magnetic thin film | [
"Arnab Singh",
"Junli Li",
"Sergio A. Montoya",
"Sophie Morley",
"Peter Fischer",
"Steve D. Kevan",
"Eric E. Fullerton",
"Dao-Xin Yao",
"Trinanjan Datta",
"Sujoy Roy"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cond-mat.mes-hall"
] |
These two authors contributed equally.
Materials Science Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
These two authors contributed equally.
State Key Laboratory of Optoelectronic Materials and Technologies, Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, Center for Neutron Science and Technology, School of Physics, Sun Yat-Sen University, Guangzhou 510275, China
Center for Magnetic Recording Research, University of California San Diego, La Jolla Ca, USA
Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
Materials Science Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
Department of Physics, University of California Santa Cruz, Santa Cruz CA, USA
Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
Center for Magnetic Recording Research, University of California San Diego, La Jolla Ca, USA
[Corresponding author:][email protected]
State Key Laboratory of Optoelectronic Materials and Technologies, Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, Center for Neutron Science and Technology, School of Physics, Sun Yat-Sen University, Guangzhou 510275, China
International Quantum Academy, Shenzhen 518048, China
[Corresponding author:][email protected]
Department of Physics and Biophysics, Augusta University, 1120 15th Street, Augusta, Georgia 30912, USA
Kavli Institute for Theoretical Physics, University of California, Santa Barbara, California 93106, USA
[Corresponding author:][email protected]
Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
Presence of multiple competing periodicities may result in a system to go through states with modulated periodicities, an example of which is the self-similar staircase-like structure called the Devil's staircase. Herein we report on a novel staircase structure of domain periodicity in an amorphous and achiral Fe/Gd magnetic thin film wherein the reciprocal space wavevector Q due to the ordered stripe domains does not evolve continuously, rather exhibits a staircase structure. Resonant X-ray scattering experiments
show jumps in the periodicity of the stripe domains as a function of an external magnetic field. When resolved in components, the step change along Q_x was found to be an integral multiple of a minimum step height of 7 nm, which closely resembles the exchange length of the system. Modeling the magnetic texture in the Fe/Gd thin film as an achiral spin arrangement, we have been able to reproduce the steps in the magnetization using a Landau-Lifshitz spin dynamics calculation. Our results indicate that anisotropy, and not the dipolar interaction, is the dominant cause of the staircase pattern, thereby revealing the effect of achiral magnetism.
Periodicity staircase in a Fe/Gd magnetic thin film
Sujoy Roy
October 2023
===================================================
Introduction.
The appearance of staircase-like structures is a fascinating phenomenon that is observed in a variety of condensed matter systems. In 2D electron gases, quantized conductance is manifested as a step feature in Hall effect measurements <cit.>. In quantum materials, the interplay of competing interactions with multiple periodicities in a system can give rise to a ground state whose length scales are defined by the modulation of the original periodicities. Examples of such modulated periodicities include commensurate and incommensurate phases, such as density waves in solids <cit.>, stripes and charge density waves in cuprate superconductors <cit.>, the charge ordered state in manganites <cit.>, and helical spin structures in magnetic systems <cit.>. A well known staircase structure is the Devil's staircase, which appears when a system goes through numerous phase-locked modulated periodicities <cit.>. Devil's staircases have been observed in magnetic systems <cit.>, liquid crystals <cit.> and ferroelectrics <cit.>. Apart from fundamental science, staircase structures have potential applications in metrology, sensing devices, etc. <cit.>.
Interesting staircase structures in domain size and in magnetoresistance have been observed in Dzyaloshinskii-Moriya interaction (DMI) based solitonic systems <cit.>. Competition between the symmetric exchange interaction and the antisymmetric DMI can give rise to interesting magnetic phases, such as helix, stripe and skyrmion phases <cit.>. DMI based chiral magnetic order in a helimagnet is called a Dzyaloshinskii type helimagnet structure, while helical magnetic order due to competition between ferromagnetic and antiferromagnetic exchange interactions is known as a helimagnetic structure of the Yoshimori type <cit.>. The chiral magnetic structures in a helimagnet exhibit solitons that can be manipulated by an external magnetic field <cit.>. More specifically, the soliton periodicity changes in a step wise manner, which is attributed to discrete changes in the soliton number because of confinement at the grain boundaries <cit.>. Field evolution of confined helicoids has also been shown to occur via discrete steps in the helical magnet MnSi <cit.>. The thin film structure of MnSi accommodates a finite number of turns and the jumps are explained by the annihilation of individual turns of the helicoid.
In this article we report on the appearance of a staircase structure of the scattering wave vector Q due to the ordered stripe domains in an amorphous Fe/Gd thin film. In contrast to the single crystals with DMI described in the previous paragraph, the amorphous Fe/Gd thin film is a perpendicular magnetic anisotropy (PMA) system with dominant dipolar interactions and negligible DMI <cit.>. The presence of dipolar interactions can support an achiral phase, which in the stripe phase causes the spins to reverse their direction of orientation twice within a period, resulting in no net chirality. We performed resonant coherent soft X-ray scattering to study variations in the periodicity of the aligned stripe domains. Depending on the applied field condition it is possible to obtain a skyrmion lattice phase that consists of equal numbers of skyrmions with opposite chiralities (+1 and -1) <cit.>. We observed that the scattering wave vector Q changes in steps with no well defined step height and width. However, when Q is resolved into components Q_x and Q_y, the steps along Q_x were found to be in integer multiples of 7 nm, which is close to the exchange length of the system. At higher temperature the steps were smeared due to thermal fluctuations.
Our X-ray scattering studies have been complemented by spin dynamics calculations that take into account the achiral nature of the system. We have simulated an experimentally observed (non-equilibrium) process in which global and local domain dynamics delicately balance each other. On one hand we have the divergence in the total periodicity of the stripes with increasing magnetic field. On the other hand this is counterbalanced by the local minority stripe width, which cannot fall below a certain size. Thus, instead of a bulk macroscopic motion of domain walls over the entire sample, these competing tendencies cause the local forces and energetics experienced by the minority domains to locally annihilate some of these half-periods. This in turn leads to (as observed experimentally and verified theoretically) a local readjustment of the domain sizes. By defining two length scales related to global and local achirality, we have been able to theoretically generate steps as a function of applied magnetic field and show the important role that anisotropy plays in generating the steps in these systems. We have developed a theoretical model for the appearance of steps using exchange, dipole and anisotropy terms. Our calculations indicate that the origin of the steps lies in the anisotropy term. Even if exchange and dipole interactions are present, the absence of anisotropy does not produce steps. Thus, although the appearance of steps looks similar in single crystal DMI materials and the amorphous Fe/Gd sample, the physical origin of the steps in the two systems is different.
Results.
Resonant Scattering due to Stripes
The scattering geometry for the experimental set-up is shown in Fig.<ref>(a). An X-ray beam whose energy is tuned to the Fe L_3 edge is incident normally on the sample. A pinhole was placed in the beam path 5 mm upstream of the sample to establish transverse coherence of the beam. In this geometry the X-ray photons are sensitive to the spins that are aligned along the beam direction. The scattering pattern was collected on a charge coupled device (CCD) camera placed about 0.5 m downstream. Resonant X-ray scattering measurements are sensitive to the static magnetic structure (S(q)) and the spatial correlation length (ξ_s). From the position and intensity of the Bragg peaks it is possible to extract information about the periodicity and strength of the magnetic order. Fig.<ref>(b) shows the full field X-ray microscope images (top panel) and the X-ray resonant scattering pattern (bottom panel) of the sample. We observed three distinct magnetic phases, namely, disordered stripe, ordered stripe and skyrmions, which can be obtained by varying either the temperature or the applied magnetic field. The X-ray real space images were obtained by varying the applied magnetic field at 300 K, while the resonant X-ray scattering data was measured from LN_2 temperatures to room temperature as a function of the applied magnetic field. In the ordered stripe phase (T = 239 K) the domain periodicity (2π/Q) at remanence is (119 ± 5) nm. The stripe pattern persists as the field is increased from zero to around 170 mT, when new peaks in a distorted hexagonal pattern start to appear, indicating a transition to the skyrmion phase. These observations are consistent with previous reports <cit.>.
Fig.<ref>(c) shows the behavior of the integrated intensity of the 1^st and 2^nd order diffraction peaks from the ordered stripe domains. At low applied magnetic field the 1^st order peak intensity is maximal and the 2^nd order is minimal. This is because of the equal width of the up and down domains. Applied out-of-plane magnetic fields break this symmetry, causing even order diffraction peaks to appear. Around 170 mT, the intensity of both peaks starts to diminish and eventually new peaks in the form of a hexagonal pattern appear (see Fig.<ref>(b), bottom panel). It is interesting to note that in the hexagonal phase we observe two relatively strong intensity spots along the same direction as the stripes, which would indicate that the original direction of the stripes is somehow retained even in the hexagonal phase.
Staircase Structure of Q-vector
The evolution of the stripe-diffraction spot in Q-space is shown in Fig.<ref>(a) as a function of the applied field at T = 230 K. At the start of the field cycle, the momentum transfer vector is Q_1 ( = 0.052 nm^-1). As the field increases, the magnetization increases and the size of the favorable domains (along the field direction) also increases, leading to an increased domain periodicity and hence a smaller Q-value. Interestingly, we observed that the Q-value corresponding to the magnetic Bragg peak decreases in discrete steps as a function of applied magnetic field, giving rise to a staircase-like structure.
The evolution of the domain periodicity happens in several steps that involve sudden jumps and the appearance of modulated periodicities. We find that along with the main magnetic Bragg peak, a much weaker satellite peak develops at a smaller Q-value, and both peaks evolve in an interesting way as the field is changed. The increase in field first leads to the appearance of an initially weak satellite peak at Q_2 (at a smaller value than the zero-field Bragg peak at Q_1). With further increase in the field, the main Bragg peak (Q_1) suddenly merges with Q_2, giving rise to a step-like feature. Since the position and intensity of the Bragg peak give the periodicity of the stripe domains and the strength of the domain scattering, we can conclude that the number of domains with periodicity P_1= 2π/Q_1 decreases with increasing field while the number of domains of periodicity P_2 = 2π/Q_2 starts to increase, and finally all the domains suddenly transform to the periodicity P_2. This sequence of events, changing Q from Q_1 to Q_7 with a similar mechanism of peak shifts (Q_1→ Q_2; Q_3→Q_4; Q_5→Q_6), was observed throughout the stripe phase (see Fig. <ref>(a)). In some cases, a direct change in the Q-values corresponding to the Bragg peaks without any satellite (Q_2 →Q_3; Q_4→Q_5) was also observed.
In Fig.<ref>(b) we convert the wavevector into the real space periodicity (2π/Q) and plot it as a function of applied field at different temperatures. At higher temperatures the total number of steps increases, which results in the appearance of the first step at much lower fields than at lower temperatures. The correlation values of the stripe-diffraction spot at different fields, computed with respect to the pattern at remanence, are shown as a function of increasing magnetic field in Fig.<ref>(c). Any subtle change in the speckle pattern between two frames taken at 0 mT and H mT will result in a value of the correlation coefficient (CC), which is defined by
CC=∑_m∑_n(A_mn-A̅)(B_mn-B̅)/√((∑_m∑_n(A_mn-A̅)^2))√((∑_m∑_n(B_mn-B̅)^2)),
where A and B correspond to the two images taken at two different field values. A_mn denotes the intensity value of the pixel at the m^th row and n^th column of the 2D scattering image. A̅ is the mean value of the 2D image. If CC=1 then the two images are perfectly correlated, CC=0 means completely de-correlated, and CC values lying between 0 and 1 mean partially correlated. The variation of the correlation coefficient can thus be attributed in real space to a change in the magnetization, density or periodicity of the stripes, or any combination of these factors, with applied field. In Fig.<ref>(c) the CC is calculated between the scattering image taken at remanence (zero field) and another image taken at higher field. In this way the CC plot in Fig.<ref>(c) is generated with increasing field with respect to zero field. The correlation coefficient also exhibits step-like behaviour (blue line). As a control, we also calculated correlation coefficients for the Airy pattern, which remain fairly close to unity at all fields.
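For completeness, the correlation coefficient defined above is simply a normalized cross-correlation of the two speckle images; a minimal numerical sketch (our own, assuming the images are supplied as 2D arrays of pixel intensities) is:

import numpy as np

def correlation_coefficient(A, B):
    # normalized cross-correlation of two speckle images, as in the CC definition above
    dA = A - A.mean()
    dB = B - B.mean()
    return float((dA * dB).sum() / np.sqrt((dA ** 2).sum() * (dB ** 2).sum()))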
Resolving staircase along Q_x and Q_y direction
A typical diffraction pattern containing only the centro-symmetric first order peaks is represented in Fig.<ref>(a) along with their in-plane Q-vectors. The enlarged image of the diffraction spot in Fig.<ref>(b) exhibits modulation with speckles, indicative of heterogeneity in the magnetic phase. The diffraction spots appear at about 45^∘ to the beam propagation direction (see Fig.<ref>(a)), meaning the domains are oriented 45^∘ to the X-ray propagation direction. This is due to the presence of a small in-plane field. We resolved the resultant Q-vector along its Q_x and Q_y components to obtain information about the change in the stripe periodicity along the real-space X and Y directions, thereby obtaining the real-space values L_x and L_y as shown in the schematic representation in Fig.<ref>(c). We find that the steps along L_x are significantly more distinct than those in the L_y direction (see Fig.<ref>(d and e)).
Interestingly, we found that the steps along L_x change in multiples of 7 nm. That is, the minimum change in periodicity along L_x is 7 nm. No such relationship was found in the L_y direction. The magnitude of the change in L_y is small and random compared to L_x (Fig.<ref> (d,e)). A schematic of a possible stripe domain arrangement is shown in Fig. <ref>(c). The blue (red) domains are majority (minority) domains. The stripes are slanted with respect to the applied field direction along z (there is a small in-plane field along the x-direction). We know from the experimental result that as the applied field is increased, the Q-vector of the magnetic Bragg peak moves to a lower value, but maintains its orientation of 45^∘ with respect to the beam direction. This indicates that the minority domains shrink but the stripe domains overall maintain their slant. Step changes in multiples of 7 nm along L_x then mean that the periodicity changes perpendicular to the beam direction, but along the small in-plane field direction, take place in multiples of 7 nm. Interestingly, this value matches the exchange length (L_ex) of the Fe/Gd thin film. One way to think about this behavior is that as the minority domains shrink, there is a minimum distance between domain walls below which there cannot be a smooth deformation of the spin texture. In the theory section we show that, indeed, by defining a term that signifies the ratio of the spin-twist length to the spin-chain length, it is possible to predict the jumps. At higher temperatures we observed an increase in the number of steps in the average-periodicity curves (see Fig.<ref>(f and g)) as a function of applied OOP field. This is due to the fact that thermal fluctuations aid in a faster transition from one step to the other; as a result we obtain more steps at 236 K than at 85 K, even though the field range over which such steps occur is much larger at lower temperatures.
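As an illustration, the conversion from the measured peak-position components to real-space periodicities and the extraction of the discrete jumps can be done as in the following sketch (our own code, not the analysis pipeline of the experiment; the 1 nm threshold used to separate genuine jumps from noise is an assumption).

import numpy as np

def step_sizes(Qx, unit=7.0, threshold=1.0):
    # Qx: array of fitted peak-position components (nm^-1) versus applied field
    Lx = 2 * np.pi / np.asarray(Qx)            # real-space periodicity along x (nm)
    jumps = np.diff(Lx)                         # change between consecutive field values
    steps = jumps[np.abs(jumps) > threshold]    # keep only the discrete jumps
    return steps, steps / unit                  # jump sizes and their ratio to 7 nm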
The existence of steps in solitonic systems with DMI has been observed experimentally and explained theoretically <cit.>. The presence of DMI introduces a topologically protected kink in the spin texture. The topological protection of the kink means that there is an energy cost to kink annihilation. Different topological sectors have different energies, which is the reason for the step-like features. In contrast, in Fe/Gd thin films the dominant interactions are exchange, dipole, and anisotropy. This supports an achiral magnetic structure. So far there have been no theoretical studies of the step-like behaviour in dipole-interaction-dominant achiral spin structures in an amorphous system. In the theoretical model presented in the next section, we have mimicked the experimental conditions by investigating a one-dimensional dipolar-mediated spin chain which is achiral in nature. We have numerically solved the Landau-Lifshitz (LL) equation of motion to understand the magnetization dynamics observed in the Fe/Gd thin film experiment. Based on our calculations we show that the origin of the step-like behaviour under the application of an external OOP magnetic field can be explained by the spin dynamics of an achiral spin chain.
Model and theory.
The spin kinks caused by long range dipolar interaction in the Fe/Gd thin film can be classified by a number n. In Fig. <ref> we show the arrangement of spins in a finite-size chain under zero applied magnetic field with fixed boundary condition on both ends. Both the local achiral structure and global achiral structure (describing the Fe/Gd thin film) are shown for comparison and context.
We consider a N-site 1D chain where spins interact with exchange interaction, dipolar interaction, PMA, and the in- and out- of plane magnetic field. The spin on each site is parameterized as
S_i=(sinθ_icosφ_i,sinθ_isinφ_i,cosθ_i),
where the site spin angle φ_i=2π ni/𝒩 and θ_i=π/2. Here i=0,1,2,...,𝒩 where N = 𝒩 + 1. The kink sectors are classified by n which indicate the number of domains existing in the chain. The Hamiltonian for our Fe/Gd thin film is
H=H_J+H_D+H_K+H_h,
where the meaning and expression of each term is given by
H_J=-J∑_i ∈ 𝒩S_i·S_i+1 (exchange),
H_D=D∑_i,j ∈ scS_i·S_jΠ_ij (dipolar interaction),
H_K=-K_U∑_i(S_i·x)^2 (anisotropy),
H_h=-gμ_BH_x∑_iS^x_i-gμ_BH_y∑_iS^y_i (magnetic field).
In the above i either denotes the lattice site in the 1D chain or the location of a spin site inside a supercell (sc). The exchange interaction strength is given by J, the dipolar interaction coupling by D, the anisotropy by K_U, and the in- and out- of plane magnetic field by H_x and H_y, respectively. The symbol g denotes the gyromagnetic ratio and the μ_B is the Bohr magneton. The Π_ij in the dipolar interaction term is the Ewald coefficient which captures the long-range nature of the dipolar interaction. Using the angular representation of the spin S_i we can write the total energy as
H/JS^2 =-∑_i ∈ Ncos(φ_i+1-φ_i)+J_d∑_i,j ∈ scΠ_ijcos(φ_i-φ_j)
-K∑_i cos^2φ_i-h_x∑_icosφ_i-h_y∑_isinφ_i,
where we have now introduced the scaled variables J_d=D/J, K=K_U/J, h_x=gμ_BH_x/J and h_y=gμ_BH_y/J. In all our figures we will report the scaled fields in milli-units, that is, h_x = 1 stands for 10^-3 scaled field units.
We implement the local achiral spin structure shown in Fig. <ref>(a) to perform the LL simulation. To mimic the finite size of the experimental sample and to allow for the domains to grow and collapse as observed experimentally, we utilized an embedding trick to simulate the LL equations-of-motion (EOM). To capture experimentally realistic sample conditions, from a computational perspective, we introduced the concept of a local coefficient T. From a physical perspective, T represents the ratio of the length of the achiral structure (which contains the twist sectors solely) over the length of the 1D chain. Thus, the achiral structure is embedded within a global ferromagnetic background spin texture. As we discuss later, the jumps occur due to the rearrangement of domain walls. The computational embedding trick allows us to capture the spontaneous rearrangement of the twist sectors in the chain configuration, thereby simulating the growth and collapse of the achiral domain walls. Our numerical simulations indicate that the eventual fate of the twist sectors and subsequent realization of jumps (as observed experimentally) is a subtle balance between J_d, K, and N. We compute the minimum energy E_min using Eq. (<ref>). The magnetization M is calculated using
M=1/N∑_i=0^𝒩cosφ_i.
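As an illustration of how the scaled energy and the magnetization above can be evaluated numerically, the sketch below (our own code, not the authors') computes both quantities for a given configuration of in-plane angles. The dipolar term is written with a precomputed table Pi[m] of Ewald coefficients Π_m (see Methods); the exact supercell bookkeeping and pair-counting convention of the paper are simplified here and should be treated as assumptions.

import numpy as np

def energy_and_magnetization(phi, J_d, K, h_x, h_y, Pi):
    # phi: array of in-plane angles phi_0 ... phi_{N-1}; Pi: table of Ewald coefficients Pi[|i-j|]
    E = -np.sum(np.cos(np.diff(phi)))                              # exchange term
    N = len(phi)
    for i in range(N):                                              # simplified dipolar term
        for j in range(i + 1, min(i + len(Pi), N)):
            E += J_d * Pi[j - i] * np.cos(phi[i] - phi[j])
    E -= K * np.sum(np.cos(phi) ** 2)                               # anisotropy term
    E -= h_x * np.sum(np.cos(phi)) + h_y * np.sum(np.sin(phi))      # Zeeman terms
    M = np.mean(np.cos(phi))                                        # magnetization of Eq. above
    return E, M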
We present the energy and corresponding magnetization response of the local achiral state in Fig. <ref>. When the anisotropy is absent, we observe that the energy is degenerate for different twist sectors and no jumps are created by enhancing the dipolar interaction (see Fig. <ref>(a)-(c)). Moreover, it indicates that larger dipolar parameters induce a downshift in energy with no visible effects on the magnetization behavior in the local achiral state. In the presence of anisotropy, we keep the dipolar interaction constant and increase the K parameter as shown in Fig. <ref>(c)-(e). We compute the LL dynamics on a chain of local achiral state with different anisotropy parameters. We find that upon enhancing anisotropy in the presence of a magnetic field, the energy degeneracy of the different twist sectors is broken with a simple upshift. With a relatively small T=1/4 and a strong enough anisotropy K=0.2, we observed jumps in both energy and magnetization in response to magnetic field as shown in Fig. <ref>(e).
In Fig. <ref>(f)-(j) we show our calculations of the energy and magnetization response as T is varied. With decreasing T, jumps begin to happen in the energy response with higher twist sectors. When T>1/2, jumps happen in energy curves with n=6, as shown in Fig. <ref>(f)-(h). However, for smaller T, jumps happen in energy curves with smaller twist sectors n and at lower magnetic field intensity h_x. In both Fig. <ref>(i) and Fig. <ref>(j), jumps happen when the twist sector is n⩾ 4, and the critical magnetic field intensity for the first jump to happen decreases as T decreases. We find that the energy response is more effective in revealing the disappearance of kinks, while the jumps in the magnetization response might be caused by the position shifting of the kinks.
We considered chains with a larger number of sites. When the number of sites is N=432 and the dipolar parameter J_d=0.00916, jumps can be observed in the energy curve with twist sector n=4 and local coefficient T=1/4. However, no jumps can be observed with T=1/3 or 1/2 (plots not shown). This behavior can also be seen in a system with N=864. The result that decreasing T contributes to the jumps is also consistent with the N=216 system. Moreover, when J_d increases, more kinks are able to establish and more jumps are observed. Thus, we conclude that not just a declining local coefficient T, but also a rising dipolar parameter J_d, results in jumps happening in the energy curve with smaller twist sector n and weaker magnetic field h_x.
There are no jumps in the energy and magnetization response caused by kinks disappearing in the global achiral state (see Fig. <ref>(b) for the global achiral state). The false jumps and oscillations in energy and magnetization for the global achiral state are contributions from the position shifting of kinks. Hence, the 1D model in the local achiral state is more capable of explaining the origin of the jumps observed in the REXS experiment than the one in the global achiral state.
Discussion.
In this work we have shown experimentally that in an amorphous and achiral Fe/Gd magnetic thin film that exhibits aligned stripes, the domain periodicity changes in steps because of the abrupt disappearance of domains. This result is interesting in itself because the exchange-dipole mediated Fe/Gd thin film system shows step-like behavior similar to that of DMI based solitonic systems, even though global chirality is absent in the system. Since the presence of DMI can be ruled out in the Fe/Gd thin film <cit.>, there is no inherent topological protection for the stabilized magnetic structure <cit.>. Thus, the achiral nature of the system prevents it from generating topologically stable spin twist sectors. In this sense, the spin twists that are formed due to the competition between exchange and dipolar interactions should be smoothly transformable to the ferromagnetic ground state by a finite deformation.
The existence of steps, as observed in the REXS experiments, indicates that along with global achirality there must be local structures with spin-twist characteristics. Intuitively, due to the achiral spin texture of the domains, as the magnetic field is increased the minority domains start to shrink, causing two "like-domains" to come closer. The minimum distance between the two domains is set by the two spin kinks on either side, which should be equivalent to the length of two domain walls. Using the well known formula l_w=√(J/K), where l_w is the domain wall width, the domain wall width for Fe/Gd comes out to ≈ 3.2 nm, twice of which is 6.4 nm, in close agreement with the experimental value of 7 nm. Thus the minimum distance between the two like-domains comes out to be equivalent to the exchange length (L_ex) of the system in our experimental study. The above explanation also points to the existence of "global" and "local" length scales in the system, which give rise to two energy scales. It is these competing energy scales that give rise to the steps. Our system is reminiscent of the case of modulated periodicities.
We take the achiral nature of the stripe spin structure as an important point in our theoretical development and show that the magnetization steps can indeed be observed in an achiral magnet. The variation and interplay of the length scales is captured in the parameter T. Analysis of the energy expression with different values of T suggests that in an achiral spin arrangement a staircase structure can be observed only for certain specific ratios of the 1D spin-chain to spin-twist length scales. Although simplistic, our LL calculation using the local achiral spin structure shown in Fig. <ref>(a) is able to capture the essential feature that the system exhibits jumps in response to an external magnetic field. The jumps happen only when anisotropy is present. The absence of anisotropy leads to a degeneracy of the energy response for different twist sectors, meaning an absence of jumps in the system. Our study provides evidence and further impetus to study achiral magnetic textures, both from an experimental and a theoretical viewpoint.
Acknowledgements.
J. L. and D. X. Yao are supported by NKRDPC-2022YFA1402802, NKRDPC-2018YFA0306001, NSFC-92165204, NSFC-11974432, and Shenzhen International Quantum Academy. This work in the USA was primarily supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract no. DE-AC02-05-CH11231 within the Nonequilibrium Magnetic Materials Program (MSMAG). Work at the ALS, LBNL was supported by the Director, Office of Science, Office of Basic Energy Sciences, of the US Department of Energy (Contract no. DE AC02-05CH11231). The research at UCSD was supported by the National Science Foundation, Division of Materials Research (Award #: 2105400). T. D. acknowledges hospitality of KITP. A part of this research was completed at KITP and was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. T.D. acknowledges helpful and insightful discussions on domain dynamics with Ulrich Rößler.
Author contributions. S.R. and A.S. conceived the experiment. A.S. and S.R. performed X-ray experiments. A.S., S.M. S.R. S.D.K. and P.F., analyzed the data and discussed experimental interpretation. S.A.M. and E.E.F. synthesized samples and performed magnetic characterization. The theory was conceived by T.D., J. L., and D. X.Y. J. L. performed the calculations. T.D and D.X.Y. checked the calculations. All authors contributed to the discussion and writing of the manuscript.
Methods.
Experimental details.
The coherent X-ray magnetic scattering measurements were performed at beamline 12.0.2.2 of the Advanced Light Source, LBNL. The incident beam was tuned to the Fe L_3 edge (707 eV). Transverse coherence of the X-ray beam was established by inserting a 10 µm pinhole in the beampath before the sample. The scattering experiment was done in the transmission geometry at temperatures ranging from 40 K to 300 K as a function of the OOP magnetic field from 0 mT to 500 mT (Fig.<ref>(a)). The sample was subjected to the following initial magnetic field protocol. First the field was raised to 500 mT, then lowered to -500 mT and finally brought to zero before taking the measurements. The field ramp rate for the first two legs is 13 mT/sec, while the final drop of field from -500 mT to 0 mT took place at a rate of 380 mT/sec. We start our measurement at this zero-field condition and proceed to measure the diffraction signal as a function of applied magnetic field at a constant rate of 1.575 mT/s. A Charge Coupled Device (CCD) camera placed about 0.5 m downstream of the sample was used to record the scattered intensity patterns.
Theoretical method.
Ewald method. In the Hamiltonian calculation, Ewald summation is applied, which is given by
Π_ij =√(2/π)1/3σ^3∑_ne^-|r_ij- 𝗇 L|^3/2σ^2+4π/Ω∑_k≠ 0e^-σ^2k^2/2cos(kr_ij)
-√(2/π)1/3σ^3δ_ij,
where r_ij represents the distance between two spin sites, L is the size of the supercell, n is the supercell label, σ is the real-space cut-off, k is the momentum space label, and Ω is the volume of the supercell which in our case is equal to L. The Ewald parameter will be redefined as Π_ij =Π_|i-j|≡Π_m, where the symbol m tracks the number of Ewald parameters for the specific supercell size choice. The values of Π_m are shown in Table. <ref>.
Landau-Lifshitz (LL) equation of motion. We perform a LL EOM spin dynamics calculation on Eq. <ref>. We obtained an iterative equation which can be used to calculate the angle φ_i of each spin on the chain. Based on our computations, we are able to generate a stabilized spin order along the chain. Next, we computed the twist angle Δφ of the ground state in the absence of an external magnetic field using the energy minimization condition for the supercell given by the expression
E/JS^2=-cosΔφ+J_d∑_m=1^L-1(L-m)cos(mΔφ)Π_m.
The angle φ_i was analyzed to obtain the relationship between the range of J_d and the number of kinks for a given lattice size of site N. The relationship between the number of sites N, dipolar parameter J_d and the maximal sector n_max is shown in Table. <ref>. Note that to perform the calculation, one needs to choose a supercell size that stabilizes the ground state and ensures that there will be minimal to no numerical oscillations in the computed result due to convergence issues. We found that L=6 is the optimal supercell size which yields numerically stable results for our LL analysis. Using the numerically stable data, we computed the minimum energy E_min (scaled relative to J S^2) and magnetization M (scaled relative to S).
To compare our numerical results with the experimental setup of the multilayer Fe-Gd system, we need to mimic the experimental conditions. Therefore, all the results are calculated by applying a tiny in-plane field h_y.
The two angular variable EOMs are given by (for ħ=1)
Ssinθ_i∂_tθ_i=∂ H/∂φ_i, Ssinθ_i∂_tφ_i=-∂ H/∂θ_i,
where only the first equation is required because the angle θ is held constant. Using Eq. (<ref>) we can obtain the following expressions
0=1/2[sin(φ_i-φ_i-1)-sin(φ_i+1-φ_i)]
+J_d∑_j ∈ sc[sin(φ_i+|i-j|-φ_i)-sin(φ_i-φ_i-|i-j|)]Π_ij
+2Ksinφ_i cosφ_i+h_xsinφ_i-h_ycosφ_i.
The above can be split further into a form convenient for a numerical iterative self-consistent approach to solve for the angle ϕ_i. Hence, we write
A_i =1/2(sinφ_i+1+sinφ_i-1)-J_d∑_j ∈ sc(sinφ_i+|i-j|
+sinφ_i-|i-j|)Π_ij-Ksinφ_i+h_y
B_i =1/2(cosφ_i+1+cosφ_i-1)-J_d∑_j ∈ sc(cosφ_i+|i-j|
+cosφ_i-|i-j|)Π_ij+Kcosφ_i+h_x,
with the site angle φ_i defined as
sinφ_i=A_i/√(A_i^2+B_i^2), cosφ_i=B_i/√(A_i^2+B_i^2).
The LLG equation is solved with the boundary condition φ_0=0 and φ_𝒩=2π n.
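A minimal sketch (our own, reusing the same simplified dipolar convention as in the energy sketch above rather than the full supercell bookkeeping) of this self-consistent iteration with fixed boundary conditions φ_0=0 and φ_𝒩=2πn is given below; convergence behaviour for a given parameter set is not guaranteed by the sketch itself.

import numpy as np

def relax_chain(N, n, J_d, K, h_x, h_y, Pi, sweeps=2000):
    # initial guess: uniform twist interpolating the fixed boundary values
    phi = np.linspace(0.0, 2 * np.pi * n, N + 1)
    for _ in range(sweeps):
        for i in range(1, N):                      # phi[0] and phi[N] stay fixed
            A = 0.5 * (np.sin(phi[i + 1]) + np.sin(phi[i - 1])) - K * np.sin(phi[i]) + h_y
            B = 0.5 * (np.cos(phi[i + 1]) + np.cos(phi[i - 1])) + K * np.cos(phi[i]) + h_x
            for m in range(1, len(Pi)):            # simplified long-range dipolar term
                if i + m <= N:
                    A -= J_d * Pi[m] * np.sin(phi[i + m])
                    B -= J_d * Pi[m] * np.cos(phi[i + m])
                if i - m >= 0:
                    A -= J_d * Pi[m] * np.sin(phi[i - m])
                    B -= J_d * Pi[m] * np.cos(phi[i - m])
            phi[i] = np.arctan2(A, B)              # sin(phi_i) = A/r, cos(phi_i) = B/r
    return phi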
|
http://arxiv.org/abs/2307.04258v1 | 20230709200728 | Classicality from Quantum Stochastic Processes | [
"Esteban Martínez-Vargas"
] | quant-ph | [
"quant-ph"
] |
[email protected]
Física Teòrica: Informació i Fenòmens Quàntics, Departament de Física, Universitat Autònoma de Barcelona, 08193 Bellatera (Barcelona) Spain
We develop a theory of classicality from quantum systems. This theory stems
from the study of classical and quantum stationary stochastic processes.
The stochastic processes are characterized by polyhedral (classical) and
semidefinite representative (quantum) cones.
Based on a previous result <cit.> we expand the study of
fixed points from quantum channels. We give a semidefinite program that
characterizes a quantum channel separating into a core and a part that
decays with many iterations. In general, the solution is non-separable in the
space it is defined. We present a characterization of channels in terms of
their fixed points for the separable case. A quantum simulation of a polyhedral
cone can then be constructed.
Classicality from Quantum Stochastic Processes
Esteban Martínez Vargas
August 12, 2023
==============================================
When describing the cause of a phenomenon, a good practice is to think of the
simplest possible explanation; this philosophical principle is called
Ockham's razor <cit.>.
The theory of stochastic processes is a very
general framework for describing a wide variety of systems, in biology, economics,
chemistry, physics, etc. <cit.>.
Specifically, the modeling through Hidden Markov Models has been widely studied <cit.>.
These kinds of models arise when we have a stationary structure.
These objects have
been widely studied classically and in quantum systems <cit.>.
Quantum stochastic processes are needed because quantum systems are always open to complex environments that
affect the evolution of the system, and also for foundational aspects of quantum mechanics <cit.>.
Even though intuition indicates that the simplest and most practical description would
correspond to classical information theory, several results show that
there is an advantageous simplicity in using quantum systems to describe stochastic
processes, even when the processes themselves are classical <cit.>.
This means that quantum mechanics would be a natural language for stochastic processes.
Nevertheless, this perspective would imply a paradoxical worldview: if quantum mechanics
is the most natural way to describe stochastic processes, which is a very general tool
to describe aspects of the world (classical and quantum) then why does our world seem classical?
Quantum stochastic processes thus bring us back to a question that has been
raised since the inception of quantum mechanics a hundred years ago, namely
the passage from quantum to classical dynamics, usually expressed as the
correspondence principle <cit.>.
There is a large amount of work in this respect, specifically in the area of einselection
and quantum Darwinism
<cit.>.
Given that there exists an informational advantage, one may ask why classicality exists at all.
Here we aim to study this topic using the formalism of quantum stochastic processes.
The approach that the theory of einselection assumes is that there is a constant
feature of a system, which is its environment (or system plus environment). It thus aims to study the features of the
system which are resistant to decoherence. Classicality from a quantum perspective
is thus defined as those features which persist in time.
In this line of thought, Hidden Markov Models arise when the notion of stationarity
is relevant; the stationarity of a stochastic process therefore points to a persistent structure that produces it.
However, there exist stationary processes produced from quantum sources, as
characterized by Monràs and Winter in their ('O Scarrafone) theorem <cit.>.
There, they characterize the most general stationary stochastic process produced
by a quantum source. Therefore, stationarity in itself is not sufficient for
the notion of classicality. In einselection a central aspect is also the objectivity
of a specific basis, there are einselected states. The fact that these states are
unchanged by the dynamics is also central to the objectivity of the system.
Therefore, we will understand classicality in a quantum stochastic process if it
fulfills two conditions:
* Persistence in time.
* Objectivity of its generating states.
In this paper, we explore a possible mechanism for the persistence in
time of an objective set of quantum states that gives rise to a Markov process.
We consider discrete uses of a quantum channel and the objective set will be made
of the fixed points of the channel.
Observe that quantum channels have at
least one fixed point <cit.>.
We here consider channels with multiple fixed points. A finite collection of such
points, which are vectors, generates what is known as a polyhedral cone. Following a theorem
by Dharmadhikari <cit.>, this is a necessary and sufficient
condition for having a stationary Markov process. We thus restrict the Monràs and Winter
conditions to those required by Dharmadhikari.
Although the fixed points of quantum channels have been studied in the literature <cit.>,
it is a cumbersome topic: apart from some theorems, given a channel there is no general
way of finding its fixed points, and
numerical simulations are almost always the norm <cit.>.
Here we characterize quantum channels in the inverse direction:
given a finite number of states, we explore all the quantum channels that have them as
fixed points. We extend the results of <cit.> to consider channels with multiple fixed
points and give an example of the power of this approach.
First, we introduce Dharmadhikari's theorem and the Monràs and Winter theorem. We then explain the problem of classicality from the quantum perspective as a “cone reduction” problem.
Finally, we introduce our study of multiple fixed point quantum channels and apply it to
an example. We finish with a discussion.
§ QUASI-REALIZATIONS OF STOCHASTIC PROCESSES
To study the different dynamics, classical and quantum, a general framework of
stationary stochastic processes will be introduced: the theory of quasi-realizations.
From an abstract point of view, a stochastic process is given by an alphabet of symbols
ℳ with size |ℳ|=m and we denote ℳ^l the set of words
of length l. We define
ℳ^* = ∪_l≥0ℳ^l.
We can obtain the probability of a specific word 𝐮=(u_1,u_2,…,u_l)∈ℳ^l,
p(𝐮)=p(Y_1=u_1,Y_2=u_2,…,Y_n=u_n).
It will be relevant to study stationary distributions as they describe the stochastic process
asymptotically.
We would like to infer what is the inner mechanism that
gives rise to a stochastic process Y. A very widely known kind of matrices that produce a stochastic process are the stochastic matrices, which are non-negative
matrices whose rows sum 1. However, in general, the hidden mechanism of a
stochastic process need not be described by a stochastic matrix.
To see clearly this affirmation we make the following definition
A quasi-realization of a stochastic process is a quadruple (𝒱,π,D,τ),
where 𝒱 is a vector space, τ∈𝒱, π∈𝒱^*, the dual
space to 𝒱 and
D is a unital representation of ℳ^* over 𝒱,
D^(ε) = 1,
D^(u)D^(v) =D^(uv), ∀ u,v∈ℳ^*.
We will call cause matrices the matrices
D^c = ∑_u∈ℳD^(u),
where D^(u) was defined above.
Observe that several quasi-realizations can yield the same stochastic process; we will call
equivalent two quasi-realizations that generate the same stochastic process.
For us, the relevant outcome of this definition will be the quantum version, that is, to find
stochastic processes whose cause matrices are not necessarily stochastic matrices.
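As a numerical illustration of a quasi-realization, the sketch below assumes the standard convention of this formalism, namely that word probabilities are obtained by contracting π with the product of the letters' matrices and τ, p(u) = π D^(u_1)⋯D^(u_l) τ (this convention is an assumption here, taken from the quasi-realization literature cited above); the example matrices are arbitrary and happen to form a positive realization, i.e. a two-state hidden Markov model.

import numpy as np

def word_probability(pi, D, tau, word):
    # pi: row vector (functional), D: dict of per-letter matrices, tau: column vector
    v = tau
    for u in reversed(word):
        v = D[u] @ v
    return float(pi @ v)

# Example of a positive realization (rows of D[0] + D[1] sum to 1).
D = {0: np.array([[0.5, 0.1], [0.2, 0.1]]),
     1: np.array([[0.3, 0.1], [0.3, 0.4]])}
M = D[0] + D[1]                                   # stochastic cause matrix
tau = np.ones(2)
pi = np.array([0.5, 0.5]) @ np.linalg.matrix_power(M, 200)   # approximate stationary pi
print(word_probability(pi, D, tau, (0, 1, 1)))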
§.§ Classical cones: Dharmadhikari's theorem
Observe now that, since the stochastic matrices are a subclass of cause matrices,
we need to find the conditions under which the cause matrices of a quasi-realization
(R^d,π,M,τ) become nonnegative matrices and
M^s = ∑_u∈ℳM^(u),
is stochastic.
This is precisely given by Dharmadhikari's theorem; it gives us the conditions
for having a positive realization, which means that eq. (<ref>)
is fulfilled, π∈(R^d)^* is a stationary distribution and τ=(1,1,…,1).
Given a quasi-realization (𝒱,π,D,τ), an equivalent positive
realization exists if and only if there is a convex pointed polyhedral cone
𝒞⊂𝒱 such that
* τ∈𝒞.
* D^(v)(𝒞)⊆𝒞.
* π∈𝒞^*.
With 𝒞^* the dual cone of 𝒞.
We thus need all the dynamics to be restricted to a polyhedral cone.
§.§ SDR cones: Scarrafone
For quantum systems the type of cause matrices is different. The
characterization of the quasi-realizations, in this case, is related to the
theorem in ref <cit.>.
Given a quasi-realization (𝒱,π,D,τ) an
equivalent, finite-dimensional, unital, completely positive realization
(ℬ(ℋ)^sa,ρ,ε,ℐ) exists if and only
if there is a SDR cone 𝒫⊂𝒱⊗𝒱^*
such that
* τ∈𝒞.
* D^(u)∈𝒫 for all u∈ℳ.
* π∈𝒞^*.
With 𝒞 a SDR cone and 𝒞^* its dual defined in
<cit.>.
§ CLASSICAL DYNAMICS AS A FIXED POINT PROBLEM
Any quantum dynamics is described by an SDR cone. This cone includes an instrument that
maps states from the cone into itself. Theorem (<ref>) offers a description
for general instruments. Observe, however, that for instruments with a constant channel
in all iterations asymptotic behavior of channels becomes relevant.
This situation implies analyzing the channels from the perspective of their fixed points,
as all channels have at least one by theorem (4.24) of <cit.>.
The theory of quantum einselection by Zurek et al. implies a model of reduction from quantum dynamics
to classical dynamics. It states that the classical behavior of a system is given by certain states
that are selected over time by the interaction Hamiltonian, that is, by the interaction with
an environment. Observe that this approach implies a continuous evolution of the system. The einselected
states are the ones that remain over time. Another important aspect is that the environment, and thus
the Hamiltonian remains constant over time. A Hamiltonian of changing potential would change itself
and therefore einselection would not be possible.
We here want to study an analogous point of view. The classical dynamics will be given by a
polyhedral cone. The reduction from quantum to classical will be given in terms of fixed points of
a quantum channel. The time evolution will thus be discrete, but with the channel always kept constant.
We do not require that an environment imposes some evolution but we require constant dynamics.
The classical cone is a result of constant quantum dynamics.
This point of view allows us to extend the einselection process from only a set of pure states to
possibly mixed states. Such states would not necessarily be orthogonal. We will show a decomposition
theorem for channels, to decompose any channel into its fixed point plus a part that goes to zero.
Then we show a bound to obtain this decomposition given a channel. Finally, we explore the decomposition
of multiple fixed points, which would imply a polyhedral cone.
§ CONE REDUCTION PROBLEM
In this section, we develop a general theory to describe the transition between
quantum mechanics and classical mechanics.
Starting from the historical perspective, the theorem (<ref>) by Dharmadhikari finds the
conditions for the existence of equivalent positive realizations given a realization.
We therefore can translate this mathematical statement into a physically relevant
one with the following postulate
Any discrete classical transformation is described by a convex polyhedral cone.
In general, any quantum Markov process is inscribed in a SDR cone. Following the
principles of quantum mechanics and the theorem (<ref>) by Monràs and Winter
it is natural to state the following postulate
Any discrete quantum mechanical transformation is described within a
SDR cone.
To our knowledge quantum mechanics is a fundamental and universal theory <cit.>.
Therefore, any classical dynamics should arise from a SDR cone, and the following
postulate arises naturally
Any classical (convex polyhedral) cone is embedded in a larger quantum (SDR) cone.
This last postulate, however, demands a mathematical treatment that
makes it more concrete. In Fig. <ref> we present a diagram of a quantum
SDR cone containing a classical cone.
The main problem then is how to reduce a SDR cone to a classical polyhedral one.
As mentioned before, a simple reduction mechanism is to suppose an instrument with a constant
channel all the time. Then, because
of theorem (4.24) from <cit.> the channel has at least one
fixed point. The output of the channel is reduced to a single state.
However, a channel can have several fixed points, thus forming a polyhedral
cone. This motivates the study of channels with multiple fixed points.
§ MULTIPLE FIXED POINT CHANNELS
We first cite a result from <cit.>: we have the following
characterization of a channel in terms of its fixed point.
Given a state σ we can describe a trace-preserving separable family of channels with fixed point σ
in terms of its Choi matrix 𝒞 as follows
𝒞=σ⊗(|V_max⟩⟨V_max|)^⊺/λ_max
+B⊗(I-(|V_max⟩⟨V_max|)^⊺/λ_max),
λ_max is the maximum eigenvalue of σ and |V_max⟩ its correspondent eigenvector.
B is a state. This description is valid for ⟨V_max|B|V_max⟩≤λ_max and
any input state ⟨V_max|ρ|V_max⟩≤λ_max.
Analogously to the SDP used in <cit.> to find the above characterization
we have a way to find the channels that have as fixed points some desired
states.
The SDP for finding the channel with minimum trace with two fixed points is the
following
maximize over X: -Tr[X]
subject to Tr_ℋ_2[X(1_ℋ_1⊗σ^⊺_0)]=σ_0
Tr_ℋ_2[X(1_ℋ_1⊗σ^⊺_1)]=σ_1
X≥ 0.
A trace-preserving channel is found as follows, in terms of a state B,
𝒞 = X + B⊗(1-Tr_H_1[X]).
We further require that
Tr[X(1_H_1⊗ B^⊺)]< 1,
so that iterations converge.
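A hedged numerical sketch of this SDP using cvxpy (assuming a recent cvxpy version that provides cvxpy.partial_trace; the solver choice is left at its default) could read as follows; sigma0 and sigma1 are d x d density matrices supplied by the user.

import numpy as np
import cvxpy as cp

def minimal_trace_fixed_point_part(sigma0, sigma1):
    d = sigma0.shape[0]
    X = cp.Variable((d * d, d * d), hermitian=True)
    cons = [X >> 0]
    for sigma in (sigma0, sigma1):
        # constraint: Tr_{H_2}[ X (1_{H_1} ⊗ sigma^T) ] = sigma
        lhs = cp.partial_trace(X @ np.kron(np.eye(d), sigma.T), dims=[d, d], axis=1)
        cons.append(lhs == sigma)
    # maximizing -Tr[X] is the same as minimizing Tr[X]
    prob = cp.Problem(cp.Minimize(cp.real(cp.trace(X))), cons)
    prob.solve()
    return X.value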
We nevertheless lack a general characterization of the solution X in terms
of the states σ_0 and σ_1 in contrast to the
one-state case. Observe that if {σ_0,σ_1}∈ℋ then
X∈ℋ⊗ℋ and in general X is an entangled operator in that space.
As a special case, we have the following characterization for when X is separable.
A channel with Choi matrix 𝒞 has two fixed points σ_0 and
σ_1 can be written as
𝒞=σ_0⊗Π_0^⊺/Tr[Π_0σ_0]
+σ_1⊗Π_1^⊺/Tr[Π_1σ_1]
+B⊗(I-Π_0^⊺/Tr[Π_0σ_0]-Π_1^⊺/Tr[Π_1σ_1]),
if and only if the states σ_0 and σ_1 can be
unambiguously discriminated. This means that there exist positive semidefinite operators Π_0 and Π_1
such that Tr[σ_0Π_1]=Tr[σ_1Π_0]=0
and Tr[Π_0σ_0]≠0 and Tr[Π_1σ_1]≠0.
We also require Tr[BΠ_0]/Tr[Π_0σ_0]+Tr[BΠ_1]/Tr[Π_1σ_1]<1.
Observe that the channel 𝒞
fulfills the constraints of the SDP (<ref>); specifically,
Tr_H_2[𝒞(1⊗σ_0^⊺)]= σ_0 + σ_1Tr[σ_0Π_1]/Tr[Π_1σ_1]-BTr[σ_0Π_1]/Tr[Π_1σ_1],
which is equal to σ_0 if and only if Tr[σ_0Π_1]=0.
Analogously Tr_H_2[𝒞(1⊗σ_1^⊺)]= σ_1
if and only if Tr[σ_1Π_0]=0.
The generalization to the case of more fixed points is straightforward: we add
more constraints to the SDP (<ref>) and, analogously, add projectors
to the decomposition (<ref>).
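For the separable case of the theorem above, the Choi matrix and the fixed-point check can be written down directly; the sketch below is our own illustration and assumes that the supplied σ_0, σ_1, Π_0, Π_1 and B satisfy the hypotheses of the theorem.

import numpy as np

def choi_two_fixed_points(sigma0, sigma1, Pi0, Pi1, B):
    # C = sigma0 ⊗ Pi0^T/Tr[Pi0 sigma0] + sigma1 ⊗ Pi1^T/Tr[Pi1 sigma1] + B ⊗ (rest)
    d = sigma0.shape[0]
    t0 = Pi0.T / np.trace(Pi0 @ sigma0)
    t1 = Pi1.T / np.trace(Pi1 @ sigma1)
    return np.kron(sigma0, t0) + np.kron(sigma1, t1) + np.kron(B, np.eye(d) - t0 - t1)

def apply_channel(choi, rho):
    # action used in the fixed-point constraints: rho -> Tr_{H_2}[ C (1 ⊗ rho^T) ]
    d = rho.shape[0]
    M = choi @ np.kron(np.eye(d), rho.T)
    return np.einsum('ikjk->ij', M.reshape(d, d, d, d))

# consistency check (assuming Tr[sigma_0 Pi_1] = Tr[sigma_1 Pi_0] = 0):
# C = choi_two_fixed_points(sigma0, sigma1, Pi0, Pi1, B)
# np.allclose(apply_channel(C, sigma0), sigma0) and np.allclose(apply_channel(C, sigma1), sigma1)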
§ SIMULATION OF A POLYHEDRAL CONE
Observe that a classical cone can be easily simulated with an SDR cone.
We start with a state ρ_0, apply a multiple-fixed point channel, end
up in a state σ_i. Then, we can apply a random channel to get
out of the fixed point, and apply the multiple-fixed point channel again
to end up in another state σ_j, etc., as depicted below.
ρ_0 →Φ^n[ρ_0]≈σ_i→φ_rand[σ_i]=ρ_0^'
ρ_0^' →Φ^n[ρ_0^']≈σ_j→φ_rand[σ_j]=ρ_0^''
…
Φ is the multiple fixed point channel and φ_rand is a random channel.
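The protocol sketched above can be mimicked numerically as follows. The sketch is our own: the random channel used here (conjugation by a random unitary) is only one convenient choice and is not prescribed by the text, and apply_channel implements the action ρ ↦ Tr_H_2[𝒞(1⊗ρ^⊺)] used throughout this section.

import numpy as np

def apply_channel(choi, rho):
    d = rho.shape[0]
    M = choi @ np.kron(np.eye(d), rho.T)
    return np.einsum('ikjk->ij', M.reshape(d, d, d, d))

def random_unitary_channel(rho, rng):
    # random unitary via QR decomposition of a complex Gaussian matrix
    d = rho.shape[0]
    Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, _ = np.linalg.qr(Z)
    return Q @ rho @ Q.conj().T

def simulate_cone(choi, rho0, hops=10, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    rho, visited = rho0, []
    for _ in range(hops):
        for _ in range(n_iter):                   # relax onto a fixed point of Phi
            rho = apply_channel(choi, rho)
        visited.append(rho.copy())
        rho = random_unitary_channel(rho, rng)    # kick out of the fixed point
    return visited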
§ EXAMPLE
Consider the Hilbert space spanned by two qubits. We can take the Bell basis and label
it as follows
|V_0⟩ =|00⟩+|11⟩/√(2),
|V_1⟩ =|00⟩-|11⟩/√(2),
|V_2⟩ =|01⟩+|10⟩/√(2),
|V_3⟩ =|01⟩-|10⟩/√(2).
We also define the states
|V_1^0⟩ =α_0|V_1⟩+β_0|V_2⟩,
|V_2^0⟩ =δ_0|V_1⟩+ϵ_0|V_2⟩,
|V_1^1⟩ =α_1|V_1⟩+β_1|V_2⟩,
|V_2^1⟩ =δ_1|V_1⟩+ϵ_1|V_2⟩.
Now let us define the mixed states
σ_0 =s_0|V_0⟩⟨V_0|+s_1|V_1^0⟩⟨V_1^0|+s_2|V_2^0⟩⟨V_2^0|.
σ_1 =r_0|V_1⟩⟨V_1|+r_1|V_1^1⟩⟨V_1^1|+r_2|V_2^1⟩⟨V_2^1|.
We can easily verify that ⟨V^1|σ_0|V^1⟩=⟨V^0|σ_1|V^0⟩=0 and therefore
the characterization of fixed points from Eq. (<ref>) is valid.
§ DISCUSSION
We develop here a general model to simulate a polyhedral cone using quantum systems. This
model is very general as it allows mixed states to be the vectors that subtend the cone.
We observe here a passage from a quantum system that has non-classical correlations
to a system that behaves classically, in terms of the stochastic processes they
produce. This is an example of a change of behavior from quantum dynamics into
classical dynamics which is closely related to the theory of einselection and
quantum Darwinism. Here the mechanism is analogous to the case of einselection
because it involves the repetitive action of a quantum channel.
However, the process yields the fixed points of the channel. We describe here
a way to construct quantum channels with desired specific fixed points. It can
be viewed as an engineering of quantum channels with multiple fixed points, which
gives rise to the classical behavior in terms of a polyhedral cone.
Observe that the most general characterization of our method is given by the SDP (<ref>),
which yields an operator X that in general can be entangled in the space in which it is defined.
This means X∈ℋ⊗ℋ, where ℋ is the Hilbert space of
σ_0 and σ_1; in general, X is not separable in those subspaces. In theorem
<ref> we explore the separable case. A full
characterization of the solution X in the entangled case is a
matter of future research.
The mechanism that we study here is a specific one; however, the formalism could
be extended to consider other possible mechanisms that reduce the quantum
dynamics of a system to a classical one. This would extend the study of
quantum-to-classical transitions.
|
http://arxiv.org/abs/2307.04421v2 | 20230710085412 | Towards Enabling Cardiac Digital Twins of Myocardial Infarction Using Deep Computational Models for Inverse Inference | [
"Lei Li",
"Julia Camps",
"Zhinuo",
"Wang",
"Abhirup Banerjee",
"Marcel Beetz",
"Blanca Rodriguez",
"Vicente Grau"
] | eess.SP | [
"eess.SP",
"cs.CV",
"eess.IV"
] |
Towards Enabling Cardiac Digital Twins of Myocardial Infarction Using Deep Computational Models for Inverse Inference
Lei Li, Julia Camps, Zhinuo (Jenny) Wang, Abhirup Banerjee, Marcel Beetz, Blanca Rodriguez, and Vicente Grau
Corresponding author: Lei Li (e-mail: [email protected]).
This work was supported by the CompBioMed 2 Centre of Excellence in Computational Biomedicine (European Commission Horizon 2020 research and innovation programme, grant agreement No. 823712).
L. Li was partially supported by the SJTU 2021 Outstanding Doctoral Graduate Development Scholarship.
A. Banerjee is a Royal Society University Research Fellow and is supported by the Royal Society Grant No. URF\R1\221314.
The work of A. Banerjee and V. Grau was partially supported by the British Heart Foundation Project under Grant PG/20/21/35082.
Lei Li, Abhirup Banerjee, Marcel Beetz, and Vicente Grau are with the Department of Engineering Science, University of Oxford, Oxford, UK.
Julia Camps, Zhinuo (Jenny) Wang, and Blanca Rodriguez are with the Department of Computer Science, University of Oxford, Oxford, UK.
Received / Accepted
==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Myocardial infarction (MI) demands precise and swift diagnosis.
Cardiac digital twins (CDTs) have the potential to offer individualized evaluation of cardiac function in a non-invasive manner, making them a promising approach for personalized diagnosis and treatment planning of MI.
The inference of accurate myocardial tissue properties is crucial in creating a reliable CDT platform, and particularly in the context of studying MI.
In this work, we investigate the feasibility of inferring myocardial tissue properties from the electrocardiogram (ECG), focusing on the development of a comprehensive CDT platform specifically designed for MI.
The platform integrates multi-modal data, such as cardiac MRI and ECG, to enhance the accuracy and reliability of the inferred tissue properties.
We perform a sensitivity analysis based on computer simulations, systematically exploring the effects of infarct location, size, degree of transmurality, and electrical activity alteration on the simulated QRS complex of ECG, to establish the limits of the approach.
We subsequently propose a deep computational model to infer infarct location and distribution from the simulated QRS.
The in silico experimental results show that our model can effectively capture the complex relationships between the QRS signals and the corresponding infarct regions, with promising potential for clinical application in the future.
The code will be released publicly once the manuscript is accepted for publication.
Cardiac digital twins, myocardial infarction, inverse problem, cardiac MRI, QRS, multi-modal integration.
§ INTRODUCTION
Myocardial infarction (MI) is a major cause of mortality and disability worldwide <cit.>.
Assessment of myocardial viability is essential in the diagnosis and treatment management for patients suffering from MI.
In particular, the location and distribution of myocardial scars provide important information for patient selection and treatment planning.
Late gadolinium enhancement (LGE) magnetic resonance imaging (MRI) has been widely used to characterize myocardial scars <cit.>.
However, the incorporation of LGE into MRI examination prolongs scan times, and has potential side effects <cit.>.
Recent studies have tried to delineate scars using non-enhanced cine MRI, with promising preliminary results <cit.>.
Alternatively, the electrocardiogram (ECG) can be used to reveal abnormalities related to electrophysiology post-MI <cit.>.
For example, ST-segment elevation and T-wave inversion are commonly used indicators of cardiac remodeling associated with different stages of MI <cit.>.
In contrast, QRS patterns have received less attention in the literature, though they also provide valuable information about the extent and location of myocardial damage following an MI <cit.>.
It is still partly unclear how QRS abnormalities reflect MI characteristics, such as location, size, transmural extent, and cardiac electrical activity alterations.
Therefore, a reliable technique to detect and delineate infarct regions combining non-enhanced imaging and QRS data is highly desirable.
Cardiac “digital twin" (CDT) technology can create virtual models of the heart combining cardiac images, ECG, and other subject-specific information <cit.>.
It allows clinicians to visualize and analyze the structure, function, and electrical activity of the heart in real-time, providing valuable insights into the underlying mechanisms of MI <cit.>.
As fig:intro:CDT shows, CDT workflows usually involve two stages, namely anatomical and functional twinnings, which present various challenges to overcome <cit.>.
The anatomical twinning stage involves the segmentation of cardiac images, reconstruction of the 3D geometry of the heart, and the identification and extraction of relevant anatomical structures.
It is complicated by the variability in the heart's anatomy across individuals, as well as by imaging artifacts and noise.
At the functional twinning stage, the main challenge is to solve the inverse problem of electrocardiography, i.e. inferring electrophysiological properties in the myocardium from the ECG.
This is complicated by the limitations of ECG recordings, which are sparse, noisy, and subject to substantial uncertainties.
To solve the inverse problem, state-of-the-art approaches can be coarsely separated into two kinds: deterministic and probabilistic methods <cit.>.
Deterministic approaches in cardiac electrophysiology involve minimizing a cost function that quantifies the discrepancy between the observed data and the model predictions.
For robust inverse, spatial and/ or temporal regularization <cit.> and physics-informed regularization <cit.> have been widely used.
Probabilistic methods rely on Bayesian inference theory and numerical techniques to generate posterior distributions for the model parameters <cit.>.
They can incorporate prior knowledge into the parameter estimation with an uncertainty, which can be used to guide decision-making and assess the robustness of the results <cit.>.
Nevertheless, conventional probabilistic methods are usually computationally expensive, as repeated numerical simulations are required to generate samples for the posterior distribution.
Recently, deep learning based probabilistic methods have emerged as an alternative to conventional methods for modeling complex dynamics of cardiac electrical activity.
They can leverage deep neural networks to approximate the posterior distribution of the model parameters or latent variables, providing faster and more accurate approximations.
For example, Ghimire et al. <cit.> proposed a deep generative model to reconstruct cardiac transmembrane potential from ECG data.
Li et al. <cit.> designed a deep computational model for the inverse inference of ventricular activation properties in a non-invasive and efficient manner.
Xie et al. <cit.> employed a physics-constrained deep learning framework to inversely predict the heart-surface electrical signals from body surface potential maps.
Sahli et al. <cit.> developed physics-information neural networks for the reconstruction of activation maps in cardiac electrophysiology.
Dhamala et al. <cit.> proposed a generative variational autoencoder for parameter estimation of a personalized cardiac model.
In addition to inferring the electrophysiological properties under sinus rhythm, several studies tried to investigate the propagation of cardiac electrical signals under arrhythmias based on deep neural networks.
For example, Meister et al. <cit.> employed graph convolutional neural networks to estimate the depolarization patterns in the myocardium with scars.
Bacoyannis et al. <cit.> reconstructed activation patterns of the myocardium with various local wall thicknesses, as thin walls indicate infarct regions.
However, with regards to different post-MI scenarios, the inverse inference of electrophysiological heterogeneity in the infarct regions has not been fully investigated.
In this work, we develop a deep computational model for the inverse inference of post-MI with different properties, varying the infarct location, size, and transmural extent.
We first conduct a sensitivity analysis to investigate the relationship between QRS abnormalities and infarct characteristics in post-MI.
This analysis provides insights into how variations in QRS signals are associated with specific infarct properties, informing the subsequent inverse inference process.
The framework can efficiently combine the anatomical properties from cine MRI and electrophysiological information from QRS simulated via a biventricular electromechanical model of post-MI.
This study provides an integrated and personalised perspective that incorporates the features from multi-modal data to predict tissue properties of post-MI, enabling the construction of a CDT platform.
To the best of our knowledge, this is the first deep learning based computational model that addresses the inverse inference of MI with different characteristics.
§ METHODOLOGY
§.§ Anatomical Twinning: Mesh Reconstruction
At the anatomical twinning stage, we reconstruct a subject-specific 3D torso-biventricular tetrahedral mesh from multi-view cardiac MRIs <cit.>.
Specifically, for the biventricular reconstruction, we first use a deep learning based ventricle segmentation from long- and short-axis cardiac MRIs and thus obtain sparse 3D contours.
We then perform a misalignment correction based on the intensity and contour information coupled with a statistical shape model, followed by a surface mesh reconstruction and volumetric tetrahedral mesh generation.
We utilize a two-step automated framework for the torso reconstruction, and the locations of the ECG electrodes (I, II, V1-V6, LA, RA, LL, RL) are measured from the personalized 3D torso mesh.
To ensure a symmetric, consistent, and intuitive biventricular representation across various geometries, we project the biventricular mesh into a consistent biventricular coordinates (Cobiveco) system <cit.>.
The Cobiveco system is defined by (tm, ab, rt, tv), which correspond to transmural, apicobasal, rotational, and transventricular coordinates, respectively.
The reader is referred to the anatomical twinning stage of fig:intro:CDT for the illustration of Cobiveco (tv is excluded there).
We represent infarct areas in the myocardium as an ellipse with radii r_tm, r_ab, and r_rt as follows,
(tm_i - tm_0)^2/r_tm^2 + (ab_i - ab_0)^2/r_ab^2 + (rt_i - rt_0)^2/r_rt^2≤ 1,
where (tm_0, ab_0, rt_0) is the center coordinate of the scar region.
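For illustration, a direct implementation of this membership test on the Cobiveco coordinates of the mesh nodes could look like the following sketch (array and function names are ours; a production implementation would additionally wrap the periodic rotational coordinate).
import numpy as np
def infarct_mask(tm, ab, rt, center, radii):
    # tm, ab, rt: arrays of Cobiveco coordinates of the mesh nodes.
    # center: (tm_0, ab_0, rt_0) of the scar; radii: (r_tm, r_ab, r_rt).
    tm0, ab0, rt0 = center
    r_tm, r_ab, r_rt = radii
    q = ((tm - tm0) / r_tm) ** 2 + ((ab - ab0) / r_ab) ** 2 + ((rt - rt0) / r_rt) ** 2
    return q <= 1.0   # boolean mask of nodes inside the ellipsoidal infarct region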
We consider different post-MI scenarios, including seven locations, two transmural extents, two different sizes, and two different cardiac electrical activity alterations.
As fig:method:17AHA_MI_location shows, one can define the infarct areas consistently in the 17-segment American Heart Association (AHA) map <cit.>, enabling the study of the effects of MI properties at a population level.
Note that in this study, we only consider the scars in the left ventricle (LV), as the majority of clinically significant myocardial scars present there <cit.>.
The LV region is defined in Cobiveco as tv = 0 ∨ (tv = 1 ∧ rt > 2/3) to include the whole septum.
For the comparison of different infarct sizes and cardiac electrical activity alterations, we only report on lateral MI as an illustrative case.
As tb:method:MI scenario shows, we simulate infarct at seven different locations and one smaller size on lateral MI, each with two levels of transmural extent, and one scenario with a slower CV on transmural large lateral MI, resulting in a total of 17 post-MI scenarios for each subject.
fig:method:MI_examples provides several examples of our experimental scenarios.
§.§ Functional Twinning: Forward Electrophysiological Simulation
At the functional twinning stage, we simulate cardiac electrophysiology via an efficient orthotropic Eikonal model <cit.>, which incorporates a human-based Purkinje system into the formulation of the activation times of root nodes (RN).
The simulation is performed on the Cobiveco mesh, solving:
√(∇ t^⊺ 𝒱^2 ∇ t) = 1,
t(Γ_0) = pk(Γ_0) - min(pk(Γ_0)),
where 𝒱 are the orthogonal conduction velocities (CVs) of fibre, sheet (transmural), and sheet-normal directions, t is the time at which the activation wavefront reaches each point in the mesh, Γ_0 is the set of RN locations, and pk is a Purkinje-tree delay function from the His-bundle to every point.
Therefore, the earliest activation time at the RNs is defined as their delay from the His-bundle through the Purkinje tree normalized by the earliest activation, such that the wavefront originates at t = 0 in one of the endocardial RNs.
The QRS can be calculated from the activation time map (ATM) via a pseudo-ECG equation <cit.> for a 1D cable source with constant conductivity at a given electrode location (x',y',z'), as
ϕ_e (x',y',z' ) = a^2 σ_i/4 σ_e∫ - ∇ V_m ·[ ∇1/r] dx dy dz ,
where V_m is the transmembrane potential, ∇ V_m is its spatial gradient, r is the Euclidean distance from a given point (x,y,z) to the electrode location, a is a constant that depends on the fiber radius, and σ_i and σ_e are the intracellular and extracellular conductivities, respectively.
The pseudo-ECG method can efficiently generate normalized ECG signals without significant loss of morphological information compared to the bidomain simulation <cit.>.
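A discretized evaluation of this integral over the mesh nodes can be sketched as follows; the gradients of V_m and the nodal integration volumes are assumed to be precomputed, and the constant prefactor a^2 σ_i / (4 σ_e) is absorbed into a single scale factor, which is irrelevant when only normalized signals are used.
import numpy as np
def pseudo_ecg(grad_vm, nodes, volumes, electrode, scale=1.0):
    # grad_vm: (N, 3) spatial gradient of V_m at the nodes; nodes: (N, 3) coordinates;
    # volumes: (N,) integration weights; electrode: (3,) position (x', y', z').
    diff = electrode - nodes                    # vectors from the nodes to the electrode
    r = np.linalg.norm(diff, axis=1)
    grad_inv_r = diff / r[:, None] ** 3         # gradient of 1/r with respect to the node position
    integrand = -np.einsum('ij,ij->i', grad_vm, grad_inv_r)
    return scale * np.sum(integrand * volumes)  # potential at one electrode and one time instant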
In modeling the effects of scars on the QRS, it is essential to consider the electrophysiological properties of the infarct regions, such as the slower CVs <cit.>, which can lead to changes in the timing and amplitude of the ECG waveform and thus manifest as changes in QRS.
Therefore, we vary the CVs of infarct and healthy myocardial areas during QRS simulation (see Sec. <ref>).
As Fig. <ref> shows, the ATM of MI patients presents slower electrical signal propagation compared to that of healthy ones, resulting in corresponding alteration in the simulated QRS morphology.
§.§ Functional Twinning: Inverse Inference of Post-MI Properties
fig:method:computation model provides an overview of the proposed deep computation model, consisting of a dual-branch variational autoencoder (VAE) and an inference model.
The VAE captures both anatomical and electrophysiological features, while the inference model uses the latent space representation to predict scar and border zone location.
fig:method:network depicts the network architecture.
For the geometry reconstruction, we reconstruct coarse and dense point clouds (PCs) to simultaneously learn global shape and local anatomy of the ventricles.
Therefore, the PC reconstruction loss function is defined as follows,
ℒ^rec_PC = ∑_i=1^K(ℒ_i,coarse^chamfer + αℒ_i,dense^chamfer),
where K is the number of classes, α is the weight term between the two PCs, and ℒ^chamfer is the chamfer distance between the input and reconstructed PCs.
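A minimal PyTorch sketch of this reconstruction term is given below; it is written for clarity rather than efficiency, and the organization of the per-class point-cloud pairs is an assumption of ours.
import torch
def chamfer_distance(p, q):
    # Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3).
    d = torch.cdist(p, q)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
def pc_reconstruction_loss(coarse_pairs, dense_pairs, alpha=5.0):
    # coarse_pairs / dense_pairs: per-class lists of (input, reconstructed) point clouds.
    loss = 0.0
    for (pc_c, rec_c), (pc_d, rec_d) in zip(coarse_pairs, dense_pairs):
        loss = loss + chamfer_distance(pc_c, rec_c) + alpha * chamfer_distance(pc_d, rec_d)
    return loss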
To improve the fidelity and resemblance of the reconstructed QR̂S to the original QRS, we minimize their mean-squared error (MSE) and dynamic time warping (DTW) distance <cit.>,
ℒ^rec_QRS = ℒ_MSE(QRS, QR̂S) + ℒ_DTW(QRS, QR̂S).
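The quantity being minimized can be illustrated with the plain recursion below (single-lead signals, numpy for clarity); in gradient-based training, a differentiable soft-DTW variant would typically replace the hard minimum.
import numpy as np
def dtw_distance(a, b):
    # Classical dynamic-time-warping distance between 1-D signals a (n,) and b (m,).
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
def qrs_reconstruction_loss(qrs, qrs_hat):
    mse = np.mean((np.asarray(qrs) - np.asarray(qrs_hat)) ** 2)
    return mse + dtw_distance(qrs, qrs_hat)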
Finally, the loss function for training the VAE is calculated as,
ℒ_VAE = λ_PCℒ^rec_PC + λ_QRSℒ^rec_QRS + λ_KLℒ^KL,
where λ_PC, λ_QRS, and λ_KL are balancing parameters, and ℒ^KL is the Kullback-Leibler (KL) divergence loss to mitigate the distance between the prior and posterior distributions of the latent space.
For the inference, we predict the infarct location based on the low-dimensional features learned from the VAE.
To alleviate the class-imbalance issue existed in the MI segmentation, we combine the cross-entropy (CE) loss and Dice score loss,
ℒ_seg = ℒ_CE + λ_Diceℒ_Dice,
where λ_Dice is a balancing parameter.
For realistic infarct shape, we further introduce a compactness loss,
ℒ_compact = 1/N^pre∑_i=1^N^pre (d_i^pre + d_i^gd)/d_max^gd,
where N^pre is the total number of predicted MI points, d_i^pre and d_i^gd are the Euclidean distances from each predicted MI point i to the center of predicted and ground truth MI, respectively,
and d_max^gd is the maximum Euclidean distance from ground truth MI points to their center.
We introduce two further constraints, to control infarct size and prevent scar from appearing in the right ventricle (RV), through two additional loss functions:
ℒ_size = N^pre-N^gd/N^gd,
ℒ_spa = N^pre_RV/N^pre,
where N^gd is the total number of ground truth infarct points, while N^pre_RV is the number of predicted infarct points located in the RV, excluding the septum boundary.
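The three auxiliary terms can be computed directly from the predicted and ground-truth point sets, as in the sketch below; it is written with hard point labels for clarity, whereas a training implementation would operate on the soft predicted probabilities.
import torch
def constraint_losses(points, pred_mask, gt_mask, rv_mask):
    # points: (N, 3) coordinates; pred_mask / gt_mask: boolean masks of predicted and
    # ground-truth infarct points; rv_mask: points in the RV (septum boundary excluded).
    pred_pts, gt_pts = points[pred_mask], points[gt_mask]
    n_pre, n_gd = pred_pts.shape[0], gt_pts.shape[0]
    c_pre, c_gd = pred_pts.mean(dim=0), gt_pts.mean(dim=0)
    d_pre = torch.norm(pred_pts - c_pre, dim=1)
    d_gd = torch.norm(pred_pts - c_gd, dim=1)
    d_max_gd = torch.norm(gt_pts - c_gd, dim=1).max()
    l_compact = ((d_pre + d_gd) / d_max_gd).mean()
    l_size = (n_pre - n_gd) / n_gd                      # as written in the equation above
    l_spa = (pred_mask & rv_mask).sum() / max(n_pre, 1)
    return l_compact, l_size, l_spa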
Hence, the final inference loss is defined as,
ℒ_inf = ℒ_seg + λ_compactℒ_compact + λ_sizeℒ_size + λ_spaℒ_spa + λ_VAEℒ_VAE,
where λ_compact, λ_size and λ_spa are balancing parameters.
§ EXPERIMENTS AND RESULTS
§.§ Materials
§.§.§ Dataset and Simulation Setup
We collected 49 subjects with paired 12-lead ECGs and multi-view cardiac MRIs from the UK Biobank study <cit.>.
The dataset was randomly divided into 34 training subjects, 5 validation subjects, and 10 test subjects, and each subject has 17 post-MI scenarios.
The biventricular tetrahedral mesh for each subject was converted into PCs and then resampled into coarse and dense versions with 1,024 and 4,096 nodes, respectively.
On these meshes, we imposed simulated infarcts with different locations, sizes, transmural extents, and CV alterations.
During the electrophysiology simulations, a fixed set of RN locations and CV values were utilized.
Specifically, the RNs were placed at seven specific homologous locations based on Cobiveco – four in the LV and three in the RV.
In the LV, they were situated in the mid-septum, basal-anterior paraseptal, and two mid-posterior locations, while in the RV, they were located in the mid-septum and two free wall regions <cit.>.
Two sizes of lateral MI were achieved by halving r_ab and r_rt values for the small lateral MI compared to the large one.
Two transmural extents were set by varying r_tm, which was set as 3 and 0.5 for transmural and subendocardial scars, respectively.
For baseline QRS simulation, the CV values for different directions were set as follows: 65 cm/s along the fiber direction, 48 cm/s along the sheet direction, 51 cm/s along the sheet-normal direction, and 100 cm/s and 150 cm/s for the sparse and dense endocardial directions, respectively <cit.>.
These values were consistent with reported velocities for healthy human myocardium in previous studies <cit.>.
In the simulation of QRS for MI, the CVs in the areas of myocardial scarring and border zone (BZ) were set to 10% and 50%, respectively (another slower CV configuration: 5% and 25%), of the values observed in healthy myocardium.
§.§.§ Evaluation
For evaluation, we compared the predicted MI distribution of our proposed automatic method with the gold standard set in the simulation phase.
To evaluate the segmentation accuracy, we computed the Dice score, precision, and recall of the MI prediction on the PCs.
Furthermore, we propose a novel evaluation metric called the AHA-loc-score, to assess the accuracy of MI localization using the 17-segment AHA map,
AHA-loc-score = β_c-idδ_c-pre, c-gd + β_idIoU_id + β_c-d(1-d_c),
where δ_c-pre, c-gd indicates whether the AHA index of the predicted infarct center matches that of the ground truth,
IoU_id calculates the intersection over union (IoU) score of the AHA indices appearing in the predicted and ground truth MI regions,
and d_c refers to the normalized distance between the predicted and ground truth infarct centers.
The weights β_c-id, β_id, and β_c-d have values of 0.5, 0.2, and 0.3, respectively.
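The metric can be sketched as follows, assuming each infarct point has already been assigned to one of the 17 AHA segments and that the centre distance is normalized by a constant d_norm (this constant and the function names are assumptions of ours).
import numpy as np
def aha_loc_score(pred_segs, gt_segs, seg_c_pred, seg_c_gt, c_pred, c_gt, d_norm,
                  beta_c_id=0.5, beta_id=0.2, beta_c_d=0.3):
    # pred_segs / gt_segs: sets of AHA segment indices covered by prediction / ground truth;
    # seg_c_pred / seg_c_gt: AHA segment index of each infarct centre;
    # c_pred / c_gt: coordinates of the predicted and ground-truth infarct centres.
    delta = 1.0 if seg_c_pred == seg_c_gt else 0.0
    iou = len(pred_segs & gt_segs) / max(len(pred_segs | gt_segs), 1)
    d_c = min(np.linalg.norm(np.asarray(c_pred) - np.asarray(c_gt)) / d_norm, 1.0)
    return beta_c_id * delta + beta_id * iou + beta_c_d * (1.0 - d_c)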
§.§.§ Implementation
The framework was implemented in PyTorch, running on a computer with 3.50 GHz Intel(R) Xeon(R) E-2146G CPU and an NVIDIA GeForce RTX 3060.
We use the Adam optimizer to update the network parameters (weight decay = 1e-3).
The batch size is 4, and the initial learning rate is set to 1e-4 with a stepped decay rate of 0.5 every 6800 iterations.
The balancing parameters in Sec. <ref> are set as follows: α=5, λ_KL=0.01, λ_compact=1, λ_size=1, λ_spa=1, and λ_VAE=1.
The simulation of one QRS of MI spent about 5 min.
The training of the model took about 10 hours (300 epochs in total), while the inference of the networks required about 9 s to process one test case.
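For reference, the optimization setup described above corresponds to the following PyTorch configuration; the model is a placeholder, and the stepped decay is expressed per optimizer step.
import torch
import torch.nn as nn
model = nn.Linear(8, 8)   # placeholder for the dual-branch VAE and inference network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=6800, gamma=0.5)
# per training iteration (batch size 4): loss.backward(); optimizer.step(); scheduler.step()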
§.§ Sensitivity Analysis of QRS for Different Post-MI Characteristics
We performed a sensitivity analysis in which we studied the effects of different infarct configurations on the QRS complex.
The aim was to find out which locations and sizes had a significant effect on the QRS, and thus to establish the feasibility of the inverse inference task.
To quantify the discrepancy between QRS shapes, we employed a global measure, DTW, which compared signals of different lengths with an additional penalty for the difference in QRS duration between the two signals <cit.>.
Furthermore, we introduced four QRS abnormalities reported in the literature, i.e., QRS duration prolongation <cit.>, pathological Q-waves <cit.>, poor R wave progression (PRWP) <cit.>, and fragmented QRS (fQRS) <cit.>.
The reader is referred to fig:exp:abnormalQRS_MI_example for illustration of each local QRS criteria of post-MI.
QRS duration prolongation can occur due to the damage to the heart muscle and subsequent changes in electrical conduction of MI.
Pathological Q waves are typically deeper, wider, and longer than normal Q waves, and are usually associated with the loss of electrical activity in the area of the heart affected by the MI.
Specifically, it can be defined as the presence of Q wave with duration ≥ 0.03 s and/ or amplitude ≥ 25% of R-wave amplitude <cit.>.
PRWP refers to the absence of the normal increase in amplitude of the R wave in the precordial leads when advancing from lead V1 to V6 <cit.>.
In the literature, different definitions of PRWP exist <cit.>.
Here, we utilize specific criteria, such as the R wave amplitude of 2 mm or less in the lead V3/V4 and the presence of reversed R-wave progression.
This is determined when the R wave amplitude of V5 is less than that of V6 or the R wave amplitude of V2 is less than that of V1 or any combination of these.
fQRS refers to the presence of multiple small spikes or notches within the QRS complex <cit.>.
It is typically present in the lead corresponding to the location of the infarct zone.
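These local criteria can be operationalized as simple rules on per-lead wave measurements, as in the sketch below; the thresholds follow the descriptions above, the feature extraction from raw signals is omitted, and the exact way the two PRWP conditions are combined is our reading of the text.
def pathological_q_wave(q_duration_s, q_amplitude, r_amplitude):
    # Q wave with duration >= 0.03 s and/or amplitude >= 25% of the R-wave amplitude.
    return q_duration_s >= 0.03 or q_amplitude >= 0.25 * r_amplitude
def poor_r_wave_progression(r_amp, low_mm=2.0):
    # r_amp: dict mapping lead names 'V1'..'V6' to R-wave amplitudes in mm.
    low_mid_precordial = r_amp['V3'] <= low_mm or r_amp['V4'] <= low_mm
    reversed_progression = r_amp['V5'] < r_amp['V6'] or r_amp['V2'] < r_amp['V1']
    return low_mid_precordial or reversed_progression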
Note that although these QRS abnormalities have been shown to be useful in the diagnosis and prognosis of MI in some studies, there is also conflicting evidence and debate among researchers regarding their clinical significance and usefulness <cit.>.
§.§.§ Sensitivity Analysis: Global QRS Measure
To assess the impact of QRS on the 17 different MI scenarios, we measured the dissimilarity between each of these and the baseline, as well as the dissimilarity between them.
As fig:exp:QRS_dissimilarity shows, the QRS complex showed morphological alterations in most post-MI scenarios when compared to the normal QRS complex.
Particularly, inferolateral, extensive anterior, and apical transmural MI presented more evident alterations compared to others.
One can see a significant decrease in QRS morphology alteration in small lateral MI when compared to that of large lateral MI, especially for subendocardial one.
The orientation and location of the heart within the torso can affect the direction and amplitude of the electrical signals detected on the body surface, which can lead to variation in the QRS complex morphology among different individuals.
Moreover, differences in the anatomy and physiology of the heart itself can also contribute to the variation in QRS morphology.
In the case of lateral MI, the variation in the QRS complex may be more pronounced.
This is because the electrical activity associated with ventricular depolarization needs to traverse a larger distance through the LV myocardium to reach the lateral wall, which can result in changes to the amplitude, duration, and morphology of the QRS complex.
The degree of transmurality presented a noticeable impact on the QRS morphology at all infarct locations, namely transmural scars generally caused more prominent changes in QRS morphology compared to subendocardial scars.
Although the QRS dissimilarities between transmural and subendocardial septal scars were relatively small (DTW^max=0.2 and DTW^avg=0.3), differences in QRS morphology can still be observed, as shown in fig:exp:simulated_QRS_examples.
Despite the influence of transmurality on QRS morphology, the differences in QRS between various infarct locations seemed to be more pronounced than those caused by the extent of transmurality.
This implies that the QRS has greater sensitivity in localizing MI rather than predicting its transmural extent.
The primary QRS morphological difference observed with varying degrees of CV reduction was the QRS duration: 99.5 ms vs. 113.8 ms on transmural large lateral MI.
However, our initial tests presented unexpected QRS simulation results when we significantly reduced the CVs in the MI regions.
This suggests that the personalized CV configuration of infarct areas during simulation requires further investigation in the future.
Most infarct locations were represented on the QRS by leads I, V5, and V6, whereas septal MI was represented by leads V1-V4 and V3-V4 for subendocardial and transmural ones, respectively.
This result is in agreement with those reported in clinical practice <cit.>.
Generally, larger scars tend to result in QRS changes appearing in more leads.
The ability of various QRS leads to accurately detect the location of infarction varied.
This is because the electrical activity of the heart is not uniform, and different leads may have a better view of certain regions of the heart.
Additionally, the location of the infarct and its extent can influence the morphology of the QRS complex in different leads, which can affect their ability to detect the infarct location.
§.§.§ Sensitivity Analysis: Local QRS Measure
The changes in QRS morphology for the 17 MI scenarios were reflected in multiple ways.
Here, we introduced several QRS criteria and compared the contribution of each of these for infarct detection.
We found that apical and inferolateral MI tended to present prolongation of the QRS duration: 124.1 ms and 107.7 ms (apical and inferolateral MI) vs. 90.4 ms (normal).
PRWP mainly occurred in extensive anterior, septal, and apical MI, similar as reported in the literature <cit.>.
Specifically, the R wave amplitude in the septal MI was sometimes flattened, while the R wave of V6 tended to be larger than that of V5 in the apical MI, as fig:exp:simulated_QRS_examples shows.
The prevalence of fQRS was more common in the inferior lead (lead II) compared with the anterior leads (leads V3 and V4) and the lateral leads (leads V5 and V6), similar to the results reported in Liu et al. <cit.>.
The presence of fQRS in lead II and leads V3-V4 indicated inferolateral and extensive anterior MI, respectively.
In contrast, pathological Q wave failed to classify MI from healthy subjects in our simulation system.
§.§ Inference Accuracy of Post-MI Properties
tb:results:MIinference presents the quantitative results of the proposed method, and fig:result:boxplot provides the boxplots of Dice score.
The proposed method obtained the best segmentation and localization performance on the transmural extensive anterior MI (Dice= 0.934 ± 0.028, AHA-loc-score = 0.987 ± 0.007).
Even for the scenarios where there were not notable QRS morphology changes, such as MI in the septum and limited anterior areas, the model still can localize the corresponding infarct (DTW^max=0.4, AHA-loc-score ≈ 0.7).
Nevertheless, the model showed difficulties in detecting lateral (especially for the subendocardial and small size ones, with Dice score of 0.097 ± 0.112) and inferior MI with Dice scores of 0.228 ± 0.252 and 0.173 ± 0.288 for subendocardial and transmural one, respectively.
In general, the segmentation of the transmural MI tended to be more accurate than that of the subendocardial MI (Dice: 0.518 ± 0.347 vs. 0.396 ± 0.271).
This observation aligned with expectations, since transmural MI often exhibit more pronounced and distinct QRS abnormalities compared to subendocardial MI, as proved in previous sensitivity analysis.
As a result, our model can leverage these noticeable differences to identify and segment the affected region accurately.
Nevertheless, their ability to precisely determine the location of the infarction within the myocardium did not vary significantly (AHA-loc score: 0.610 ± 0.343 vs. 0.659 ± 0.339).
This can be attributed to the fact that the localization of MI is not solely dependent on the depth or extent of the infarct.
Furthermore, the accuracy of predicting scars was generally higher than that of predicting border zones (BZs).
This could be because the complex nature of BZs, where the myocardial tissue undergoes a transition from healthy to scarred, introduces additional variability and ambiguity in the QRS signals, leading to a lower prediction accuracy for BZs.
The performance in terms of Dice coefficient, precision, recall and AHA-loc-score was generally consistent.
However, in specific cases like apical, limited anterior, and inferolateral transmural MI, precision may exhibit a slight superiority over the Dice.
Apical MI obtained the highest AHA-loc-score, indicating its accurate and reliable localization.
This could be attributed to the uniqueness of the apical location, allowing for a more precise and unambiguous localization of MI due to the absence of significant interference from neighboring structures.
Figure <ref> provides 3D results of a representative test subject on different scenarios.
One can observe that the 3D visualization agrees well with the quantitative analysis result.
There were outliers appearing in the inferior area for lateral MI detection and vice versa, which suggests that the model had difficulty distinguishing between the lateral and inferior MI areas based on their QRS.
Furthermore, even though extensive anterior and inferolateral MI both covered large areas, the detection of inferolateral MI tended to be more difficult compared to that of extensive anterior MI, which can be further proved in the correlation study of MI volume presented in fig:result:volume_regression.
§.§ Ablation Study
Accurate MI inference goes beyond merely identifying the location of the infarction; it also requires a comprehensive assessment of the extent of the infarcted tissue.
Therefore, we introduced additional constraints, namely localization constraints (ℒ_spa and ℒ_compact) and an extent constraint (ℒ_size).
To evaluate their effectiveness, we conducted an ablation study by selectively removing them from the proposed framework, as presented in tb:result:ablation_study.
One can see that in most scenarios the proposed method obtained the best performance compared to others.
For example, without the localization constraints, the model presented worse performance in identifying septal MI.
Note that septal MI is typically challenging to detect, due to its unique position and overlapping ECG effects from neighboring regions, such as the anterior and inferior walls.
We observed that the absence of ℒ_compact led to improved Dice in cases of inferolateral and subendocardial limited anterior MI and decreased Dice in cases of extensive anterior MI.
Nevertheless, the visualization results suggest that ℒ_compact effectively minimizes the occurrence of outliers, leading to more reliable and accurate predictions.
The extent constraint was also crucial, particularly in distinguishing between subendocardial and transmural MI that present different sizes in the same anatomical position.
§.§ Extended Evaluation
§.§.§ Exploring the Detection Limit of QRS for Small Infarct Areas
To investigate what is the smallest infarct area that can be detected from QRS complexes, we employed apical MI as an example and varied the infarct size and retrained the model based on the pre-trained one.
The idea behind this approach is to determine the sensitivity of QRS-based detection methods for small infarct areas, which may have important clinical implications for risk stratification and management of post-MI patients.
Figures <ref> (a) and (c) demonstrate that as the infarct size decreased, the QRS morphological changes also diminished.
This is because a smaller infarct would have a lesser impact on the overall electrical conduction and activation patterns of the heart.
Consequently, the deviations in the QRS, which represent the depolarization of the ventricles, would be less pronounced.
Nevertheless, our method still can extract subtle features from the QRS complex that may be indicative of small infarct areas, as fig:result:QRS_MIsize (b) shows.
This ability was maintained until the Cobiveco apicobasal radius r_ab of the scar was reduced to 0.1 for apical MI.
§.§.§ Correlation Analysis: Relationship between ECG/ PC Reconstruction and MI Inference Accuracy
To evaluate the robustness of the proposed inference scheme to the reconstruction error, we analyzed the relationship between the reconstruction and inference errors by the proposed method.
The accuracy of PC and ECG reconstruction was calculated as 0.5*ℒ^rec_PC with α=1 and ℒ^rec_QRS, respectively.
The r^2 values of scar/ BZ for PC and ECG-MI inference correlations were 0.002/ 0.006 and 0.008/ 0.009, respectively, indicating no relationship between inference and reconstruction accuracy.
This implies that the accuracy of MI inference using the proposed method was not significantly influenced by the quality of the reconstruction.
This is reasonable, as the proposed method focuses on extracting relevant features from the input data rather than relying solely on accurate reconstruction for MI inference.
Nevertheless, the reconstructions are still necessary as they provide valuable information for the inference.
To demonstrate this, we conducted a comparison by removing the reconstruction steps, and the results noticeably decreased (AHA-loc scores: 0.610 ± 0.343 vs. 0.561 ± 0.338 for subendocardial MI, and 0.659 ± 0.339 vs. 0.585 ± 0.367 for transmural MI), highlighting the significance of incorporating reconstruction in the inverse inference.
§.§.§ Comparison with Conventional MI Inference Method
To demonstrate the efficacy of our approach, we conducted a comparative analysis with the Selvester QRS scoring system <cit.>.
The score criteria have been employed to identify scar location based on QRS phenotypes, such as wave duration (Q or R), wave amplitude (R or S), amplitude ratio (R/Q, R/S, R/R^', or S/S^'), and QRS slurs or notches <cit.>.
...
§ DISCUSSION AND CONCLUSION
In this paper, we have developed a deep computational model to tackle the inverse problem in cardiac electrophysiology, i.e., inferring MI distribution from QRS signals.
Through the integration of anatomical and electrophysiological data, we achieve a comprehensive analysis that incorporates different infarct locations, sizes, transmural extents, and cardiac electrical activity alterations.
By consistently representing the ventricular anatomy in a coordinate reference system, we establish a robust sensitivity analysis framework for studying the association between infarct characteristics and QRS abnormalities.
The sensitivity analysis results have demonstrated significant morphological alterations in the QRS complex for various post-MI scenarios, particularly inferolateral, extensive anterior, and apical MI.
These findings suggest that the involvement of large areas of damaged heart muscle leads to pronounced changes in QRS morphology.
Furthermore, the analysis emphasizes the impact of transmurality on QRS morphology, namely transmural MI presents more prominent changes compared to subendocardial MI.
However, the differences in QRS between various infarct locations can be more pronounced than those caused by the extent of transmurality, indicating the greater sensitivity of QRS in localizing MI rather than predicting its transmural extent.
The analysis further highlights the importance of lead selection in accurately detecting the location of infarction.
Overall, the sensitivity analysis provides valuable insights into the relationship between infarct characteristics and QRS abnormalities, enhancing our understanding of the complex interplay between infarct characteristics and electrophysiological features.
The proposed method can effectively segment and localize MI, even in scenarios with limited QRS morphology changes, demonstrating its feasibility of developing CDTs for MI patients.
The results of the ablation study emphasize the importance of the localization and extent constraints in accurate MI inference.
The proposed method exhibits the ability to detect small infarct areas, although its sensitivity is limited, as shown in our extended study.
The correlation analysis demonstrates that while incorporating reconstruction in the inference process is important, the accuracy of MI inference is not significantly dependent on the quality of reconstruction.
To conduct a sensitivity analysis of MI properties, we intentionally select consistent infarct location, size and transmural extent for each subject.
While it ensures a controlled comparison, it may have led to a limited evaluation of MI inference.
We conducted a small test by randomly selecting an infarct scenario for each subject and obtained reasonably good results in only a few cases.
This outcome is expected because randomly simulating a single scenario for each subject limits the ability of the proposed model to learn and generalize across different infarct characteristics.
In order to improve performance, in the future a more diverse and comprehensive dataset with a wider range of infarct scenarios should be used to train the model.
Note that this work is an initial study, and there are several limitations that need to be acknowledged.
Firstly, this study assumes a known set of RNs and fixed CVs for all subjects, which may not fully capture the complexity and heterogeneity present in real-world healthcare data.
Therefore, further research is needed to personalize these activation properties based on individual patient characteristics and specific healthcare settings.
Secondly, we only consider cardiac anatomical information and electrode nodes while disregarding the full torso geometry.
The inclusion of torso geometry could provide valuable insights into its influence on QRS patterns.
By incorporating full torso geometry in our future work, we can gain a more comprehensive understanding of the factors influencing QRS patterns and improve the accuracy of our predictions and interpretations.
Thirdly, this study focuses solely on the QRS complex, rather than considering the entire ECG signal.
Applying the analysis to the whole ECG signal would provide a more comprehensive assessment but may require significant computational resources.
To address this limitation, future research could explore computationally efficient surrogate to replace the expensive simulation model.
Finally, while the developed CDTs can provide valuable insights into the mechanisms of MI, they are based on simplified assumptions about the heart and may not capture all aspects of the complex interactions between cardiac structures and functions.
Given the limitations, particularly in the simulated dataset used, this can only serve as a proof of concept until validation on the clinical data can be performed.
ieeetr
|
http://arxiv.org/abs/2307.04561v1 | 20230710135100 | Performance comparison of timing-based anomaly detectors for Controller Area Network: a reproducible study | [
"Francesco Pollicino",
"Dario Stabili",
"Mirco Marchetti"
] | cs.CR | [
"cs.CR",
"cs.PF"
] |
Both authors contributed equally to this research.
[email protected]
0000-0002-2421-1852
[1]
[email protected]
0000-0001-6850-334X
[email protected]
0000-0002-7408-6906
University of Modena and Reggio Emilia
Via P. Vivarelli, 10
Modena
ITA
This work presents an experimental evaluation of the detection performance of eight different algorithms for anomaly detection on the Controller Area Network (CAN) bus of modern vehicles based on the analysis of the timing or frequency of CAN messages.
This work addresses the current limitations of the related scientific literature, which is based on private datasets and lacks open implementations and detailed descriptions of the detection algorithms. These drawbacks prevent the reproducibility of published results and make it impossible to compare a novel proposal against related work, thus hindering the advancement of science. This paper solves these issues by publicly releasing implementations and labeled datasets, and by describing an unbiased experimental comparison.
Performance comparison of timing-based anomaly detectors for Controller Area Network: a reproducible study
Mirco Marchetti
August 12, 2023
==========================================================================================================
§ INTRODUCTION
Automotive Cyber Security is a relatively new research area, that is rapidly growing to encompass many different security issues: from the design of security countermeasures aiming to deter cyber attackers, to detection mechanisms trying to identify ongoing attacks and reactive countermeasures to contain and respond to malicious activities. One of the most active research areas in this field is the design of novel intrusion detection systems applied to the Controller Area Network (CAN) bus, one of the prominent network technologies used to interconnect Electronic Control Units (ECUs) deployed within modern vehicles. Several intrusion detection algorithms have already been proposed <cit.>, and it is possible to classify them based on the features of the CAN communication that they use to identify anomalies and attacks.
This paper focuses on the subset of CAN intrusion detection systems that identifies anomalies by analyzing the timing with which CAN messages are sent over the CAN bus. This detection approach is promising, since many powerful attacks (such as fuzzing <cit.>, injection <cit.> and Denial of Service <cit.>) are based on the injection of malicious messages on the CAN bus in addition to the legitimate traffic. Our main goal is to propose an unbiased comparison of the detection performance of time-based anomaly detectors for the CAN bus. We stress that this task is much more complex than it might seem. A direct comparison among different detection algorithms published in the scientific literature is hindered by many different issues. Different papers use different detection metrics to evaluate the performance of the proposed solution. Each work uses a proprietary dataset for carrying out the experimental performance evaluation, and in most cases the dataset is not publicly released; it is only described at a very high level, and lacks all the details required to replicate the attacks. Moreover, novel solutions designed to detect anomalies in CAN communication are not compared to existing solutions, or are compared with naive detection metrics to demonstrate that a more complex solution is better in the detection task. In the vast majority of cases the authors of a scientific paper do not disclose a reference implementation, thus requiring other researchers to re-implement the proposed algorithm. More often than not, the paper only includes a high-level description of the proposed algorithm that lacks many relevant details that are actually required for a real implementation. Finally, several papers omit important aspects related to the tuning and training of the proposed algorithm that have a strong impact on their detection performance.
This work tackles three of the important limits in this field of research. The first major contribution is the empirical and unbiased comparison of eight different time-based CAN anomaly detectors over two different datasets, an original one and one that is already publicly available. The second contribution to the state-of-the-art is to foster the reproducibility of similar studies. We publicly release all the reference implementations of the detectors considered in this study, together with the novel dataset used to tune and test the detection algorithms. This contribution allows all researchers and industry practitioners to fully replicate our research results, validate the correctness of our reference implementations, assess the quality of the dataset, and easily compare a novel proposal with respect to the state-of-the-art. Finally, we highlight the limits of publicly available datasets used for the experimental evaluation of existing solutions against the dataset first presented in this work. This analysis also demonstrates that it is crucial to identify a comprehensive threat model to show how anomaly detectors are affected by attacks, such as the effects of different cycle times and injection frequencies on the overall performance evaluation.
The remainder of the paper is structured as follows. Section <ref> discusses the main characteristics of the CAN bus and of CAN messages that are required to understand this work. Section <ref> presents the related work and describes in detail the eight detectors considered in this paper. Section <ref> describes the dataset used throughout the paper to validate and test the detection algorithms. Section <ref> presents our reference implementation of the eight detectors, focusing on the additional assumption and design choices that are missing in the original papers. Section <ref> compares the detection performance of our reference implementations against several attack instances, involving different CAN messages and injection frequencies. Finally, Section <ref> concludes the paper.
§ A PRIMER ON THE CONTROLLER AREA NETWORK
The Controller Area Network (CAN) is one of the communication protocols used between the Electronic Control Units (ECUs) deployed within the vehicle <cit.>. The CAN bus is designed to enable communication between the nodes without requiring a host computer. CAN is one of the most deployed networking protocols for in-vehicular communications due to its high resilience to electromagnetic interferences and its low implementation costs.
It is a broadcast-message based communication protocol, and transmission on the CAN bus uses a bit-wise arbitration method for contention resolution. When two different nodes start transmission of a frame at the same time, the node with the highest priority continues sending the frame without interruption, while the other node backs off and re-tries transmission at a later time.
The CAN protocol defines 4 different types of frames with different usages, but only the data frame is used to transmit data between ECUs.
The main fields composing the CAN data frame are the identifier (ID), the data length code (DLC), and the payload (data). The ID identifies the CAN data frame and the content of its data field. Each ECU transmits only a limited set of messages with a particular ID while receiving ECUs use the value of the ID field to select data frames relevant for their functioning. A message with a particular value of the ID field is always sent by only one ECU.
The ID field is also used for arbitration of the CAN messages, where lower values of this field denote messages with higher priority. In the current standard for basic CAN communication, the ID field is defined with a size of either 11 (standard format) or 29 bits (extended format). Figure <ref> shows the structure of an extended CAN message. Note that the extra 18 bits of the extended format (ID #2) are encoded separately from the 11 bits of the standard format (ID #1) for backward compatibility.
The DLC field encodes the number of bytes composing the data field, and has a size of 4 bits. Since the maximum length of the data field is 8 bytes, valid DLC values ranges from 0 to 8, while values from 9 to 15 are left unused.
The data field encapsulates the information that the sender ECU transmits to other ECUs on the network. The data field has a variable size (from 0 to 8 bytes) and usually packs several different signals. The CAN standard leaves complete freedom to the car manufacturers about the structure, number, encoding, and semantic of these signals. Hence, without having access to the formal specifications of the CAN messages for a particular vehicle model, the data encoded in the data field can only be interpreted as an opaque binary blob.
Between two consecutive CAN data frames, an interframe space is required. The interframe space consists of at least three consecutive recessive bits, called interframe bits. Following the interframe space, if a dominant bit is detected, it is considered as the start-of-frame of the next data frame.
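For reference, the fields of the data frame that are relevant to this work can be represented with the minimal record below; the timestamp is not a frame field but the arrival time logged on the bus, which all the detectors discussed later rely on, and the class name is ours.
from dataclasses import dataclass
@dataclass
class CanFrame:
    timestamp: float   # arrival time of the frame, as logged on the bus
    can_id: int        # 11-bit (standard) or 29-bit (extended) identifier
    dlc: int           # data length code, valid values 0..8
    data: bytes        # payload of up to 8 bytes, treated as an opaque blob
    def __post_init__(self):
        assert 0 <= self.dlc <= 8 and len(self.data) == self.dlc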
§ RELATED WORK
Since the introduction of microcontrollers to in-vehicle networks, the automotive domain is considered one of the most prominent examples of Cyber-Physical Systems (CPSs).
The main characteristic of an automotive CPS is the central role of the automotive in-vehicle network, which is used by the ECUs to exchange data for their operational needs. With the increasing development of advanced driver assistance systems (ADAS), cyber-security researchers started to demonstrate attacks to these features. Miller and Valasek <cit.> demonstrated the consequences of an Internet-based, remote cyber-attack to a modern, unmodified, licensed vehicle, with a huge media coverage. Since then, many researchers started to develop Intrusion Detection Systems (IDS) by applying concepts borrowed from classical computer networks to the in-vehicle networks <cit.>.
Some of these solutions focus on the analysis of the low-level characteristics of the ECUs <cit.>, other solutions focus on the analysis of the in-vehicle network communications. Of this last group of solutions, security researchers have developed intrusion detection systems based on the statistical analysis of the content of the CAN bus <cit.>, while other solutions are focused on the analysis of the content of the CAN data frames <cit.>.
Most of these works are only focused on a particular aspect of CAN communication, thus increasing the complexity of comparing novel solutions with the existing literature. Moreover, only a handful of implementations of the current state-of-the-art are publicly available, preventing the comparison of a novel solution with the existing ones.
The scope of this work is to tackle this problem by proposing an unbiased comparison of existing solutions designed to detect anomalies on CAN communication using the timings of the CAN messages as their detection feature. We chose this group of solutions since they represent one of the most analyzed group of solutions for the anomaly detection task on CAN communications, and they can be deployed as a software-only solution, without requiring dedicated hardware components to their functioning.
The first work focused on this aspect is presented in <cit.>, while the following works focus on either frequency <cit.> or timing analysis <cit.>. All these works are based on the assumption that most CAN messages are sent periodically on the network within a fixed time interval, hence it is possible to exploit this feature to detect messages that do not follow the expected timing. However, these detection methods are only applicable to cyclic messages and cannot detect any anomaly if the attack targets a non-cyclic message.
In the following sections, we describe the anomaly detectors considered for the experimental evaluation presented in this work. For readability purposes, we associate each algorithm with a label composed of the last name of the first author and the last two digits of the publication year. For clarity, we have unified the names of the main parameters, variables, and attack scenarios described in the original works.
§.§ Otsuka14
In the work presented in <cit.> the authors propose an anomaly detection algorithm that uses a delayed-decision cycle detection method to detect (and possibly prevent) spoofing attacks.
The algorithm presented in Otsuka14 assumes that, since data frames are transmitted at a constant cycle time ct, any modification of the normal behavior of CAN communication should change its cycle time.
When an ECU receives a data frame with the same CAN ID as the previous data frame and a cycle time less than ct + δ (where δ is a threshold parameter), the ECU holds the data frame until the expected time T has passed. If another message with the same ID is received during the waiting period, the ECU detects an ongoing attack and the data frames are not processed.
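A minimal sketch of this delayed-decision rule for a single cyclic ID is shown below; parameter names and the bookkeeping of the waiting period are our reading of the description above, not the authors' reference code.
class Otsuka14Detector:
    def __init__(self, ct, delta):
        self.ct = ct               # expected cycle time of the monitored CAN ID
        self.delta = delta         # tolerance threshold
        self.last_time = None      # arrival time of the previous frame
        self.hold_until = None     # end of the current waiting period, if any
    def process(self, t):
        # Returns True when an ongoing attack is detected at arrival time t.
        attack = False
        if self.hold_until is not None and t < self.hold_until:
            attack = True          # a second frame arrived inside the waiting period
            self.hold_until = None
        elif self.last_time is not None and t - self.last_time < self.ct + self.delta:
            self.hold_until = self.last_time + self.ct + self.delta   # hold the early frame
        else:
            self.hold_until = None
        self.last_time = t
        return attack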
The algorithm presented in Otsuka14 is trained with real CAN data and tested against the injection of two different message IDs, one exhibiting a stable cycle time ct and the other a non-cyclic pattern.
The results are presented by means of the False Positive Rate (FPR) and the False Negative Rate (FNR). Although a comparison of the detection performance with the previous state-of-the-art is not discussed, we remark that Otsuka14 is, to the best of our knowledge, the first paper presenting a detection algorithm based on the timing analysis of CAN communications.
The computational overhead of the detection algorithm is equal to 𝒪(1) for each received CAN message.
§.§ Taylor15
In the work presented in <cit.>, the authors design an anomaly detection algorithm for the detection of anomalies based on inter-packet time over a sliding window. In particular, the algorithm presented in uses test values over consecutive CAN flows (defined as a sequence of CAN data frames) for its detection task. Each test value is evaluated as a t-test, comparing the mean time difference with its historical value (i.e. the cycle time ct of the same CAN ID). The algorithm then uses the test values for the evaluation of the anomaly score (defined as a logarithmic sum) over a sequence of scores, to identify anomalies in the CAN communication.
This algorithm is tested against the injection and the removal of CAN messages with different attack durations, ranging from 100 milliseconds to 1 second. For the message injection attack and for each duration, the injected packet is inserted at one, five, and ten times its normal frequency. The detection performance is presented by means of the Receiver Operating Characteristic (ROC) curve and the Area Under Curve (AUC) measure. The authors do not compare the algorithm with previous works but only with a One-Class Support Vector Machine classifier trained on the same data.
The computational overhead of the detection algorithm is equal to 𝒪(1) for each received CAN message.
§.§ Cho16
The authors of <cit.> design and test a Clock-based Intrusion Detection System (CIDS) for CAN communications. This algorithm (labeled as ) leverages the intervals of periodic in-vehicle messages to fingerprint ECUs. The fingerprints are then used to construct a baseline of the ECUs' clock behavior with the Recursive Least Squares (RLS) algorithm. The Cumulative Sum (CUSUM) is then computed on this baseline to detect anomalies in message timings thanks to an adaptive threshold. The algorithm detects anomalies at the granularity of a window of W data frames, rather than identifying a single data frame as malicious.
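The RLS-based estimation of the clock behavior is omitted here; the fragment below only sketches a standard two-sided CUSUM step of the kind applied to the identification error, with variable names and the normalization constant chosen by us.

def cusum_step(state, error, mean, std, kappa, threshold):
    # Two-sided CUSUM on the normalized identification error; state["up"] and
    # state["down"] start at 0.0, and an attack is declared when either control
    # limit exceeds the threshold.
    z = (error - mean) / (std + 1e-12)
    state["up"] = max(0.0, state["up"] + z - kappa)
    state["down"] = max(0.0, state["down"] - z - kappa)
    return state["up"] > threshold or state["down"] > threshold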
This algorithm is presented in two versions: the per-message detection and the message-pairwise detection. The latter version (message-pairwise detection) is based on the assumption that the number and identity of CAN messages generated by the same ECU are known. We remark that this information can only be known a priori by either having access to the DBC or by applying modern mapping techniques such as <cit.>. Since we only rely on CAN log traces and we do not have access to the DBC of our test vehicle, only the per-message detection version can be applied to our datasets.
The algorithm is tested against the injection of CAN messages (called fabrication attack in the original work) on both a CAN bus prototype and a real vehicle. In the real vehicle scenario, the authors target a message with a cycle time ct equal to 20ms; however, the original paper does not contain any information about the injection frequency. Moreover, the paper does not include a performance comparison against previous work.
The computational overhead of the detection algorithm is equal to 𝒪(N^2) for each window, where N is the size of the data matrix. We remark that this computational complexity is equal to the one of the RLS algorithm used for constructing the baseline of ECUs' clock behaviors.
§.§ Gmiden16
In the work presented in <cit.>, the authors design a lightweight intrusion detection algorithm based on the analysis of the frequencies of the CAN messages.
In particular, the algorithm uses the frequency of a CAN message to detect anomalies in the CAN communication. Upon reception of a message, the algorithm compares the time difference Δ_t between the new message and the previous one sharing the same ID, and raises an anomaly in case the time difference is less than half the estimated cycle time ct.
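This rule is simple enough to be captured in a few lines of Python; the sketch below is ours and assumes timestamps and cycle times expressed in the same unit.

def gmiden16_check(last_seen, can_id, timestamp, ct):
    # Anomalous if two frames with the same ID arrive closer than half the cycle time.
    prev = last_seen.get(can_id)
    last_seen[can_id] = timestamp
    return prev is not None and (timestamp - prev) < ct[can_id] / 2.0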
In <cit.> the algorithm is only presented theoretically, hence it is not tested against any attack scenario, nor is its detection performance compared with previous works.
The computational overhead of the detection algorithm is equal to 𝒪(1) for each received CAN message.
§.§ Song16
In the work presented in <cit.>, the authors design a detection algorithm based on the inter-arrival times of CAN messages. In particular, evaluates the time difference Δ_t of messages with the same CAN ID and uses it for the detection of two attack scenarios.
The first attack scenario considered in is the injection attack, in which messages are injected on the CAN bus randomly. To detect this attack scenario, the algorithm compares the time difference Δ_t with the cycle time ct and raises an anomaly if Δ_t is lower than half the expected cycle time ct. We remark that this detection algorithm appears to be exactly the same as the one proposed in .
The second attack scenario is a Denial-of-Service attack, in which a message with a fixed value of the ID field is injected with a high frequency. For the detection of this attack scenario, the algorithm increments the value of a counter every time the Δ_t is lower than 0.2 milliseconds, and raises an anomaly if the counter value is higher than a given threshold.
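A minimal sketch of this counter-based check is shown below; whether the counter is kept per ID or bus-wide, and whether it is ever reset, is not stated here, so the single global counter is our assumption.

def song16_dos_check(state, timestamp, threshold):
    # Count inter-arrival times below 0.2 ms (timestamps in seconds) and flag
    # once the counter exceeds the configured threshold.
    prev = state.get("prev")
    state["prev"] = timestamp
    if prev is not None and (timestamp - prev) < 0.0002:
        state["count"] = state.get("count", 0) + 1
    return state.get("count", 0) > threshold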
To test the algorithm, the authors used data gathered from a real vehicle and simulated the attacks by injecting messages for a random time window ranging from 5 to 10 seconds. In the injection attack scenario, they injected a message at twice, five, and ten times the original frequency, while in the DoS attack scenario the injection rate is fixed at 2000 messages per second, testing threshold values of 1, 2, 3, and 5.
The detection results are evaluated by means of detection accuracy, but no comparison with previous work is presented.
The computational overhead of the detection algorithm is equal to 𝒪(1) for each received CAN message.
§.§ Moore17
In the work presented in <cit.>, the authors describe a frequency-based anomaly detection algorithm.
The algorithm uses the time difference Δ_t between consecutive messages with the same ID value for its detection purposes. In particular, uses the sequence of Δ_t of each message ID to identify the maximum observed error m, which is the maximum absolute difference between the expected cycle time ct and the observed Δ_t.
Upon reception of a message, the algorithm compares the Δ_t from the previous message with a threshold value defined as ct · 0.15 + m. If three consecutive values of Δ_t are found outside the defined threshold, then an anomaly is raised.
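We read the alert condition as the deviation of Δ_t from ct exceeding ct · 0.15 + m; under that reading, a possible Python sketch (names are ours) is:

def moore17_check(state, can_id, timestamp, ct, m):
    # Alert when |dt - ct| exceeds ct * 0.15 + m; raise an anomaly after three
    # consecutive alerts for the same ID.
    s = state.setdefault(can_id, {"prev": None, "alerts": 0})
    prev, s["prev"] = s["prev"], timestamp
    if prev is None:
        return False
    if abs((timestamp - prev) - ct[can_id]) > ct[can_id] * 0.15 + m[can_id]:
        s["alerts"] += 1
    else:
        s["alerts"] = 0
    return s["alerts"] >= 3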
The algorithm is tested against both injection and Denial-of-Service attacks, and presented the results by means of True Positive Rate (TPR), False Positive Rate (FPR), and False Negative Rate (FNR). No comparison with previous work is described.
The computational overhead of the detection algorithm is equal to 𝒪(1) for each received CAN message.
§.§ Stabili19
In the work presented in <cit.>, the authors design an anomaly detection algorithm for the detection of missing messages in CAN communications. In particular, the algorithm evaluates the cycle time (ct) of each CAN ID to build its detection model.
In the detection phase, the cycle time is used in conjunction with a configuration parameter k (defined in the validation process for each ID) to detect missing messages from the CAN bus. A message with a particular ID is considered missing if it is not seen on the CAN bus for at least ct × k milliseconds.
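A compact Python sketch of this check, with names of our choosing, is:

def stabili19_missing(last_seen, now, ct, k):
    # IDs not observed for more than ct * k are considered missing.
    return [i for i, t in last_seen.items() if now - t > ct[i] * k[i]]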
This algorithm is tested against two similar attack scenarios: the ECU shutdown attack (in which messages with a particular ID are removed from the CAN bus for a period of time ranging from 10 to 120 milliseconds) and the ECU inhibition attack (in which all messages with a particular ID are removed from the CAN bus). The detection performance is presented by means of the F-measure and compared with other detection algorithms, although only one of them is a time-based anomaly detection algorithm.
The computational overhead of the detection algorithm is equal to 𝒪(n) for each received CAN message, where n is the number of message IDs found in the monitored CAN section.
§.§ Olufowobi20
In <cit.> the authors present SAIDuCANT, a specification-based intrusion detection system (IDS) using anomaly-based supervised learning.
The detection model is learned in the first phase of the algorithm and requires a clean CAN data trace to learn the different parameters that are later used in the detection phase. The learned parameters are the minimum and maximum inter-arrival times of each CAN message ID, f_i,min and f_i,max, the estimated message period P̃_i = f_i,min, and the release jitter J_i = f_i,max - f_i,min.
In the detection phase, the algorithm monitors the arrival time of each CAN ID and compares it against the detection model. If a message is found outside the acceptable interval defined by the specification of the detection model, then it is labeled as anomalous.
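As a rough illustration only, the sketch below accepts a frame whose inter-arrival time falls inside [P̃_i, P̃_i + J_i]; the actual acceptance window of Algorithm 2 in the original work is richer (it also involves the transmission time C_i and a counter k discussed later), so this is our simplification.

def olufowobi20_check(prev_arrival, can_id, timestamp, p_est, jitter):
    # Simplified acceptance window on the inter-arrival time of the CAN ID.
    dt = timestamp - prev_arrival[can_id]
    prev_arrival[can_id] = timestamp
    return not (p_est[can_id] <= dt <= p_est[can_id] + jitter[can_id])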
The algorithm is tested against injection attacks on two different datasets. One dataset is composed of CAN traces gathered from two different Sedan vehicles, while the second dataset contains data from a single vehicle. Of these datasets, only the latter is publicly available[<https://sites.google.com/a/hksecurity.net/ocslab/Datasets/CAN-intrusion-dataset>].
The detection results are presented by means of True Negative (TN), True Positive (TP), False Positive (FP), False Negative (FN), accuracy, recall, precision, and F1 score. No comparison with previous work is presented, although the authors showcase the detection performance of the algorithm against two other detection mechanisms described directly in the original work.
The computational overhead of the detection algorithm is equal to 𝒪(1) for each received CAN message.
§ DATASET DESCRIPTION
This section describes the dataset used for the training and the test of the detection algorithms. We used two different datasets for testing and training the detection algorithms. The first dataset is gathered from the CAN bus of an unmodified, licensed 2016 Volvo V40 Kinetic, and is called the Ventus dataset. The second dataset is the OTIDS <cit.> dataset, which is gathered from a KIA Soul.
§.§ Ventus dataset
The CAN data is recorded by physically connecting a laptop to the On-Board Diagnostic (OBD-II) port with a PCAN-USB adapter by Peak System <cit.> and a D-Sub to OBD-II cable. The high-speed CAN bus segment exposed on the OBD-II port of the vehicle contains data related to the powertrain, hence it is possible to access to CAN data frames exchanged by the ECUs to control the dynamic of the vehicle.
The Ventus dataset is composed of a clean and an infected section. The former is used for training and validating the detection algorithms, while the latter is used for the performance evaluation of the detection algorithms. For the generation of the infected dataset, we build a threat model based on the attack scenarios considered by the analyzed algorithms. The final dataset is publicly available at <cit.>.
§.§.§ Clean dataset
The clean dataset is composed of 7 different CAN traces, including more than 8 million CAN messages corresponding to approximately 90 minutes of CAN traffic. The CAN traces are gathered in different driving sessions performed on different road types (urban, suburban, and highway), traffic conditions, weather conditions, and geographical areas (plain, hill, and mountain). The CAN traces include the ID, DLC, and payload of each CAN data frame associated with its relative timestamp.
The clean dataset includes 51 different message IDs, each one characterized by its own cycle time. The cycle time of each message ID is available to car makers and their suppliers in the DataBase for CAN (DBC) file, which is used to describe details of CAN communications. However, this file is kept confidential, and is not publicly available. Since the cycle time of each message might influence the detection outcome, we need to estimate the cycle times of the messages in the clean dataset for the definition of multiple attack scenarios, each one targeting a message with a different cycle time. The cycle time of each message is evaluated as the mean value of the inter-arrival times between two consecutive messages with the same ID, and is rounded to the nearest millisecond.
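The estimation procedure can be summarized by the following Python sketch (our own illustration):

def estimate_cycle_times(trace):
    # trace: iterable of (timestamp_in_seconds, can_id) tuples.
    last, deltas = {}, {}
    for timestamp, can_id in trace:
        if can_id in last:
            deltas.setdefault(can_id, []).append(timestamp - last[can_id])
        last[can_id] = timestamp
    # Mean inter-arrival time per ID, rounded to the nearest millisecond.
    return {i: round(1000 * sum(d) / len(d)) for i, d in deltas.items()}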
Figure <ref> shows on the y-axis the cycle time ct (expressed in milliseconds) evaluated for each message ID (depicted on the x-axis) on the clean dataset. As shown in Figure <ref>, the clean dataset is composed of 49 messages exhibiting a cyclic behavior, while 2 messages are identified as non-cyclic. The 49 cyclic messages can be grouped into 17 different cycle time classes, ranging from a minimum of 10 milliseconds (IDs 0x8 and 0x10) to a maximum of 1 second (ID 0x581). We remark that these results are achieved on the clean dataset at our disposal. We empirically verified that the cycle time evaluated on a single trace of the clean dataset presents extremely low variance compared with the cycle time evaluated on the other traces, hence we are confident that the results presented in Figure <ref> are representative of the real cycle time of the messages gathered from the same vehicle model.
§.§.§ Infected dataset
The threat model used for the generation of the infected dataset is built on the attack scenarios described in the related work <cit.>. The threat model is composed of two different attack scenarios: the message injection and the message removal attacks. The attack traces are generated in a laboratory environment for safety reasons. The laboratory setup is composed of a laptop computer, a Raspberry Pi 4 board, and a CANPico <cit.> device. The CAN bus is implemented through a breadboard. The laptop is connected via a PEAK CAN-USB device used to record the content of the CAN bus, while the Raspberry Pi is connected to the CAN bus via a CAN shield and is used to replay the normal traces gathered directly from our vehicle. The CANPico device has an integrated CAN transceiver and is connected directly to the CAN bus to generate the attacks. Since all transmitting devices are connected to the CAN bus via a CAN transceiver, re-transmissions, delays, arbitration and, in general, all the low-level details that might have been affected by simulation artifacts are handled directly by the transceivers.
Since the algorithms considered in this work are based on the analysis of the timings of the CAN messages, each attack scenario is replicated by targeting 10 different messages, each characterized by a different inter-arrival time. The list of the selected message IDs and their cycle-time are presented in Table <ref>.
Message injection
The message injection attack scenario is used to inject messages on the CAN bus to subvert its normal behavior, by exploiting modern drive-by-wire capabilities such as automatic emergency braking, lane assist, park assist, adaptive cruise control, automatic transmission, and other similar features.
The message injection scenario comprises different attacks, such as the replay attack, in which a message already seen in the CAN communication is injected at a later time on the network, the fuzzing attack, in which messages with altered field values are injected to study the consequences on the system, and the denial-of-service, in which messages with high priority are injected with a very high frequency to prevent the delivery of normal CAN data frames.
For the aim of our analysis, we do not focus on both fuzzing and denial-of-service for the following reasons:
* in case of a fuzzing attack to the ID field of the CAN data frame, the detection algorithms would fail to detect any ongoing attack since it is not possible to build the detection model for a never observed message;
* the fuzzing attack on the data field of the CAN data frame (with a legit message ID) can be considered as a replay attack on the same message;
* the denial-of-service attack scenario can be analyzed by considering a replay attack with a high frequency of injection.
Hence, for the aforementioned reasons, we only focus on the replay attack injection scenario.
The replay attack is conducted by injecting on the CAN communication a target message (usually selected after an initial phase of reverse engineering of the values encoded in the payload) with a particular injection frequency.
The final injection attack scenario considered in the experimental evaluation is composed of 50 different attack instances, in which each of the 10 selected message IDs is injected with a frequency of 1, 10, 25, 50, and 100 messages per second. The injected messages are equally distributed inside the 1-second time window. Each attack scenario is simulated on all the traces of the clean dataset, for a total of 350 CAN traces.
Removal attack
The removal attack scenario is used to remove messages from the CAN bus to subvert its normal behavior, preventing the ECUs from receiving data required for their functioning.
The removal attack is composed of two different attack scenarios, as already presented in <cit.>: the ECU shutdown and the ECU inhibition attacks. However, since the only difference between the two attacks is their duration, which does not impact the overall performance of the detection algorithms, in this work we only focus on the ECU inhibition scenario, in which target messages are completely removed from the CAN bus.
The final removal attack scenario considered in the experimental evaluation is composed of 10 different attack instances, in which each of the 10 selected message IDs is completely removed from the CAN communication. As for the previous attack scenario, this attack is also simulated on all the traces composing the clean dataset, for a total of 70 CAN traces.
§.§ OTIDS dataset
The OTIDS dataset is gathered from an unmodified, licensed KIA Soul vehicle <cit.>. The OTIDS dataset is constructed by logging CAN traffic via the OBD-II port from a real vehicle while performing message injection attacks.
The OTIDS dataset is composed of 4 different traces, one of them representing the clean dataset (“Attack free state”) used for training the algorithms, while the other traces represent 3 different injection scenarios. We also remark that all the traces composing the OTIDS dataset contain both data frame and remote frame messages. Since remote frames are not used by the algorithms tested in this paper and might change the outcome of the training process, we pre-processed the traces of the OTIDS dataset to retain only CAN data frames.
§.§.§ Clean dataset
The clean part of the OTIDS dataset is composed of a single trace, with a little more than 1.4 million CAN messages corresponding to approximately 10 minutes of CAN traffic. The clean CAN trace includes the ID, DLC, and payload of each CAN data frame associated with its relative timestamp. The clean part of the OTIDS dataset includes 45 different message IDs, each one characterized by its own cycle time. As for the Ventus dataset, we extracted the cycle times of each message from the clean trace (with the same process already described in <ref>), and the results of the analysis are shown in Figure <ref>.
From the results shown in Figure <ref>, all the 45 IDs found in the OTIDS dataset exhibit a cyclic behavior, with a minimum cycle time of 9 milliseconds (IDs 0x153, 0x164, and 0x220) and a maximum of 1 second (IDs 0x34, 0x42, 0x43, and 0x44). In the OTIDS dataset we found 9 different cycle time classes.
§.§.§ Infected dataset
The OTIDS dataset assumes a threat model extremely different from the one considered in the Ventus dataset. We remark that the OTIDS infected dataset is composed of three different types of injection attacks:
* Fuzzy: in the fuzzy attack scenario, messages of spoofed random CAN ID and data are injected in the CAN communication;
* Denial-of-Service: in the DoS attack scenario, messages with a CAN ID set to 0x0 are injected in the CAN communication with a high frequency;
* Impersonation: in the impersonation attack scenario, valid messages with ID 0x164 are removed from the network and the attacking node injects messages with the same ID.
In case of fuzzy and impersonation attacks, the injection starts after 250 seconds of normal CAN communication, while in case of the DoS attack scenario the attack starts at the beginning of the trace.
§ IMPLEMENTATIONS OF THE DETECTION ALGORITHMS
We observe that none of the related work considered in this paper is distributed together with a reference implementation. Moreover, several papers neglect many relevant details that are actually required to implement the proposed detection algorithm. In this section, we describe the additional assumptions used for the implementation of the detection algorithms. All assumptions are motivated and based on the maximization of the detection capabilities of the algorithms. We remark that all our reference implementation are publicly available at <cit.>, thus allowing researchers to easily replicate our experiments and benchmark novel time-based detection algorithms with respect to the state-of-the-art.
§.§ Implementation of Otsuka14
For the implementation of the detection algorithm we used the estimated cycle times (see Figures <ref> and <ref>) as the mean reception cycle used in the detection process. We remark that the original work only considers 2 IDs, of which only 1 is cyclic. The cyclic ID has a maximum deviation with respect to the mean cycle time of 2%, while the other ID has a maximum deviation of 30%. Based on these results, authors set a fixed detection threshold of δ = 5% from the expected cycle time. Hence any message received outside the valid time range of [ct^ID-5%, ct^ID+5%] is considered anomalous. The value of the threshold δ is defined as the threshold value that minimizes the number of false positives in the validation process.
We replicated the same experimental analysis on both datasets, and the results are presented in Figures <ref> and <ref> for the Ventus and OTIDS datasets, respectively. The two figures show the deviation from the evaluated cycle time for each message ID. The IDs of the messages are sorted by their cycle time in ascending order (left to right). The results depicted in both Figures <ref> and <ref> show that messages appearing on the CAN bus more often exhibit a higher deviation from the expected cycle time.
The value of δ that minimizes the overall number of false positives on the Ventus dataset is experimentally evaluated to be δ = 4%. While testing the best value of δ to minimize the false positives on OTIDS, however, we found that increasing the value of δ improves the validation results. However, since higher values of δ introduce a larger time window in which messages are not considered anomalous, the value used in the experimental evaluation is fixed at δ = 25%. In the experimental evaluation presented in the next section we configured the algorithm with these values for its detection purposes.
Moreover, the algorithm is designed to discard malicious CAN messages by using a waiting mechanism. With this mechanism, all the messages with the same CAN ID received before ct + δ are held and discarded as soon as another message with the same ID is received after ct + δ. However, this mechanism is based on the assumption that at least the first received message for each ID is legit, and its detection performance is highly affected in case this assumption is violated. To this aim, our implementation of considers all the messages with the same ID received before ct + δ as a single case: if at least one of the held messages is an injected frame then a single anomaly is raised, while if none of the held messages is anomalous only one false positive is counted. This design choice minimizes the false positive rate of this algorithm in case of high-frequency injection attacks, thus optimizing its detection results.
§.§ Implementation of Taylor15
For the implementation of we used the cycle times estimated on the clean datasets as the historical means required by the algorithm for the t-tests. For the experimental evaluation of the detection performance of we follow the same assumptions described in <cit.>.
We remark that in the original work, messages with a cycle time below 50ms represent more than 90% of the CAN IDs, while in the Ventus and the OTIDS datasets these messages are only 61.22% and 53.33% of the whole datasets, respectively (30 out of 49 and 24 out of 45 cyclic messages). This difference does not impact the detection performance of the algorithm; however, since this method is applicable only to messages having a cycle time below 50ms, in both datasets the algorithm has limited applicability compared to the original paper.
Moreover, the original work omits some details that prevent the algorithm from being directly implemented, hence some additional assumptions are made also for this algorithm.
The first additional assumption concerns the number of sequences 𝒮_q used for the evaluation of the anomaly score A(𝒮_q). This is a crucial aspect since it directly impacts the detection performance of the algorithm. As an example, consider a scenario in which a value of 𝒮_q = 10 is used, i.e., the anomaly score is evaluated every 10 t-tests. As described in <cit.>, each item of the sequence is the value of the highest t-test in a window of 1 second. Hence, in the considered scenario, an anomaly score A(𝒮_q) evaluated on a sequence of 10 t-tests implies that the algorithm is able to detect a single anomaly within a 10-second time window. This design choice would prevent a fair comparison between the detection performance of and the other algorithms, which are able to flag many different anomalies in a similar time window.
We tested different values of 𝒮_q for the evaluation of the anomaly score, from a minimum of 𝒮_q = 2 up to the whole series of t-tests. At the end of this analysis we discovered that the mean value of the A(𝒮_q) is minimally affected by the number of sequences used for its evaluation. Hence, we chose the value of 𝒮_q = 2 to allow the detection of anomalies in the smallest possible time window.
The second additional assumption is focused on the definition of the detection threshold. In <cit.> this aspect is not discussed since the presented evaluation is focused on the analysis of the ROC and AUC measures, which are threshold-independent. However, for the performance evaluation we need to define a threshold value used to distinguish between normal and anomalous time windows.
As a first step in the definition of the threshold value we analyzed the distribution of the anomaly scores on the clean datasets, and the results are shown in Figure <ref> for the Ventus dataset and in Figure <ref> for the OTIDS dataset.
From the analysis of the distribution of the anomaly scores presented in Figures <ref> and <ref> it is possible to design two different detection mechanisms for each dataset.
The first one is based on the analysis of the distribution range across the different traces, and defines a threshold value higher than the maximum anomaly score evaluated in the validation process. As an example, considering the results achieved on the Ventus dataset depicted in Figure <ref>, the anomaly scores are always below 2.0, hence it is possible to use this value as the threshold for the classification of the anomalies. In the detection process, any A(𝒮_q) ≥ 2.0 is considered an anomaly and all values below the threshold are considered legit.
The second detection mechanism is based on the similarity of the distributions with the normal distribution, hence it is possible to define a threshold using the mean value and the standard deviation of the anomaly scores, as already proposed in <cit.>. Anomaly detection methods based on the similarity with the normal distribution often define the normal range as [μ - 3 ×σ, μ + 3 ×σ], and consider any value outside it as anomalous. However, we remark that only about 99.73% of the distribution is covered with a value of k = 3, hence an anomaly detector based on this detection model introduces about 0.27% of false positives.
Since this last detection method introduces false positives that negatively impact the performance evaluation of this detector, our implementation of uses a fixed detection threshold of 2.0 on the Ventus dataset and of 1.0 on the OTIDS dataset to distinguish between valid and anomalous time windows.
§.§ Implementation of Cho16
The detection algorithm uses the estimated clock skew for the detection of attacks on the CAN communication. The algorithm used for the clock skew estimation is described in the original work (Algorithm 1) and, although the pseudo-code is well documented and described, some information is missing and requires additional assumptions.
The most critical assumptions are related to the initialization of the different parameters used for the evaluation of the identification error. Since the identification error is used for the detection task, it is critical to initialize these values correctly. The three most impactful parameters are the size of the window N, the initial value of the parameter P used in the procedure, and the number of standard deviations κ. While the size of the window N only impacts the detection delay of the algorithm, the initial value of the parameter P heavily impacts the computation of the identification error, thus affecting the detection capabilities of the algorithm. In an example described in the original work, N and κ are initialized to the values of 20 and 5, respectively. However, the value used to initialize P is not disclosed. We tested different combinations of N, P, and κ on the clean datasets, aiming to minimize false positives. The final values evaluated on the Ventus dataset are N = 10, P = 0.05, and κ=0.1, while on the OTIDS dataset the lowest number of false positives is given by the combination of N = 5, P = 0.001, and κ=2.5. We remark that these values were identified after a long process of testing on the clean dataset, since there is no description of how to identify the best parameters in the original work.
Moreover, since the threat model considered in assumes that the attack starts after 420 seconds of normal CAN communication, and the attacks on the Ventus dataset are generated from the beginning, we modified our implementation to allow the algorithm to evaluate the clock skew and identification errors on the clean base trace used for attack generation. This allows the algorithm to learn the clock skew and identification errors on legit values, which are then used as a reference against the attack scenarios.
§.§ Implementation of Gmiden16
The description of the detection algorithm in its original work contains all the details required for its functioning, hence we were able to produce a reference implementation that complies with the original design without the need for additional assumptions or design choices. In our version, we used the cycle time evaluated on the clean dataset as the reference inter-arrival time required by the algorithm.
§.§ Implementation of Song16
The detection algorithm presented in <cit.> is based on the assumption that CAN messages exhibit a fixed inter-arrival time. However, the experimental analysis presented in <ref> demonstrates that this assumption does not hold on our dataset, where more frequent messages have a higher deviation from the mean value. We remark that this phenomenon is well known in real CAN buses implemented in modern vehicles, and other real datasets exhibit the same behavior, due to possible delays and re-transmissions in CAN bus segments that have a relatively high usage (about 50% or higher). This is also acknowledged by producers of automotive ECUs, which consider deviations up to 15% of the reference cycle time to be within normal working parameters.
While these assumptions are included in the original paper, it appears that they are not necessary for the proposed detection algorithms.
We recall that includes two different algorithms, targeting message injection and denial of service, respectively. The algorithm for detecting message injection appears to be identical to , hence we reuse the same implementation.
The algorithm for detecting DoS attacks is clearly explained in the original paper, hence we produced a reference implementation that fully complies with the description provided by the authors.
§.§ Implementation of Moore17
In the original work presenting , the authors assume that the first 15 seconds of each trace are unaltered, and they use the first 5 seconds for the definition of the detection model. However, in the infected traces composing the Ventus dataset the attacks are generated starting at the beginning of each trace, as described in Section <ref>.
Since the Ventus dataset contains a clean section composed of more than 90 minutes of clean CAN traffic, we used the first 5 seconds of the traces composing the clean dataset for training . However, in the OTIDS dataset the attacks are generated after 250 seconds of normal traffic, hence we trained the detection model on the first 5 seconds of the infected trace.
Following the analysis presented in <cit.>, we also compared the maximum variance of the cycle time of each message ID with respect to the expected cycle time ct on the first 5 seconds of the traces composing the datasets. Results are presented in Figure <ref> and <ref> for the Ventus and OTIDS dataset, respectively.
From this analysis it is possible to notice that all the IDs exhibit a small variance from the expected cycle time in the first 5 seconds of the traces of the clean section of the datasets, with the exception of ID 0x405 for the Ventus dataset and of IDs 0x18 and 0x50 for the OTIDS dataset.
We remark that these results are comparable to the ones presented in <cit.> on the dataset at their disposal.
We recall that raises an alert if the time between consecutive messages with the same ID, Δ_t, is lower or greater than a threshold (see Section <ref> for additional details).
To prevent false positives, raises an anomaly only if three consecutive alerts are generated for the same ID. This design choice prevents a direct comparison of false positive and false negative rates with the other algorithms.
To allow performance comparisons without penalizing , given an anomaly we classify all the injected messages that generated one of the three consecutive alerts as true positives. If a legit message generated one of the alerts required for issuing an anomaly, we do not count that as a false positive. On the other hand, if an anomaly has been raised after three alerts all generated by legit messages, we consider that anomaly as a single false positive. This solution increases the overall detection performance of and makes it comparable to other algorithms in terms of ℱ-measure.
§.§ Implementation of Stabili19
The detection algorithm is based on the assumption that it is possible to create a detection model that raises 0 false positives in the validation phase. To this aim, uses a parameter k for the definition of the valid waiting time for each different ID.
We replicated the tuning of the parameter k following the description provided in <cit.> over both datasets, and the results of this analysis are presented in Figure <ref> and <ref> for the Ventus and OTIDS datasets, respectively.
At the end of this training process on the Ventus dataset, the minimum value of the parameter k that does not generate false positives on the clean dataset is equal to 2 (for 16 message IDs), while the maximum value is 8 (for a single message ID). On the OTIDS dataset, however, none of the IDs generates any false positive with a value of k = 2.
Moreover, focusing on the Ventus dataset and comparing the results presented in Figure <ref> with the ones presented in Figure <ref> it is possible to notice that messages with lower cycle time values exhibit a higher deviation from the expected cycle time and require a higher value of the parameter k to achieve 0 false positives. This interesting result is explained by considering that messages with lower cycle time (i.e. appearing on the CAN bus more frequently than the others) are more prone to delays and transmission errors over a real CAN bus that has a load comparable or higher to 50%, hence they require higher tolerance by any detection mechanism based on the analysis of the timings.
§.§ Implementation of Olufowobi20
The detection algorithm is composed of a training phase and a detection phase, which are presented in Algorithm 1 and Algorithm 2 of the original paper, respectively.
The algorithm is based on the assumption that the real model and its parameters are unknown and unavailable, since this information (specifically the precise message periods) is generally not disclosed by manufacturers.
For this reason, considers a detection model based on parameters derived from observations of the CAN communication. Hence, the training phase of is focused on the definition of the parameters required for the detection process: the estimated message period P̃_i = f_i,min and the release jitter J_i = f_i,max - f_i,min.
For the estimation of these parameters the transmission time C_i is required, which is defined as the time required to transmit a message on the CAN bus. However, we remark that it is not possible to identify the value of C_i by observing normal CAN communications; it should instead be learned by measuring the transmission time on the ECU transmitting the CAN message.
Since it is not possible to identify the value of C_i from the clean dataset, we used a worst-case estimation of this parameter in the training process. For the worst-case estimation of C_i we used the clean dataset to reconstruct the sequence of bits composing each CAN data frame and calculate the message size size(m_i). Then the value of C_i is defined as C_i = max(size(m_i)) / bitrate, where max(size(m_i)) is the maximum reconstructed bit size of CAN messages with the same ID i and bitrate is the nominal bitrate of the CAN bus. In our experiments, the nominal bitrate of both datasets is 500kbps.
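In code, this worst-case estimation reduces to a one-liner (sketch of ours):

def worst_case_transmission_time(frame_bit_sizes, bitrate=500_000):
    # Worst-case C_i: largest reconstructed frame size (in bits) for the ID
    # divided by the nominal bitrate (500 kbps for both datasets), in seconds.
    return max(frame_bit_sizes) / bitrate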
Another issue raised during the implementation of is related to the availability of the value of the precise message period P_i. We remark that P_i (as described in the original work) belongs to the list of parameters not available for the definition of the detection model. However, in the description of the detection process (Algorithm 2 of the original work) the precise message period P_i is used as one of the parameters of the algorithm. Since the precise message period P_i is different from the estimated message period P̃_i (which is computed in the training phase), in our implementation we defined P_i as the cycle time ct.
Moreover, during the implementation of the detection algorithm we discovered that the variable k (one of the inputs of the detection function) is modified (in Algorithm 2, row 7) but the modified value is never returned. This variable is used for the evaluation of the arrival time window of the next message, and is crucial for the detection task. In our implementation we modified the detection algorithm to return the value of k in case it is modified.
Despite the aforementioned issues raised during the implementation of , in its experimental evaluation we discovered that the latter modification introduced a significant downside that afflicts the detection performance of the algorithm. In particular, the issue is related to the modification of the value of k in case a message is marked as an anomaly. In this scenario, the algorithm updates the value of k and, in case of injection attacks, all the following messages are identified as anomalies since the arrival time window used for the detection task is out of sync with the tested message, resulting in thousands of false positives. To overcome this issue, we introduced an “update protection mechanism” on the value of k that triggers when a message is considered anomalous, reverting the value of k to its previous value.
§ EXPERIMENTAL EVALUATION
In this section, we present the experimental evaluation of the detection performance of the algorithms against the datasets (and their respective threat models) presented in Section <ref>.
To compare the detection performance of the different algorithms against the described attack scenarios we consider the ℱ-measure as the key performance indicator. The ℱ-measure is the harmonic mean of the precision and the recall of the detection performance. This indicator is commonly used in comparing intrusion detection systems. It ranges from 0 to 1, where values close to 0 denote the inability to detect any anomaly and values close to 1 denote the ability of the algorithm to detect all the anomalies with low false positives. A perfect detection algorithm exhibiting 100% precision and recall has an ℱ-measure equal to 1.
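Explicitly, denoting precision by P and recall by R, the ℱ-measure is computed as ℱ = 2 · P · R / (P + R).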
§.§ Detection results against the Ventus dataset
§.§.§ Message injection detection comparison
The performance evaluation of the algorithms against the message injection attack is presented in Figure <ref>. Figure <ref> is composed of 5 different subplots representing different attack scenarios. Top to bottom, these attack scenarios are the injection of 1, 10, 25, 50, and 100 messages per second.
The y-axis of each subplot of Figure <ref> shows the ℱ-measure evaluated for the detection algorithms, while the x-axis represents the injected message ID, ordered by ascending cycle time. The 10 reference message IDs used in these experiments are the ones presented in Table <ref> of Section <ref>, as they represent messages with different cycle times.
The detection results of each algorithm against a given combination of ID and injection frequency are represented as a boxplot. We use this graphical representation since it allows us to summarize concisely the detection results achieved against the same attack (same ID and injection frequency) across different infected traces. For readability purposes, we only show the main components of the boxplot: the 10_th, 25_th, 50_th (median), 75_th, and 90_th percentiles.
As an example, the top subplot of Figure <ref> refers to injection attacks performed with an injection frequency of 1 message per second. This subplot is divided into 10 “columns”, each referring to the injection of a different message ID. The first of these columns refers to the injection of message ID 0x10, and includes 6 boxplots drawn in different colors and having a small horizontal offset to improve readability. The first boxplot (red) summarizes the detection performance of , the second (blue) of , the third (dark orange) of , the fourth (cyan) of and , the fifth (green) of , while the last one (purple) of . We remark that is not included in this set of experiments since it is designed to detect only missing messages, and it cannot be applied against injection attacks. To highlight the trend related to the detection performance depending on the message cycle time, we also draw solid lines connecting the median values of all the boxplots related to the same algorithm.
We recall that is designed to support only messages with a cycle time lower than 50ms (see Section <ref> for additional details), hence this algorithm is only applicable to the first 4 message IDs having the lowest cycle times among the 10 considered in our experiments. To visually represent the inapplicability of this algorithm to the last 6 IDs we draw a blue × instead of the boxplot.
From the analysis of the results shown in Figure <ref> it is possible to notice different trends by either focusing on the comparison of the detection performance against the injection of the same volume of messages with different IDs (“horizontal” analysis) or by focusing on the difference between the performance against the increasing injection rate of the same message ID (“vertical” analysis). In both cases, however, we remark that our implementation of is not able to detect any anomaly in the considered attack scenarios, since the injection of messages does not introduce a significant deviation from the normal anomaly score used by this algorithm.
With the “horizontal” analysis it is possible to notice that by using and () the overall detection performance of the algorithms increases by simulating the attack on messages with higher cycle times, while the overall trend for is the opposite. By focusing on the results of and () with a “vertical” analysis it is possible to notice that achieves higher detection performance against the injection of a low volume of messages (≤ 10 messages per second), while for injection frequencies of at least 25 messages per second () converge to higher ℱ-measure values also against the injection of messages with low cycle times. Both algorithms are limited in their detection against the injection of the message with ID 0x10 (i.e., the message with the lowest cycle time in our dataset). We also remark that () is able to achieve an ℱ-measure of 1.0 against different injection scenarios (the top 2, 4, 6, and 8 messages with the highest cycle times against the injection of 10, 25, 50, and 100 messages per second, respectively), while achieves a perfect ℱ-measure only in a subset of these scenarios (the top 1 and 3 messages with the highest cycle times against the injection of 50 and 100 messages per second, respectively).
By focusing on with a “vertical” analysis however it is possible to notice that the overall detection performance of the algorithm increases by increasing the injection frequency, despite having high variance across different tests.
The detection results achieved by , however, are completely different and require a dedicated analysis. As presented in Figure <ref>, by comparing the detection performance of against the different injection frequencies it is not possible to identify any trend with either the “horizontal” or the “vertical” analysis.
We recall that uses a false-positive prevention system which raises an anomaly only after 3 consecutive alerts, as described in Section <ref>.
As an example, consider the case of the injection of a single message each second using the message with the lowest cycle time (ID 0x10). In this scenario the detection performance of is extremely low since the injected message arrives right after or just before another valid message, thus resulting in the generation of up to 2 alerts, which are not enough to cause the generation of an anomaly.
However, messages with higher cycle times have a smaller margin of error (comparing the experimental results for the definition of the parameter m discussed in Section <ref>), hence it is possible to detect anomalies more frequently.
One could expect that by increasing the injection frequency the overall detection performance should also increase; however, the experimental evaluation demonstrates that for high enough injection frequencies starts to generate an increasing number of false negatives (i.e., missed anomalies). To better explain this counter-intuitive behavior, we refer to an example. Consider the scenario of the injection of the message 0x581, which has ct = 1000ms. For this message, the value of the parameter m is 20ms, hence the time required for the identification of an alarm for this message is approximately 152ms (see Section <ref> for additional details on how anomalies are detected). Since in our dataset the injection is simulated by equally distributing the injected messages over the interested time window, by injecting messages with a frequency of 10 messages per second we are injecting a message every 100 milliseconds; thus the 152ms required for the identification of an alert is higher than the time between two consecutive injected messages, increasing the number of false negatives. This explains the two different trends that can be observed by comparing the “vertical” and “horizontal” analysis. By targeting messages with higher cycle times, the overall detection performance of increases. However, by increasing the injection frequency it is possible to increase the detection performance against the injection of messages with lower cycle times, although there is a huge increase in false negatives for messages whose time required for the identification of a single alert is higher than the time between two consecutive injected messages.
Finally, by analyzing the detection performance of “horizontally” it is possible to notice that the detection performance of the algorithm increases by increasing the cycle time of the injected message, in a trend similar to the one already observed for and (). However, with the “vertical” analysis it is possible to notice that the detection performance are not influenced by increasing the frequency of the injection. We also remark that the detection performance of against the injection of the message with the highest cycle time are always equal to 0.
§.§.§ Message removal detection comparison
To compare the detection performance of the only two algorithms supporting the detection against the message removal attack we present the results by means of percentage of detected anomalies, i.e. the number of alarms raised by the two detectors compared to the number of removed messages. We remark that it is possible to use this detection metric only in case the training of the detectors is based on a zero false positives approach, since it is impossible to distinguish between true positives and false positives otherwise.
The detection results of and against the message removal attack are presented in Figure <ref>, where the percentage of detected anomalies (y-axis) for each removed message ID (x-axis) is shown.
The percentage of detected anomalies is presented using box-plots. The detection performance of the two algorithms against this attack scenario is extremely consistent across the different missing IDs, hence it is nearly impossible to distinguish the different percentiles since they overlap with the median. We recall that is designed to support only messages with a cycle time below 50 milliseconds, hence it is possible to use it only against the removal of the first 4 message IDs.
From the analysis of the results depicted in Figure <ref> it is clear that our implementation of is not able to reliably detect anomalies against this attack scenario either, since the removal of messages from normal CAN communications does not introduce significant deviations of the anomaly score used by the algorithm.
The detection performance achieved by is close to 100%, although this ideal value is never reached. This aspect has already been addressed in <cit.> and is related to the introduction of the valid waiting time required to achieve zero false positives in the validation process.
§.§ Detection results against the OTIDS dataset
In this section we present the performance evaluation of the detection algorithms against the OTIDS dataset. To perform an evaluation of the detection algorithms, a labeled dataset is required, and since the OTIDS dataset is not labeled, we recreated the labels by following the description of the attacks. The three attack scenarios included in the OTIDS dataset are described as follows:
* Fuzzy attack: the attack starts after 250 seconds of normal traffic, and includes both normal and injected messages with 8 different message IDs: 0x153, 0x164, 0x1F1, 0x220, 0x2C0, 0x4B0, 0x4B1, and 0x5A0.
* Denial-of-Service attack: the attack starts at the beginning of the trace, and injects messages with ID 0x0.
* Impersonation attack: the attack starts after 250 seconds of normal traffic, removes all messages with ID 0x164 and inject messages with the same message ID mimicking its normal cycle time.
Following the attack description, we remark that it is not possible to distinguish between normal and injected messages in case of the fuzzy attack scenario, hence we can not use that particular attack scenario in our experiments, with only the DoS and the impersonation attack scenarios available for performance evaluation. However, we remark also that in the DoS attack scenario the injected message ID is not found in the training trace, thus making the detection task trivial by simply checking if there is a reference for the ID in the detection model. Hence, the only attack scenario from the OTIDS dataset that can be used for performance evaluation is the impersonation attack.
The comparison of the performance evaluation of the algorithms against the impersonation attack scenario of the OTIDS dataset is shown in Figure <ref>. Figure <ref> shows, for each detection algorithm, the ℱ-measure evaluated on the impersonation attack scenario.
The results shown in Figure <ref> show that in this particular attack scenario the algorithms able to achieve the highest detection performance are and , while the other detection techniques struggle to achieve an ℱ-measure close to 0.5, with failing to detect a single anomaly (ℱ-measure equal to 0). However, we remark that these results are relative to a single test case, and that by changing the injected message or its injection frequency the overall detection results might vary significantly, as observed previously in Section <ref>.
In the OTIDS impersonation attack scenario, the impersonated message ID 0x164 is one of the most frequent messages, with a cycle time of 9 milliseconds. The impersonation attack requires the removal of the target message after 250 seconds, and the injection of a single message with the same message ID every 9 milliseconds, mimicking the original frequency. In these conditions, we remark that the impersonation attack of the OTIDS dataset is actually a particular scenario of masquerading attack, in which the injected CAN messages substitute the normal ones.
Following the performance evaluation of the tested algorithms, we conducted a manual analysis of the detection results to understand the behavior of the detection algorithms. The first interesting result from this analysis is that , , and raise anomalies right after the start of the attack. By analyzing the beginning of the attack simulation, we discovered that in the first 18 milliseconds following the start of the attack there are 4 different messages with ID 0x164 instead of the expected 2. This implies that in this time window the attack scenario is closer to the message injection attack considered in the threat model of Ventus, allowing , , and to detect anomalies.
The second interesting result is that both and raised the majority of their alarms approximately 50 milliseconds after the start of the attack, being able to identify manipulated messages as anomalous. This implies that the evaluated detection performance of these two algorithms is to be considered against the impersonation/masquerade attack scenario rather than against the message injection attack. This interesting result might also be the cause of the low detection performance of both algorithms against a real message injection attack scenario, as presented earlier in Section <ref>. As a final remark, we highlight that the detection performance of is heavily affected by the values of its configuration parameters. However, we were not able to identify any combination of values that allows it to identify anomalies against the attacks on the OTIDS dataset.
§ CONCLUSIONS
This paper contributes to the state-of-the-art by (I) surveying and implementing eight different detection algorithms based on CAN message timing analysis; by (II) publicly releasing their reference implementations; and by (III) testing the implemented algorithms against two different datasets, to present a detailed comparison of the detection performance of the analyzed detection algorithms against the same threat model and using the same detection metrics (and detection rate).
The novel dataset used for our experimental evaluation <cit.>, which is composed of more than 90 minutes of training data and more than 400 CAN traces containing different labeled attacks, is publicly available to advance current solutions.
With respect to the current state-of-the-art, this work presents an empirical and unbiased comparison of timing-based detection algorithms against the threat model of two different datasets, by addressing reproducibility of the results and highlighting the limitations of the dataset already publicly available for the performance evaluation of CAN anomaly detection algorithms. All the implemented algorithms and the dataset used in our experimental evaluation are publicly available to enable further improvements on this research topic.
Our main motivation is the impossibility of a direct comparison of similar proposals, due to inherent limitations of the literature. Authors usually do not release the source code of their implementations. Detection algorithms are described with an insufficient level of detail, thus requiring additional assumptions for their implementation that might have a considerable impact on detection performance. Different algorithms are tested on different private datasets, and the same attack is often implemented in different ways. These issues make it impossible to demonstrate an advancement over the state of the art, and even to compare novel proposals against it. This work solves all the aforementioned issues, thus establishing a fair, transparent and open-source baseline that can be used by all researchers and industry practitioners.
|
http://arxiv.org/abs/2307.06067v1 | 20230712103543 | Cavity-mediated entanglement of parametrically driven spin qubits via sidebands | [
"V. Srinivasa",
"J. M. Taylor",
"J. R. Petta"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall"
] |
[email protected]
Department of Physics, University of Rhode Island, Kingston, RI 02881,
USA
Joint Quantum Institute, University of Maryland, College Park, Maryland
20742, USA
Joint Center for Quantum Information and Computer Science, University
of Maryland, College Park, Maryland 20742, USA
National Institute of Standards and Technology, Gaithersburg, Maryland,
20899, USA
Department of Physics and Astronomy, University of California–Los
Angeles, Los Angeles, California 90095, USA
Center for Quantum Science and Engineering, University of California–Los
Angeles, Los Angeles, California 90095, USA
We consider a pair of quantum dot-based spin qubits that interact
via microwave photons in a superconducting cavity, and that are also
parametrically driven by separate external electric fields. For this
system, we formulate a model for spin qubit entanglement in the presence
of mutually off-resonant qubit and cavity frequencies. We show that
the sidebands generated via the driving fields enable highly tunable
qubit-qubit entanglement using only ac control and without requiring
the qubit and cavity frequencies to be tuned into simultaneous resonance.
The model we derive can be mapped to a variety of qubit types, including
detuning-driven one-electron spin qubits in double quantum dots and
three-electron resonant exchange qubits in triple quantum dots. The
high degree of nonlinearity inherent in spin qubits renders these
systems particularly favorable for parametric drive-activated entanglement.
We determine multiple common resonance conditions for the two driven
qubits and the cavity and identify experimentally relevant parameter
regimes that enable the implementation of entangling gates with suppressed
sensitivity to cavity photon occupation and decay. The parametrically
driven sideband resonance approach we describe provides a promising
route toward scalability and modularity in spin-based quantum information
processing through drive-enabled tunability that can also be implemented
in micromagnet-free electron and hole systems for spin-photon coupling.
Cavity-mediated entanglement of parametrically driven spin qubits
via sidebands
J. R. Petta
August 12, 2023
===============================================================================
§ INTRODUCTION
Scaling to many-qubit systems represents a current challenge in the
implementation of quantum information processing <cit.>
due to the highly complex electronics required to control even a few
qubits in most realizations, combined with the need to minimize dissipation
of quantum information into the environment. One approach to addressing
this challenge is provided by modularity <cit.>,
which enables scalability by linking existing, relatively well-controlled,
and locally optimized few-qubit modules via robust long-range interactions.
For semiconductor spin qubits, which represent a promising quantum
information processing platform <cit.>,
such long-range interactions can be achieved by coupling spins to
photons in a microwave cavity using the approach of circuit quantum
electrodynamics (cQED) <cit.>.
Building on the promise of long coherence times for spins in silicon
<cit.>,
strong spin-photon coupling <cit.>
as well as coherent photon-mediated interaction of two single-electron
silicon spin qubits <cit.> have now
been achieved. While these results provide a path to scalability for
spin-based quantum information processing, tuning and scaling challenges
remain for applying this approach to more than two qubits. For resonant
cavity-mediated qubit-qubit coupling <cit.>, all qubit
frequencies must be tuned into simultaneous resonance with the cavity
frequency, and the micromagnets required in silicon for achieving
sufficient spin-charge coupling must be precisely positioned for each
qubit. In the standard dispersive approach to cavity-mediated coupling
<cit.>,
these constraints are partially relaxed as the cavity virtually mediates
the interaction and its frequency is distinct from those of the qubits.
However, an entangling interaction between the two qubits still requires
their (cavity-shifted) frequencies to be tuned into mutual resonance.
To allow for increased flexibility in achieving qubit-qubit entanglement,
off-resonant coupling approaches have been developed for cQED systems
with superconducting qubits <cit.>
and related two-qubit gates have also recently been explored in the
context of semiconductor qubits <cit.>.
These approaches enable qubits to be fixed at optimal operation points
where decoherence is minimized, while interactions are activated via
external microwave driving of either the coupling or one or more qubits.
In this work, we consider a pair of qubits based on electron spins
in quantum dots that interact via microwave photons in a superconducting
cavity, and that are also parametrically driven by classical external
electric fields (Fig. <ref>). For this system, we
formulate a model for entanglement between the two qubits that incorporates
mutually off-resonant qubit and cavity frequencies and makes use of
the Mollow triplet sidebands of the driven qubits <cit.>
to effectively provide multiple qubit transition frequencies for cavity-mediated
coupling. This approach enables highly tunable qubit-cavity photon
interactions and qubit-qubit entanglement using ac control through
the applied driving fields, without requiring dc tuning of the qubit
frequencies. The spin qubits can therefore be fixed at optimal operation
points that allow for maximal coherence times. The model we develop
can be mapped to a variety of qubit types. Here, we illustrate this
mapping for both single-electron spin qubits in double quantum dots
<cit.> and three-electron resonant exchange
(RX) qubits in triple quantum dots <cit.>
in the driven resonant regime <cit.>.
We determine common resonance conditions for the two driven qubits
and the cavity and identify parameters for implementing multiple entangling
gates. In contrast to the sideband-based gates obtained for the driven
resonant regime in prior work <cit.>,
entangling gates do not require sequences of multiple sideband pulses
and additionally exhibit suppressed sensitivity to cavity photon occupation,
as we verify through two-qubit gate fidelity calculations. The enhanced
spectral flexibility inherent in the approach we describe provides
a promising route toward scalability and modularity in spin-based
quantum information processing through drive-enabled tunable entanglement
that can also be implemented in micromagnet-free systems for spin-photon
coupling <cit.>.
§ THEORETICAL FRAMEWORK
We now develop a general theoretical description of the sideband-based,
cavity-mediated entangling gates between driven qubits discussed in
this work. To illustrate the approach in more concrete terms, we consider
two specific types of quantum dot-based electron spin qubits for which
strong spin-photon coupling has been realized (Fig. <ref>
and Appendix <ref>): (1) One-electron spin qubits
in double quantum dots with micromagnets for spin-charge coupling
<cit.>; and
(2) three-electron RX qubits in triple quantum dots, which couple
directly to photons through their intrinsic electric dipole moments
<cit.>.
Both types of qubits, as well as several other classes of spin qubits
such as two-electron singlet-triplet qubits <cit.>,
quantum dot hybrid qubits <cit.>, and hole spin
qubits in silicon and germanium double quantum dots <cit.>,
can be manipulated electrically by parametrically driving the detuning
ϵ_j between the (outer) two dots of qubit j <cit.>.
We write this driving field as
ϵ_j(t)≡ϵ_0,j+2ℱ_jcos(ω_j^dt+ϕ_j^')
in terms of a dc operation point ϵ_0,j, as well as the
amplitude 2ℱ_j, frequency ω_j^d, and phase
ϕ_j^' of an ac drive that sinusoidally modulates the
detuning about ϵ_0,j as a function of time. Here, we fix
the qubits at the “sweet spot” detuning operation points ϵ_0,j=0
for j=1,2, where the first derivative of the qubit frequency vanishes
for both one-electron spin and symmetric RX qubits, enabling leading-order
protection from charge noise <cit.>.
The operation point ϵ_0,j=0 also maximizes the effective
spin-photon coupling strength for one-electron spin qubits <cit.>,
while optimization of the RX qubit-photon coupling strength involves
multiple parameters (see Appendix <ref> for
more details). In what follows, all sums are over the qubit index
j=1,2 unless otherwise noted.
As shown in Appendix <ref>, for both one-electron
spin qubits and RX qubits driven according to Eq. (<ref>),
the system Hamiltonian including the cavity and driving fields with
ϵ_0,j=0 can be written in the qubit basis as (ħ=1)
H_p ≡ω_ca^†a+∑_jω_j/2σ_j^z+∑_jg_jσ_j^x(a+a^†)
+∑_j2Ω_jcos(ω_j^dt+ϕ_j)σ_j^x,
where a^† and a are photon creation and annihilation
operators for the fundamental mode of the cavity with frequency ω_c,
σ_j^α with α=x,y,z and σ_j^z≡|1⟩_j⟨1|-|0⟩_j⟨0|
are Pauli operators and ω_j is the transition frequency
for spin qubit j, g_j is the strength of the coupling between
spin qubit j and photons in the fundamental cavity mode, and 2Ω_j
is the effective driving amplitude for spin qubit j. We see from
Eq. (<ref>) that, for both types of qubits, the detuning drive
[Eq. (<ref>)] acting on the electron charge degrees
of freedom is translated into an effective transverse (σ_x)
drive on the qubit. Note that we have redefined the phases ϕ_j
of the drives with respect to Eq. (<ref>) to take into
account sign changes occuring in the derivation of H_p (see Appendix
<ref> for details).
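As an illustration of how Eq. (<ref>) can serve as the starting point for numerics, the following minimal Python sketch assembles the time-dependent matrix H_p(t) in a truncated Fock space using Kronecker products. All parameter values below are placeholders chosen for illustration only and are not the device parameters considered in this work.

import numpy as np

# Pauli matrices and a truncated cavity mode (hbar = 1; frequencies are angular).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)
N = 3                                          # photon-number cutoff n = 0, 1, 2
a_c = np.diag(np.sqrt(np.arange(1, N)), k=1)   # cavity annihilation operator
idc = np.eye(N, dtype=complex)

def emb(q1, q2, cav):
    # Embed single-subsystem operators as qubit 1 (x) qubit 2 (x) cavity.
    return np.kron(np.kron(q1, q2), cav)

# Placeholder parameters in units of 2*pi*GHz (not values used in this work).
w_c = 2 * np.pi * 6.0                          # cavity frequency omega_c
w = 2 * np.pi * np.array([5.5, 5.9])           # qubit frequencies omega_j
g = 2 * np.pi * np.array([0.04, 0.04])         # qubit-cavity couplings g_j
Om = 2 * np.pi * np.array([0.10, 0.15])        # drive amplitudes Omega_j
w_d = w.copy()                                 # resonant driving, omega_j^d = omega_j
phi = np.zeros(2)

sx_ops = [emb(sx, id2, idc), emb(id2, sx, idc)]
sz_ops = [emb(sz, id2, idc), emb(id2, sz, idc)]
a_full = emb(id2, id2, a_c)

def H_p(t):
    # Cavity + qubit splittings + qubit-cavity couplings + classical drives.
    H = w_c * a_full.conj().T @ a_full
    for j in range(2):
        H = H + 0.5 * w[j] * sz_ops[j]
        H = H + g[j] * sx_ops[j] @ (a_full + a_full.conj().T)
        H = H + 2 * Om[j] * np.cos(w_d[j] * t + phi[j]) * sx_ops[j]
    return H

print(np.allclose(H_p(0.0), H_p(0.0).conj().T))   # Hermiticity check -> True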
To derive sideband-mediated entangling interactions from Eq. (<ref>),
we first transform H_p to a frame rotating at the frequencies
of both drives and the cavity via
U_1=e^-it(ω_ca^†a+∑_jω_j^dσ_j^z/2),
which is equivalent to an interaction picture for resonant driving
of the qubits, ω_j^d=ω_j. Defining the cavity-drive
detunings Δ_j≡ω_c-ω_j^d and the qubit-drive
detunings δ_j≡ω_j-ω_j^d, and making
a rotating wave approximation for |Δ_j|≪ω_c+ω_j^d,2ω_j^d,
we drop rapidly oscillating terms ∼ e^± i(ω_c+ω_j^d)t,∼ e^±2iω_j^dt
and find
H_p^ rf ≡ U_1^†H_pU_1-iU_1^†U̇_1
≈ H_0+V(t),
H_0 ≡∑_jδ_j/2σ_j^z+∑_jΩ_j(e^-iϕ_jσ_j^++e^iϕ_jσ_j^-),
V(t) ≡∑_jg_j(e^-iΔ_jtσ_j^+a+e^iΔ_jtσ_j^-a^†),
where we assume g_j≪2Ω_j and take V(t)
as a time-dependent perturbation to the other terms in Eq. (<ref>).
We initially take the limit g_j→0 and diagonalize H_0.
Choosing the phases of the driving fields to be ϕ_j=0 for
j=1,2 to simplify the analysis, we find
H_0 =∑_jδ_j/2σ_j^z+∑_jΩ_jσ_j^x
≡∑_jW_j/2(cosθ_jσ_j^z+sinθ_jσ_j^x),
where we have for convenience re-expressed H_0 in terms of W_j
and θ_j in the last line of Eq. (<ref>), which serve
to redefine the Pauli operators σ_j^z≡|e⟩_j⟨e|-|g⟩_j⟨g|
and σ_j^x in terms of the dressed qubit basis {|e⟩_j,|g⟩_j} .
We can then diagonalize this Hamiltonian via a rotation around the
y axis for each qubit, described by
U_q=e^-i∑_jθ_jσ_j^y/2
where tanθ_j=2Ω_j/δ_j, which yields
H_0,q ≡ U_q^†H_0U_q
=∑_jW_j/2σ_j^z
with the dressed qubit frequencies W_j≡√(δ_j^2+4Ω_j^2).
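The diagonalization in Eqs. (<ref>)-(<ref>) is easy to check numerically for a single qubit: rotating by θ_j about y brings H_0 to (W_j/2)σ_j^z. A minimal sketch (Python; the detuning δ_j and amplitude Ω_j are placeholder values):

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

delta, Omega = 0.3, 0.8                    # placeholder detuning and drive amplitude
H0 = 0.5 * delta * sz + Omega * sx         # single-qubit block of H_0 with phi_j = 0

theta = np.arctan2(2 * Omega, delta)       # tan(theta_j) = 2*Omega_j / delta_j
Uq = expm(-1j * theta * sy / 2)            # rotation U_q about the y axis
W = np.sqrt(delta**2 + 4 * Omega**2)       # dressed qubit frequency W_j

print(np.allclose(Uq.conj().T @ H0 @ Uq, 0.5 * W * sz))   # True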
Applying the same rotation U_q [Eq. (<ref>)] to the
qubit-cavity coupling perturbation V(t) then yields
the full Hamiltonian H_q≡ H_0,q+V_q(t) in
the dressed qubit basis, where V_q(t)≡ U_q^†V(t)U_q.
Finally, we transform to a second frame rotating at the dressed qubit
frequencies W_j for the driven qubits via
U_2=e^-i∑_jW_jσ_j^z/2,
which is equivalent to the interaction picture with respect to H_0,q
[see Eq. (<ref>)]. In this frame, we find the Hamiltonian
V_I(t) ≡ U_2^†H_qU_2-iU_2^†U̇_2
=A(t)a^†+A^†(t)a.
In Eq. (<ref>), we have defined
A(t) ≡∑_jA_j
≡∑_jg_j/2e^iΔ_jt[sinθ_jσ_j^z-(1-cosθ_j)e^iW_jtσ_j^+.
+.(1+cosθ_j)e^-iW_jtσ_j^-],
which represents a sum of time-dependent qubit operators. We note
that A(t), and therefore V_I(t), have
the three characteristic frequencies Δ_j and Δ_j± W_j
for qubit j, which correspond to the Mollow triplet frequencies
<cit.> consisting of the center and sideband frequencies
of each driven qubit. In the present case, these frequencies are uniformly
shifted by the cavity frequency ω_c due to the rotating-frame
transformation U_1 [Eq. (<ref>)]. Driving the qubits
on resonance such that ω_j^d=ω_j gives δ_j=0
and W_j=2Ω_j, so that sinθ_j=1 and cosθ_j=0.
In this case, Eq. (<ref>) reduces to
A_r(t) ≡∑_jg_j/2[e^iΔ_jtσ_j^z-e^i(Δ_j+2Ω_j)tσ_j^+
+e^i(Δ_j-2Ω_j)tσ_j^-],
and we find that the three characteristic frequencies for driven qubit
j become Δ_j,Δ_j±2Ω_j.
In order to determine the gate operations generated by the time-dependent
Hamiltonian V_I(t) in Eq. (<ref>), we use
the Magnus expansion <cit.> up to second order to approximate
the time evolution operator. For g_j≪ W_j such that λ≡ g_j/W_j
is a small parameter, we write U(τ)≈ e^-iH_ effτ,
where H_ eff(τ)=λH̅_1(τ)+λ^2H̅_2(τ)
represents an effective Hamiltonian to O(λ^2)
with
λH̅_1(τ) ≡1/τ∫_0^τdt V_I(t),
λ^2H̅_2(τ) ≡1/2iτ∫_0^τdt∫_0^tdt^' [V_I(t),V_I(t^')].
We first consider the term λH̅_1(τ).
From Eqs. (<ref>) and (<ref>), we see that the integral
in Eq. (<ref>) is evaluated via the corresponding integrals
of A(t) and its Hermitian conjugate. The first-order
term in the effective Hamiltonian is then given by
λH̅_1(τ) =∑_jg_j/2[sinθ_j f(Δ_j)σ_j^z-(1-cosθ_j)f(Δ_j+W_j)σ_j^+
+(1+cosθ_j)f(Δ_j-W_j)σ_j^-]a^†+ H.c.,
where we have defined the integral
f(μ) ≡1/τ∫_0^τdt e^iμ t
with frequencies μ=Δ_j,Δ_j± W_j. The first-order
term λH̅_1(τ) describes the direct (∼ g_j)
interaction of each qubit with the cavity and includes both red (∼σ_j^+a,σ_j^-a^†)
and blue (∼σ_j^+a^†,σ_j^-a) sideband
terms. These interaction terms can be used to generate entanglement
via sequences of multiple sideband pulses <cit.>.
We now set Δ_j=p_jη and W_j=q_jη with p_j,q_j
integers and η≡2π/τ. We also assume μ≠0. In
this case, we find that μ=r_jη with r_j=p_j,p_j± q_j≠0
also an integer, so that all integrals f(μ) in Eq. (<ref>)
vanish and λH̅_1(τ)=0. Thus, when Δ_j
and W_j are both integer multiples of the same frequency η,
we can completely eliminate the first-order sideband interaction terms
[Eq. (<ref>)] from H_ eff(τ).
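This commensurability argument can be checked directly: f(μ)=(e^{iμτ}-1)/(iμτ), which vanishes whenever μτ is a nonzero integer multiple of 2π. A minimal numerical check (the integers p_j and q_j below are placeholders, not values used elsewhere in this work):

import numpy as np

tau = 20e-9                                # evolution time unit (s)
eta = 2 * np.pi / tau                      # eta = 2*pi/tau

def f(mu):
    # f(mu) = (1/tau) * integral_0^tau exp(i*mu*t) dt.
    return 1.0 if np.isclose(mu, 0.0) else (np.exp(1j * mu * tau) - 1.0) / (1j * mu * tau)

# Placeholder integers p_j, q_j with p_j, q_j, and p_j +/- q_j all nonzero.
p, q = [13, 5], [6, 2]
for j in range(2):
    Delta, W = p[j] * eta, q[j] * eta
    print([abs(f(mu)) for mu in (Delta, Delta + W, Delta - W)])   # all ~ 1e-16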
To O(λ^2), the effective interaction Hamiltonian
generating the gate operation is now given entirely by the second-order
term
H_ eff(τ)=λ^2H̅_2(τ)=1/2iτ∫_0^τdt∫_0^tdt^'[V_I(t),V_I(t^')].
Using Eqs. (<ref>) and (<ref>), we can write the commutator
in Eq. (<ref>) as
[V_I(t),V_I(t^')] =[V_I,V_I^']_1+[V_I,V_I^']_2,
[V_I,V_I^']_1 ≡∑_j{[A_j,A_j^']a^†2+[A_j^†,A_j^'†]a^2.
+.([A_j,A_j^'†]+[A_j^†,A_j^'])a^†a} ,
[V_I,V_I^']_2 ≡ A^†A^'-A^'†A,
where [V_I,V_I^']_1 contains terms involving
only one-qubit operators and [V_I,V_I^']_2
contains all two-qubit operator terms along with some additional one-qubit
operator terms, as we show below. We note in particular that [V_I,V_I^']_2
does not involve any photon operators.
The form of the qubit-qubit interaction terms in the effective Hamiltonian,
and thus the generated two-qubit gate, is determined by [V_I,V_I^']_2=A^†A^'-A^'†A.
For j,k=1,2, we can write this commutator as
[V_I,V_I^']_2 =A^†A^'-A^'†A
=∑_j,k(A_j^†A_k^'-A_j^'†A_k)
=∑_j(A_j^†A_j^'-A_j^'†A_j)
+A_1^†A_2^'-A_1^'†A_2+A_2^†A_1^'-A_2^'†A_1.
The terms A_j^†A_j^'-A_j^'†A_j
in [V_I,V_I^']_2 lead to one-qubit terms
in H_ eff(τ), while the qubit-qubit interaction
terms arise entirely from the last line of Eq. (<ref>).
In Appendix <ref>, we show that for Δ_j,W_j≠0
and W_j≠|Δ_j|,|2Δ_j|,
the complete one-qubit contribution to H_ eff(τ)
appearing at second order and originating from [V_I,V_I^']_1
and [V_I,V_I^']_2 reduces to
Λ =-∑_jg_j^2[δ_j^2+2Δ_jδ_j+W_j^2/2W_j(Δ_j^2-W_j^2)]σ_j^z(a^†a+1/2).
These terms represent parametric driving-induced dispersive shifts
that can be tuned via the drive amplitudes and frequencies, and are
well defined in the absence of decay provided W_j≠|Δ_j|.
Such shifts can be harnessed for drive-tunable qubit measurement <cit.>.
Two specific cases of interest are δ_j=0 and δ_j≠0,
corresponding to resonant and off-resonant driving of the qubits,
respectively. For δ_j=0 and W_j≠0, we find from
Eq. (<ref>) that Λ≠0 and the dispersive shift
terms persist. In this case, the qubit frequencies are W_j=2Ω_j
and the dispersive shift terms in Eq. (<ref>) become
Λ = ∑_jχ_jσ_j^z(a^†a+1/2),
χ_j ≡-g_j^2Ω_j/(Δ_j^2-4Ω_j^2).
As we show in Appendix <ref> for multiple example
cases, the effects of Λ on the dynamics
for δ_j=0 can effectively be eliminated in specific situations
of interest by an appropriate choice of parameters and operation times.
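For orientation, the size of these resonant-driving shifts is straightforward to evaluate numerically. The sketch below computes χ_j from the expression above and checks it against the δ_j→0 limit of the general coefficient appearing in Λ; all parameter values are placeholders rather than those of Table <ref>.

import numpy as np

def chi(g, Omega, Delta):
    # Resonant-driving dispersive shift chi_j (delta_j = 0, W_j = 2*Omega_j).
    return -g**2 * Omega / (Delta**2 - 4 * Omega**2)

def lam_coeff(g, delta, W, Delta):
    # General coefficient of sigma_j^z (a^dag a + 1/2) in Lambda.
    return -g**2 * (delta**2 + 2 * Delta * delta + W**2) / (2 * W * (Delta**2 - W**2))

# Placeholder values in rad/s (2*pi times MHz); not the Table values of this work.
g, Omega, Delta = 2*np.pi*40e6, 2*np.pi*150e6, 2*np.pi*650e6
print(chi(g, Omega, Delta) / (2 * np.pi))                                      # chi_j in Hz
print(np.isclose(chi(g, Omega, Delta), lam_coeff(g, 0.0, 2 * Omega, Delta)))   # True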
On the other hand, for δ_j≠0, we can choose Δ_j
and W_j such that Λ=0 (see Appendix <ref>
for further details). The effective Hamiltonian H_ eff(τ)
then consists purely of qubit-qubit interaction terms, which arise
from the terms with j≠ k in Eq. (<ref>).
§ DRIVE-TAILORED ENTANGLING GATES VIA SIDEBAND
RESONANCES
We next focus on the qubit-qubit interaction terms in H_ eff(τ),
which are given by
V_qq(τ) ≡1/2iτ∫_0^τdt∫_0^tdt^'(-A_1A_2^'†+A_1^'A_2^†- H.c.).
Using Eqs. (<ref>) and (<ref>) in Appendix <ref>
to write Eq. (<ref>) in terms of functions h(μ_1,μ_2),
h(μ_2,μ_1), and their complex conjugates, we
can identify resonance conditions μ_1=μ_2 that each give
rise to specific qubit-qubit terms in V_qq(τ).
Since μ_1∈{Δ_1,Δ_1+W_1,Δ_1-W_1}
and μ_2∈{Δ_2,Δ_2+W_2,Δ_2-W_2} ,
there are nine resonance conditions. Each condition corresponds to
resonance between a center or sideband frequency of qubit 1 and a
center or sideband frequency of qubit 2. These conditions and the
corresponding qubit-qubit terms appearing in the effective Hamiltonian
H_ eff(τ) are derived in Appendix <ref>
and summarized in Table <ref>, where we have defined
Δ_j^±≡Δ_j± W_j.
We now specifically consider the case of resonant qubit driving (δ_j=0
or ω_j^d=ω_j), so that W_j=2Ω_j. The
Hamiltonian for this case is given by V_I(t) [Eq. (<ref>)]
with A(t)=A_r(t) as given in Eq. (<ref>).
The characteristic frequencies are therefore Δ_j,Δ_j+2Ω_j,
and Δ_j-2Ω_j, corresponding to center, red sideband,
and blue sideband frequencies, respectively, for driven qubit j
(shifted with respect to the cavity frequency ω_c). Assuming
that we choose parameters such that the effects of the drive-induced
dispersive shift terms in Eq. (<ref>)
can be neglected (see Appendix <ref> for details),
the evolution generated by the effective Hamiltonian H_ eff(τ)
reduces to that generated by the qubit-qubit interaction V_qq(τ)
in Eq. (<ref>). Thus, we consider only the dynamics generated
by V_qq(τ) in what follows.
For each resonance condition, Table <ref> gives the
form of the interaction V_qq≡ V_qq(τ) for
resonant qubit driving (second-to-last column), assuming that the
dressed qubit frequencies W_1 and W_2 satisfy the associated
constraints (third column). These constraints are based on Eq. (<ref>)
and are obtained for each resonance condition by applying the condition
μ_1≠μ_2 to the remaining resonance conditions in Table
<ref>, so that all qubit-qubit interaction terms
except for the specified interaction V_qq vanish (see Appendix
<ref>). We define a corresponding ideal two-qubit
operation generated by V_qq as U_m≡ U(τ_m)=e^-iV_qqτ_m,
where τ_m≡ mτ=2π m/η with m=0,1,2,… represents
the corresponding gate time and is an integer multiple of τ.
By adjusting the drive amplitudes 2Ω_j and frequencies ω_j^d
to tune to a particular resonance condition and set W_1 and W_2
appropriately, it is then possible to select desired qubit-qubit interaction
terms and two-qubit entangling gates. Examples of universal entangling
gates are given in the last column of Table <ref>.
The drives can also be used to switch off a given interaction by tuning
the sidebands away from the corresponding resonance condition. We
emphasize that: (1) multiple two-qubit interaction terms exist even
with mutually off-resonant frequencies (i.e., for j≠ k, ω_j≠ω_k≠ω_c);
and (2) the cavity photon operators a,a^† do not appear
explicitly in V_qq, suggesting suppression of errors due to cavity
photon decay in the sideband-based entangling gate approach we propose
in this work. Our analysis of expected gate performance in Sec. <ref>
quantitatively demonstrates the presence of this suppressed cavity
photon sensitivity.
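In practice, identifying which of the nine conditions a given choice of drive frequencies and amplitudes satisfies amounts to comparing the three cavity-shifted Mollow frequencies of each driven qubit. A small helper along these lines is sketched below (Python; the example frequencies are placeholders chosen so that only condition 7 is met, and the numerical tolerance is illustrative):

import numpy as np

def mollow(Delta, W):
    # Cavity-shifted center, red-sideband, and blue-sideband frequencies of a driven qubit.
    return {"center": Delta, "red": Delta + W, "blue": Delta - W}

def resonances(D1, W1, D2, W2, tol=1.0):
    # All pairs (mu_1, mu_2) with mu_1 = mu_2, i.e. the conditions of Table <ref>.
    f1, f2 = mollow(D1, W1), mollow(D2, W2)
    return [(k1, k2) for k1, v1 in f1.items() for k2, v2 in f2.items()
            if abs(v1 - v2) < tol]

# Placeholder example: eta/2*pi = 50 MHz (tau = 20 ns), resonant driving W_j = 2*Omega_j.
eta = 2 * np.pi * 50e6
D1, W1 = 17 * eta, 4 * eta          # Delta_1 = p_1*eta, W_1 = q_1*eta
D2, W2 = 19 * eta, 6 * eta          # chosen so that Delta_1 - W_1 = Delta_2 - W_2 only
print(resonances(D1, W1, D2, W2))   # [('blue', 'blue')]  -> resonance condition 7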
To more concretely illustrate the approach, we now consider the effective
qubit-qubit interaction and entangling gates generated between resonantly
driven qubits (δ_j=0, W_j=2Ω_j) for the specific
resonance conditions 7 and 9 in Table <ref>. First,
we consider Δ_1^-=Δ_2^- (resonance condition
7), which is equivalent to Δ_1-W_1=Δ_2-W_2. As
described in Appendix <ref>, the resonance
condition can also be written as ω_1+2Ω_1=ω_2+2Ω_2
for resonantly driven qubits and describes resonance between the blue
sidebands of both qubits [Fig. <ref>(a)]. Setting
Δ_j=p_jη and W_j=q_jη with p_j,q_j
integers leads to the equivalent condition p_1-q_1=p_2-q_2.
We also define the integer w≡ p_1-q_1=p_2-q_2 such
that Δ≡Δ_1^-=Δ_2^-=wη. From Table
<ref>, the constraints associated with resonance
condition 7 are W_1≠ W_2,2W_2,W_2/2. Assuming these
constraints are satisfied, the qubit-qubit interaction takes the form
V_qq=-𝒥(σ_1^+σ_2^-+σ_1^-σ_2^+)
with coupling strength 𝒥≡ g_1g_2/4Δ.
This interaction can be used to generate the i SWAP
gate, which together with single-qubit rotations constitutes a universal
set of quantum gates <cit.> and also represents a key
element of recently proposed improvements to the implementation of
quantum error correction using the surface code <cit.>.
Since V_qq is independent of the photon operators a,a^†,
U_m≡ e^-iV_qqτ_m acts nontrivially only on the
qubits and we can work in a subspace of fixed photon number n.
Thus, we now project all states and operators into the subspace defined
by P_n≡|n⟩⟨n|. For notational simplicity, we use
U_m to denote the evolution operators within the n-photon
two-qubit subspace {|ee,n⟩,|eg,n⟩,|ge,n⟩,|gg,n⟩}
in the remainder of this work unless otherwise specified. Defining
Σ_x≡|eg⟩⟨ge|+|ge⟩⟨eg| (where we have
suppressed the photon number state |n⟩ for convenience), we
can write V_qq=-𝒥Σ_x and U_m=e^i𝒥τ_mΣ_x.
In the full two-qubit dressed basis {|ee⟩,|eg⟩,|ge⟩,|gg⟩} ,
U_m takes the form
U_m =([ 1 0 0 0; 0 cos(𝒥τ_m) i sin(𝒥τ_m) 0; 0 i sin(𝒥τ_m) cos(𝒥τ_m) 0; 0 0 0 1 ]).
In order to obtain an i SWAP gate U_i SW,
we set τ_m=mτ=π/2𝒥. We choose the initial state
|ψ_i⟩=|eg⟩ for our analysis. To cancel the dynamics
due to the dispersive shift terms in Eq. (<ref>) for
this case, we also set χ_1=χ_2. As shown in Appendix
<ref>, both the generation of the i SWAP
gate and the drive-induced dispersive shift cancellation can be achieved
for resonance condition 7 by choosing parameters that satisfy the
constraints in Eqs. (<ref>) and (<ref>).
These relations are satisfied for multiple sets of parameters. For
the analysis in this work, we choose the set of parameters shown for
resonance condition 7 in Table <ref>. The evolution
time unit in the Magnus expansion becomes τ=2π/η=20 ns,
yielding an i SWAP gate time τ_m=mτ=800 ns.
The ideal gate evolution generated by V_qq at time τ_m
for resonance condition 7 and these parameters approximates well the
numerical evolution directly due to Eq. (<ref>) according
to the time-dependent Schrödinger equation and in the absence of qubit
and cavity decay, with a fidelity F_0≈0.998 (we choose
the subspace with n=0,1,2 and the initial state |ψ_i⟩=|eg,0⟩
for the numerical analysis; see Appendix <ref>).
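The ideal gate itself is simple to reproduce numerically. The sketch below (two-qubit dressed basis ordered {|ee⟩,|eg⟩,|ge⟩,|gg⟩}) exponentiates V_qq=-𝒥Σ_x with 𝒥 fixed by the 800 ns gate time quoted above, confirms that 𝒥τ_m=π/2 reproduces the i SWAP matrix of Eq. (<ref>), and checks that halfway through the gate the state generated from |eg⟩ is maximally entangled (a √(i SWAP)).

import numpy as np
from scipy.linalg import expm

# Two-qubit dressed basis ordering: |ee>, |eg>, |ge>, |gg>.
ee, eg, ge, gg = np.eye(4, dtype=complex)
Sigma_x = np.outer(eg, ge) + np.outer(ge, eg)      # |eg><ge| + |ge><eg|

tau_m = 800e-9                                     # iSWAP gate time quoted in the text
J = np.pi / (2 * tau_m)                            # coupling such that J*tau_m = pi/2
V_qq = -J * Sigma_x                                # resonance condition 7 interaction

U_m = expm(-1j * V_qq * tau_m)
U_iSW = np.array([[1, 0, 0, 0],
                  [0, 0, 1j, 0],
                  [0, 1j, 0, 0],
                  [0, 0, 0, 1]], dtype=complex)
print(np.allclose(U_m, U_iSW))                     # True: the iSWAP form at J*tau_m = pi/2

# Halfway through the gate the state obtained from |eg> is maximally entangled;
# the concurrence of a pure state a|ee>+b|eg>+c|ge>+d|gg> is 2|ad - bc|.
psi = expm(-1j * V_qq * tau_m / 2) @ eg
a, b, c, d = psi
print(abs(2 * (a * d - b * c)))                    # 1.0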
Resonance condition 9 in Table <ref> is given by
Δ_1^-=Δ_2^+.
For resonantly driven qubits, this condition can also be written as
ω_1+2Ω_1=ω_2-2Ω_2 (see Appendix <ref>)
and describes resonance between the blue sideband of qubit 1 and the
red sideband of qubit 2 [Fig. <ref>(b)]. For Δ_j=p_jη
and W_j=q_jη, the resonance condition can also be expressed
as p_1-q_1=p_2+q_2. Accordingly, we now define w≡ p_1-q_1=p_2+q_2
such that Δ≡Δ_1^-=Δ_2^+=wη. Assuming
the constraints W_1≠-W_2,-2W_2,-W_2/2
associated with resonance condition 9 are satisfied (note that these
constraints are always satisfied for W_1,2>0), the qubit-qubit
interaction is V_qq=𝒥(σ_1^+σ_2^++σ_1^-σ_2^-)
with coupling strength 𝒥≡ g_1g_2/4Δ.
We again note that V_qq is independent of the photon operators
a,a^† and generates the gate U_m≡ e^-iV_qqτ_m
within the n-photon subspace at time τ_m=mτ. In terms
of Σ_x^'≡|ee⟩⟨gg|+|gg⟩⟨ee|,
we find V_qq=𝒥Σ_x and U_m=e^-i𝒥τ_mΣ_x,
yielding
U_m =([ cos(𝒥τ_m) 0 0 -i sin(𝒥τ_m); 0 1 0 0; 0 0 1 0; -i sin(𝒥τ_m) 0 0 cos(𝒥τ_m) ])
in the full two-qubit dressed basis. The gate at τ_m=mτ=-π/2𝒥
is analogous to an i SWAP gate but acts in the subspace
spanned by {|ee⟩,|gg⟩} . We denote this gate,
which we refer to as the double-excitation gate, by U_i DE.
As the gate is related to U_i SW via a rotation of qubit
2, U_i DE together with single-qubit rotations also constitutes
a universal set of quantum gates. For our analysis, we choose the
initial state |ψ_i⟩=|ee⟩ and also set χ_1=-χ_2
to cancel the dynamics due to the dispersive shift terms in Eq. (<ref>)
(Appendix <ref>). Simultaneous generation of
the gate U_i DE and cancellation of the drive-induced dispersive
shifts for resonance condition 9 is possible by choosing parameters
that satisfy the constraints in Eqs. (<ref>) and (<ref>).
As in the case of resonance condition 7, these relations are satisfied
for multiple sets of parameters. Here, we choose the parameters specified
in Table <ref> for resonance condition 9. The evolution
time unit in the Magnus expansion is again τ=2π/η=20 ns,
yielding a gate time τ_m=mτ=200 ns for U_i DE.
Comparison of the ideal gate evolution generated by V_qq at time
τ_m for resonance condition 9 and these parameters with the
numerical evolution directly due to Eq. (<ref>) according
to the time-dependent Schrödinger equation and in the absence of qubit
and cavity decay again yields a fidelity F_0≈0.998 (as
described in Appendix <ref>, we again choose
the subspace with n=0,1,2 along with the initial state |ψ_i⟩=|ee,0⟩
for the numerical analysis).
We therefore find that, for both resonance conditions 7 and 9 and
appropriately selected parameters, the dynamics due to V_I with
cavity photon operators explicitly included are well approximated
by the dynamics generated by just the two-qubit interaction V_qq,
from which cavity photon operators are absent. In the next section,
we show that this absence of explicit cavity photon dependence in
the effective Hamiltonian is manifested in the full dynamics as suppressed
sensitivity of these sideband-based entangling gates to cavity photon
decay.
§ SIDEBAND GATE PERFORMANCE IN THE PRESENCE
OF QUBIT AND CAVITY DECAY
To evaluate the performance of the sideband resonance-based gates
U_i SW and U_i DE and quantitatively illustrate
the reduced dependence of the dynamics on cavity photons, we use a
master equation analysis and numerically calculate the gate fidelity
for resonance conditions 7 and 9 in the presence of qubit and cavity
decay. Here, we assume that dephasing in the original qubit basis
with rate γ_j for qubit j and cavity photon loss with
rate κ are the dominant sources of decoherence, as is relevant
for silicon quantum dots <cit.>. We write the master
equation as <cit.>
ρ̇ =-i[H_p,ρ]+∑_jγ_j/2(σ_j^zρσ_j^z-ρ)
+κ/2(2aρ a^†-a^†aρ-ρ a^†a),
with H_p given by Eq. (<ref>). Following steps similar
to those used to obtain the interaction-picture Hamiltonian V_I
[Eq. (<ref>)], we transform the master equation in Eq. (<ref>)
to the interaction picture. Moving to a rotating frame via U_1
[Eq. (<ref>)], making a rotating wave approximation for
|Δ_j|≪ω_c+ω_j^d,2ω_j^d,
choosing ϕ_j=0, applying U_q [Eq. (<ref>)]
to change to the dressed qubit basis, and moving to the interaction
picture via U_2 yields, after setting δ_j=0 for resonant
qubit driving and dropping rapidly oscillating terms ∼ e^±2iW_jt,
ρ̇_I =-i[V_I,ρ_I]+∑_jγ_j/2(σ_j^+ρ_Iσ_j^-+σ_j^-ρ_Iσ_j^+-ρ_I)
+κ/2(2aρ_Ia^†-a^†aρ_I-ρ_Ia^†a),
where ρ_I≡ U_2^†U_q^†U_1^†ρ U_1U_qU_2.
Equation (<ref>) is the master equation describing the
dynamics in the interaction picture. For the numerical calculations,
we again work in the photon subspace with n=0,1,2 and set γ_1=γ_2≡γ
for simplicity. To analyze the effects of qubit and cavity decay on
the performance of the entangling gates, we calculate the fidelity
F(τ_m)≡ Tr[ρ_I^(0)(τ_m)ρ_I(τ_m)],
where ρ_I(τ_m) denotes the final state at
time τ_m for the evolution obtained by numerically integrating
Eq. (<ref>) and ρ_I^(0)(τ_m)
denotes the final state for the ideal evolution given by γ=κ=0.
We calculate this fidelity as a function of γ and κ
for the resonance conditions 7 and 9 using the parameter sets in Table
<ref> and the initial states chosen above for the
ideal gates U_i SW and U_i DE. The initial state
is ρ_I(0)=|ψ_i⟩⟨ψ_i|=|eg,0⟩⟨eg,0|
for resonance condition 7 and ρ_I(0)=|ψ_i⟩⟨ψ_i|=|ee,0⟩⟨ee,0|
for resonance condition 9.
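The numerical procedure described above can be sketched compactly with QuTiP (assumed available). The fragment below builds V_I(t) term by term, adds the interaction-picture collapse operators of Eq. (<ref>), and evaluates F(τ_m) by comparing the evolutions with and without decay. The parameters are placeholders that satisfy resonance condition 7 and the integer conditions Δ_j=p_jη, W_j=q_jη, but they are not the Table <ref> values and do not impose the χ_1=χ_2 constraint, so the resulting numbers are illustrative only and will not reproduce Fig. <ref>.

import numpy as np
from qutip import basis, destroy, qeye, sigmam, sigmap, sigmaz, tensor, mesolve

# Hilbert space: qubit 1 (x) qubit 2 (x) cavity truncated at n = 0, 1, 2.
Nc = 3
q1 = lambda op: tensor(op, qeye(2), qeye(Nc))
q2 = lambda op: tensor(qeye(2), op, qeye(Nc))
a = tensor(qeye(2), qeye(2), destroy(Nc))

# Placeholder angular frequencies satisfying Delta_j = p_j*eta, W_j = q_j*eta
# and resonance condition 7 (blue-blue); not the Table values of this work.
eta = 2 * np.pi * 50e6                       # tau = 2*pi/eta = 20 ns
Delta = np.array([17.0, 19.0]) * eta         # Delta_j = omega_c - omega_j^d
W = np.array([4.0, 6.0]) * eta               # W_j = 2*Omega_j (resonant driving)
g = 2 * np.pi * np.array([25e6, 25e6])       # qubit-cavity couplings g_j

def phase(nu, sign=1):
    # exp(+/- i nu t) coefficient for the time-dependent Hamiltonian terms.
    return lambda t, args=None: np.exp(sign * 1j * nu * t)

# V_I(t) = A_r(t) a^dag + h.c., assembled term by term.
H_t = []
ops = [(q1(sigmaz()), q1(sigmap()), q1(sigmam())),
       (q2(sigmaz()), q2(sigmap()), q2(sigmam()))]
for j, (sz_j, sp_j, sm_j) in enumerate(ops):
    for op, nu in [(sz_j, Delta[j]), (-sp_j, Delta[j] + W[j]), (sm_j, Delta[j] - W[j])]:
        H_t.append([0.5 * g[j] * op * a.dag(), phase(nu, +1)])
        H_t.append([0.5 * g[j] * op.dag() * a, phase(nu, -1)])

rho0 = tensor(basis(2, 0), basis(2, 1), basis(Nc, 0)).proj()     # |eg, 0>
tlist = np.linspace(0.0, 800e-9, 801)

def final_state(gamma, kappa):
    c_ops = [np.sqrt(kappa) * a,
             np.sqrt(gamma / 2) * q1(sigmap()), np.sqrt(gamma / 2) * q1(sigmam()),
             np.sqrt(gamma / 2) * q2(sigmap()), np.sqrt(gamma / 2) * q2(sigmam())]
    return mesolve(H_t, rho0, tlist, c_ops=c_ops).states[-1]

rho_ideal = final_state(0.0, 0.0)                    # gamma = kappa = 0 reference
rho_decay = final_state(2*np.pi*1e3, 2*np.pi*10e3)   # placeholder gamma, kappa
print((rho_ideal * rho_decay).tr().real)             # F(tau_m) = Tr[rho_I^(0) rho_I]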
In Fig. <ref>, we plot the error 1-F(τ_m)
for the two resonance conditions and corresponding two-qubit entangling
gates. We find theoretical entangling gate fidelities F>0.995 for
the full range of κ shown (up to κ=100 kHz)
and γ≲1 kHz (γ≲10 kHz)
for resonance condition 7 (9). While γ and κ are varied
over three orders of magnitude in both cases, the error for both resonance
conditions depends more strongly on qubit decay γ than on
cavity decay κ. This reduced dependence on κ is expected
for the dispersive regime, in which the cavity virtually mediates
the qubit-qubit interaction in the absence of direct qubit-photon
interaction, and was also found in the results of Ref. <cit.>
for the dispersive regimes relative to the driven resonant regime
of cavity-mediated coupling. We note, however, that the gates derived
here are based on interactions via sideband resonances, as in the
driven resonant regime.
For resonance condition 7, given by Δ_1^-=Δ_2^-
[Fig. <ref>(a)], the resonant blue sidebands of
the driven qubits are highly detuned from the cavity frequency ω_c,
with Δ/2π=Δ_1^-/2π=Δ_2^-/2π=0.65 GHz
[see Fig. <ref>(a)]. We see that the error varies
over approximately three orders of magnitude with γ but over
less than one order of magnitude with κ. The suppressed sensitivity
to κ is expected given the large detuning between the qubit
sidebands and the cavity. On the other hand, for resonance condition
9, given by Δ_1^-=Δ_2^+ [Fig. <ref>(b)],
the resonant sidebands – the blue sideband of qubit 1 and the red
sideband of qubit 2 – are close to ω_c. Here, Δ/2π=Δ_1^-/2π=Δ_2^+/2π=-0.1 GHz
[see Fig. <ref>(b)]. In this case, we see that the
error again varies over approximately three orders of magnitude with
γ, but varies over less than two orders of magnitude with
κ. The increased variation with κ relative to resonance
condition 7 reflects the smaller detuning Δ of the resonant
qubit sidebands from the cavity. However, even in this case, we find
that the sensitivity of the error to κ is suppressed relative
to the sideband-based two-qubit gates in the driven resonant regime
(compare Fig. 7 in Ref. <cit.>).
While the reduced sensitivity to cavity decay is consistent with the
dispersive regime, it also reflects the absence of explicit cavity
dependence in the effective interaction Hamiltonian V_qq generating
the two-qubit entangling gates [Eq. (<ref>) and Table <ref>].
We have seen (Sec. <ref>) that for appropriately
chosen parameters, the gates generated by V_qq closely approximate
the dynamics due to the full interaction-picture Hamiltonian V_I
in Eq. (<ref>), where a and a^† are explicitly
present in general [see Eq. (<ref>)]. Thus, the effective
Hamiltonian we derive here illustrates that tuning the parametric
drive frequencies and amplitudes with the remaining parameters set
appropriately effectively enables suppression of the sensitivity to
cavity decay.
§ CONCLUSIONS
In this work, we have developed an approach for achieving long-range
interactions between a pair of driven spin qubits via cavity-mediated
coupling combined with sideband resonances. Our approach is applicable
to a variety of qubit types that can be controlled via parametric
driving, including one-electron spin qubits in double quantum dots,
three-electron RX qubits in triple quantum dots, and hole spin qubits,
and enables highly tunable qubit-qubit interactions whose nature can
be tailored via the driving fields. The interactions can also be switched
on and off using only ac control, without requiring dc tuning of the
qubits away from optimal operation points and thus allowing for improved
qubit coherence relative to resonant and standard dispersive approaches
for cavity-mediated qubit coupling.
We note that the approach we describe here is based on the driving
of inherently nonlinear effective two-level systems (i.e., qubits),
whereas commonly used off-resonant coupling approaches for weakly
anharmonic superconducting qubits such as transmons effectively involve
driving multilevel systems <cit.>. As transmon anharmonicities
are typically no more than a few hundred MHz, rapid and high-fidelity
gates in these systems are limited by leakage for sufficiently large
drive amplitudes and pulse bandwidths such that transitions to states
outside the qubit space can be excited, in the absence of additional
techniques that compensate for weak anharmonicity <cit.>.
Limits to the entangling fidelity and scalability of cross-resonance
<cit.> and FLICFORQ <cit.>
gates also exist due to small anharmonicities, required qubit frequency
spacing, available bandwidth for control, and spurious interaction
terms ∼σ_1^zσ_2^z that cannot be eliminated
for conventional transmons <cit.>. By contrast, spin qubit
systems such as those considered in this work are characterized by
highly nonlinear spectra in which differences of >1 GHz
in transition frequencies are routinely realized, including within
hybrid cQED systems <cit.>. Therefore,
spin qubits should in principle allow for greater flexibility in the
choice of amplitudes, frequencies, and gate times for achieving desired
high-fidelity gates via parametric driving, without requiring the
added complexity of low-anharmonicity compensation techniques.
For our parametrically driven, sideband-based approach, we expect
that limits on the driving amplitudes and entangling rates for implementing
high-fidelity gates will instead arise primarily from the requirements
that Δ_j and W_j are both integer multiples of the
same frequency η=2π/τ to eliminate the first-order interaction
in Eq. (<ref>), which sets a lower bound on τ and
thus τ_m since η≤|Δ_j|,W_j, together
with the conditions g_j≪ W_j required for the validity of
the effective Hamiltonian, the tuning of the drive amplitudes and
frequencies to a desired resonance condition and interaction (Table
<ref>), and the constraints for eliminating other
interaction terms and dynamics due to the parametric drive-induced
dispersive shifts. As we have shown in this work, multiple sets of
experimentally relevant <cit.> parameters exist for
which these requirements can be simultaneously satisfied (see, e.g.,
Table <ref>) in order to select desired and eliminate
undesired interaction terms.
We have analyzed specific examples of sideband-based entangling gates
that include a 200-ns double-excitation gate U_i DE, which
is generated via a blue-red sideband resonance and does not exist
for the standard dispersive regime in the absence of driving fields
<cit.>. Furthermore,
the rates of the entangling gates described in this work are set by
𝒥∝Δ^-1, in contrast to the typical ∼Δ_j^-1
scaling for standard dispersive entangling gate rates (where Δ_j=ω_c-ω_j
for resonant qubit driving). As it is possible to have Δ<Δ_j
for multiple sideband resonance conditions (see, e.g., Table <ref>),
the corresponding entangling gates can potentially be more rapid than
those in the standard dispersive regime. As the gates do not require
sequences of multiple qubit-cavity sideband pulses, the potential
also exists for gate speed improvements relative to the sideband-based
gates in the driven resonant regime considered in prior work <cit.>.
Unlike the resonant and standard dispersive approaches, realizing
cavity-mediated entangling interactions via the sideband resonance
method we describe here does not rely on simultaneous mutual resonance
among multiple qubit and cavity photon frequencies. Instead, several
sideband resonance conditions are available for generating entanglement
between dressed qubits even with all qubit and cavity frequencies
mutually off-resonant, providing enhanced spectral flexibility. As
a result, the sideband resonance-based approach represents a potential
alternative to the challenging tuning required to bring single-spin
qubit frequencies into simultaneous resonance via precisely oriented
micromagnets that has been essential to spin-spin coupling demonstrations
in silicon to date <cit.>. Together
with the suppressed sensitivity to cavity decay expected from our
analysis of example entangling gates, these features render the approach
we present in this work favorable for scaling and promising for the
implementation of modular quantum information processing with spin
qubits.
Supported by Army Research Office Grants W911NF-15-1-0149 and W911NF-23-1-0104.
§ HAMILTONIAN FOR CAVITY-COUPLED DRIVEN
SPIN QUBITS
The Hamiltonian H_p in Eq. (<ref>) describes parametrically
driven qubits coupled via the fundamental mode of microwave cavity
photons and forms the basis for the sideband-based cavity-mediated
entangling gates derived in this work. Here, we show how we obtain
H_p for the two specific examples of driven spin qubits illustrated
in Fig. <ref>.
§.§ Driven one-electron spin qubits in double quantum dots
We first consider two one-electron spin qubits in double quantum dots
(DQDs) coupled via a microwave cavity <cit.>.
In the following analysis, we take into account only the lowest-energy
orbital level in each dot. The charge dipole of the electron in each
DQD couples to the electric field of a microwave cavity photon <cit.>,
and the spin of the electron couples to the charge via spin-orbit
coupling and/or a magnetic field gradient <cit.>. We
focus on electrons occupying the lowest-energy valley states within
silicon quantum dots, for which spin-charge coupling is achieved through
gradient fields produced by a micromagnet <cit.>.
Assuming coupling to only the fundamental cavity photon mode with
frequency ω_c, we write the system Hamiltonian including
parametrically driven detuning as
H_s =ω_ca^†a+H_d+∑_j=1,2g_c,jτ_j^z(a+a^†),
H_d ≡1/2∑_j=1,2[ϵ_j(t)τ_j^z+2t_jτ_j^x+B_j^zs_j^z+B_j^xτ_j^zs_j^x],
where τ_j^z≡|L⟩_j⟨L|-|R⟩_j⟨R| with
|L⟩_j and |R⟩_j the lowest-energy orbital in the
left and right dots of DQD j, respectively, s_j^z≡|↑⟩_j⟨↑|-|↓⟩_j⟨↓|
is the Pauli z operator for the electron spin in DQD j, and
the other Pauli orbital (charge) and spin operators are defined accordingly.
The remaining parameters in Eqs. (<ref>) and (<ref>)
are the detuning ϵ_j between the orbital levels in the
left and right dot, the interdot tunnel coupling 2t_j, the Zeeman
splittings B_j^z and B_j^x due to the net magnetic field
components along the z and x axes, respectively (due to both
the external and micromagnet fields <cit.>), and the charge-cavity
coupling strength g_c,j for DQD j.
We describe sinusoidal (ac) driving of the detuning for each DQD via
Eq. (<ref>) and choose the “sweet spot” detuning
operation points ϵ_0,j=0 for j=1,2 where the first
derivative of the charge qubit frequency vanishes, enabling leading-order
protection from charge noise <cit.>.
As in the main text, all sums are over the qubit index j=1,2 unless
otherwise noted. We first apply the rotation
U_c=e^-i(π/4)∑_jτ_j^y
to the charge subspace. The system Hamiltonian H_s becomes
H_s^' =U_c^†H_sU_c
=ω_ca^†a+H_d^'-∑_jg_c,jτ_j^x(a+a^†)
-∑_jℱ_jcos(ω_j^dt+ϕ_j^')τ_j^x,
H_d^' =1/2∑_j=1,2(2t_jτ_j^z+B_j^zs_j^z-B_j^xτ_j^xs_j^x).
Writing the transformed DQD Hamiltonian H_d^' in the rotated
charge-spin product basis {|+,↑⟩_j,|-,↓⟩_j,|+,↓⟩_j,|-,↑⟩_j} ,
where |±⟩_j=(|L⟩_j±|R⟩_j)/√(2)
are the double-dot charge eigenstates for ϵ_0,j=0, reveals
a block-diagonal structure with two decoupled subspaces that we label
as ℋ_a,j and ℋ_b,j and that are spanned
by {|+,↑⟩_j,|-,↓⟩_j}
and {|+,↓⟩_j,|-,↑⟩_j} ,
respectively <cit.>.
Full diagonalization of the DQD low-energy space including spin for
ϵ_0,j=0 is then achieved by applying
U_d=e^i∑_j(Φ_a,jα̂_j^y+Φ_b,jβ̂_j^y)/2,
where α̂_j^y≡-i(|+,↑⟩_j⟨-,↓|-|-,↓⟩_j⟨+,↑|),
β̂_j^y≡-i(|+,↓⟩_j⟨-,↑|-|-,↑⟩_j⟨+,↓|),
and tanΦ_a(b),j=B_j^x/(2t_j± B_j^z).
For |2t_j-B_j^z|≪2t_j+B_j^z, tanΦ_a,j≪tanΦ_b,j
and the degree of mixing between |-,↓⟩_j and |+,↑⟩_j
is much smaller than that between |-,↑⟩_j and |+,↓⟩_j.
The eigenstates in the subspace ℋ_a,j can then be approximated
as <cit.> |0⟩_j≈|-,↓⟩_j
and |3⟩_j≈|+,↑⟩_j, and the corresponding
eigenvalues (for ħ=1) are ω_0,j=-𝒲_j/2
and ω_3,j=𝒲_j/2 with 𝒲_j≡√((2t_j+B_j^z)^2+(B_j^x)^2).
Setting Φ_j≡Φ_b,j for notational convenience, the
eigenstates in the subspace ℋ_b,j are given by
|1⟩_j ≡sin(Φ_j/2)|+,↓⟩_j+cos(Φ_j/2)|-,↑⟩_j,
|2⟩_j ≡cos(Φ_j/2)|+,↓⟩_j-sin(Φ_j/2)|-,↑⟩_j
and the corresponding eigenvalues are ω_1,j=-𝒱_j/2
and ω_2,j=𝒱_j/2 with 𝒱_j≡√((2t_j-B_j^z)^2+(B_j^x)^2).
Defining the operators σ_j^kl≡|k⟩_j⟨l|,
we find in the DQD eigenstate basis
H̃_s =U_d^†H_s^'U_d
=ω_ca^†a+∑_j∑_k=0^3ω_k,jσ_j^kk
-∑_jd̃_j[ℱ_jcos(ω_j^dt+ϕ_j^')+g_c,j(a+a^†)],
where we have defined
d̃_j ≡ U_d^†τ_j^xU_d
=d_j^01σ_j^01+d_j^02σ_j^02+d_j^13σ_j^13+d_j^23σ_j^23+ H.c.,
with d_j^01=-d_j^23≈sin(Φ_j/2) and
d_j^02=d_j^13≈cos(Φ_j/2) for |2t_j-B_j^z|≪2t_j+B_j^z.
The Hamiltonian in Eq. (<ref>) describes each DQD in
terms of a multilevel picture in which the ground state of the electron
is |0⟩_j≈|-,↓⟩_j and the highest excited
state is |3⟩_j≈|+,↑⟩_j, while the dominant
charge-spin character of |1⟩_j and |2⟩_j for ϵ_0,j=0
depends on the relative magnitudes of 2t_j and B_j^z <cit.>.
To reduce the DQD description to an approximate two-level picture
corresponding to a spin qubit, we choose the case 2t_j>B_j^z
(illustrated in Fig. 2(a) of Ref. <cit.>) such that
|1⟩_j≈|-,↑⟩_j is the first excited state
and |2⟩_j≈|+,↓⟩_j is the second excited
state for each DQD. We also assume ω_1,j-ω_0,j≪ω_2,j-ω_1,j,
which is equivalent to (𝒲_j-𝒱_j)/2𝒱_j≪1,
as well as ℱ_j/𝒱_j,g_c,j/𝒱_j≪1.
To first order in these small parameters, Eq. (<ref>)
can be written in the form
H_p^(1) ≡ω_ca^†a+∑_jω_j/2σ_j^z+∑_jg_jσ_j^x(a+a^†)
+∑_j2Ω_jcos(ω_j^dt+ϕ_j)σ_j^x,
where we have defined σ_j^z≡|1⟩_j⟨1|-|0⟩_j⟨0|
and the effective spin qubit frequencies ω_j≡ω_1,j-ω_0,j=(𝒲_j-𝒱_j)/2,
2Ω_j≡ℱ_j|sin(Φ_j/2)|
is the effective amplitude of the drive acting on the spin qubit,
g_j≡ g_c,j|sin(Φ_j/2)| is
the effective spin-photon coupling strength, and the sign of sin(Φ_j/2)
has been taken into account by making the replacement ϕ_j^'→ϕ_j
and redefining the phase of a accordingly in Eq. (<ref>).
The Hamiltonian H_p^(1) is identical in form to
Eq. (<ref>). Finally, we note that the Hamiltonian for DQD
charge qubits <cit.> at the operation points ϵ_0,j=0
[described by setting B_j^z=B_j^x=0 in Eq. (<ref>)]
can also be written in a form analogous to Eq. (<ref>), with
σ_j^z,x→τ_j^z,x, ω_j→2t_j,
2Ω_j→ℱ_j, and g_j→ g_c,j.
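A short numerical sketch of this reduction may be useful: given placeholder DQD parameters with 2t_j>B_j^z, the effective spin-qubit frequency ω_j, spin-photon coupling g_j, and drive amplitude 2Ω_j follow directly from the expressions above.

import numpy as np

def dqd_spin_qubit(t2, Bz, Bx, g_c, F):
    # Effective parameters of a driven one-electron DQD spin qubit at epsilon_0 = 0.
    # t2 = 2t_j, Bz = B_j^z, Bx = B_j^x, g_c = charge-cavity coupling, F = drive amplitude.
    Wcal = np.sqrt((t2 + Bz)**2 + Bx**2)      # curly W_j
    Vcal = np.sqrt((t2 - Bz)**2 + Bx**2)      # curly V_j
    Phi = np.arctan2(Bx, t2 - Bz)             # tan(Phi_j) = B_j^x / (2t_j - B_j^z)
    omega = (Wcal - Vcal) / 2.0               # spin qubit frequency omega_j
    g_eff = g_c * abs(np.sin(Phi / 2.0))      # spin-photon coupling g_j
    drive = F * abs(np.sin(Phi / 2.0))        # effective drive amplitude 2*Omega_j
    return omega, g_eff, drive

# Placeholder inputs in 2*pi*GHz, with 2t_j > B_j^z as assumed in the text.
vals = dqd_spin_qubit(t2=2*np.pi*9.0, Bz=2*np.pi*5.8, Bx=2*np.pi*0.4,
                      g_c=2*np.pi*0.05, F=2*np.pi*0.5)
print([v / (2 * np.pi) for v in vals])        # omega_j, g_j, 2*Omega_j in GHz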
§.§ Three-electron resonant exchange qubits in triple quantum dots
We now consider two resonant exchange (RX) qubits, each defined by
the spin states of three electrons in a triple quantum dot <cit.>,
that interact via a microwave cavity <cit.>. In contrast
to one-electron spin qubits, each RX qubit couples directly to the
electric field of microwave cavity photons via an intrinsic electric
dipole moment. This dipole moment arises from the admixture of polarized
charge states in the qubit states, without requiring spin-orbit coupling,
magnetic gradients, or the fabrication of additional device elements
such as micromagnets. The Hamiltonian given in Eq. (29) of Ref. <cit.>
that is used to describe RX qubits coupled to a microwave cavity in
the driven resonant regime has the same general form as Eq. (<ref>).
Here, we briefly summarize the theory used to derive this Hamiltonian
for the case of RX qubits and adapt it to the case of silicon triple
quantum dots.
An effective Hamiltonian for each RX qubit can be obtained from a
Hubbard model for electrons confined within a linear triple quantum
dot <cit.>. This model can be used to calculate
a charge stability diagram (Fig. 1(b) of Ref. <cit.>)
that describes the triple dot in terms of the lowest-energy charge
configuration (n_1,n_2,n_3) (where n_i denotes
the occupation number for dot i and the occupation numbers are
ordered from the left dot to the right dot) as a function of the gate
voltages applied to the three dots, which set the on-site energies
(-ε_1,-ε_2,-ε_3).
For fixed V_ tot≡∑_iε_i, the lowest-energy
charge configuration depends on both the detuning ϵ≡(ε_3-ε_1)/2
and the relative middle dot on-site energy V_m≡-ε_2+1/2(ε_1+ε_3).
Distinct charge configurations are coupled via the left-center and
center-right interdot tunneling amplitudes t_l and t_r,
respectively.
References <cit.> and <cit.> consider a
three-electron system in the subspace of the charge configurations
(1,1,1), (2,0,1), and (1,0,2).
The relevant operation point is illustrated in Fig. 1(b) of Ref. <cit.>.
Here, we focus on a silicon triple dot and assume that a sufficiently
large (≳100 mT <cit.>) static magnetic
field is applied to the triple dot. Furthermore, we assume that excited
valley states are well-separated in energy from the lowest-energy
valley manifold (by a valley splitting energy E_ V≳100 μ eV
<cit.>), such that
a single-orbital picture is valid. We can then define a spin qubit
using three-electron states in the lower-energy S=1/2 spin subspace,
which have a spin quantum number for the total z component m_s=-1/2
due to the positive electron g-factor of silicon. The relevant subspace
is spanned by the (1,1,1) states
|e_0⟩ ≡ |s⟩_13|↓⟩_2
= 1/√(2)(c_1↑^†c_2↓^†c_3↓^†-c_1↓^†c_2↓^†c_3↑^†)|vac⟩,
|g_0⟩ ≡ √(1/3)|t_0⟩_13|↓⟩_2-√(2/3)|t_-⟩_13|↑⟩_2
= 1/√(6)(c_1↑^†c_2↓^†c_3↓^†+c_1↓^†c_2↓^†c_3↑^†-2c_1↓^†c_2↑^†c_3↓^†)|vac⟩,
together with the (2,0,1) and (1,0,2)
states
|s_1,-1/2⟩ ≡ |s⟩_11|↓⟩_3=c_1↑^†c_1↓^†c_3↓^†|vac⟩,
|s_3,-1/2⟩ ≡ |↓⟩_1|s⟩_33=c_1↓^†c_3↑^†c_3↓^†|vac⟩,
where | vac⟩ denotes the vacuum. In the basis {|e_0⟩,|g_0⟩,|s_1,-1/2⟩,|s_3,-1/2⟩} ,
the Hubbard Hamiltonian matrix has a form identical to that given
in Eq. (S7) of Ref. <cit.> for the case m_s=1/2.
The resonant exchange (RX) qubit is defined within an effective (1,1,1)
subspace {|̃ẽ_̃0̃⟩̃,|̃g̃_̃0̃⟩̃}
obtained by perturbatively eliminating the (2,0,1) and
(1,0,2) states via a Schrieffer-Wolff transformation
<cit.>. The resulting effective Hamiltonian
can be written in the form
H_ eff^(3)=J/2σ̃^z-√(3)/2jσ̃^x,
where σ̃^z≡|̃ẽ_̃0̃⟩̃⟨̃ẽ_̃0̃|̃-|̃g̃_̃0̃⟩̃⟨̃g̃_̃0̃|̃,
J≡(J_l+J_r)/2, j≡(J_l-J_r)/2,
and the exchange between the center and left (right) dots is J_l=t_l^2/(Δ+ϵ)
[J_r=t_r^2/(Δ-ϵ)]. Here,
Δ is defined in terms of Hubbard model parameters and V_m
<cit.>. Diagonalizing H_ eff^(3)
yields H_0=ωσ^z/2 with σ^z≡|1⟩⟨ 1|-|0⟩⟨ 0|,
where the eigenstates |0⟩ and |1⟩ define the RX qubit
and ω≡√(J^2+3j^2) is the qubit energy splitting
(here, we set ħ=1). As the exchange interactions J_l and
J_r between dots are controlled using only voltages applied to
the triple quantum dot, the RX qubit is a spin qubit that can be fully
manipulated using electric fields alone <cit.>.
In addition to being fully controllable via dc electric field pulses,
the RX qubit couples directly to microwave photons by virtue of the
small admixture of the polarized charge states (2,0,1) and (1,0,2)
in the logical qubit states <cit.>. This
feature enables full resonant control of the RX qubit via electric
fields, in direct analogy to resonant control of individual electronic
and nuclear spins via magnetic fields in electron spin resonance (ESR)
and nuclear magnetic resonance (NMR). The same charge admixture also
enables coupling of the qubit to the electric field of photons in
a microwave cavity, with a strength characterized by the charge admixture
parameter ξ≡ t/Δ (here, t≡ t_l=t_r). The
parameter ξ is a measure of the electric dipole moment of the
qubit and is inversely proportional to Δ, which sets the width
of the (1,1,1) region and is tunable via V_m.
We can write the Hamiltonian for the RX qubit including electric dipole
coupling as H_ RX=H_0+H_ int^', where H_ int^'
is the dipole interaction in the RX qubit basis. The operation point
(ϵ_0,Δ_0) for the RX qubit determines
the qubit frequency ω and the specific form of the electric
dipole moment. Variations in both ϵ and Δ about
this operation point enable coupling to microwave cavity photons <cit.>
and are implemented via gate voltage control of the on-site energies
-ε_i. Here, we focus on electric dipole coupling for
small variations in the detuning ϵ <cit.>.
For this case, the electric dipole moment of the RX qubit is along
the triple dot axis and can be described in terms of the operator
d=ew_0/2(n_1-n_3), where e is the magnitude
of the electron charge and w_0 is the size of the triple dot
(i.e., the distance between the centers of the outer dots). The operation
point ϵ=ϵ_0 for the RX qubit is chosen such that
the coupling to variations in the detuning ϵ [see, e.g.,
Eq. (<ref>)] are perpendicular to the quantization
axis of the RX qubit. In the basis of the RX qubit states, the dipole
interaction of the triple dot with the fundamental cavity mode then
takes the form <cit.>
H_ int^'=gσ^x(a+a^†),
where the effective qubit-photon coupling strength is given by
g = (g_c/2)√((∂_ϵJ)^2+3(∂_ϵj)^2)|_ϵ=ϵ_0
and g_c is the charge-cavity coupling strength (an expression
for g_c is derived from a circuit model in Ref. <cit.>).
For the qubit operation point ϵ_0=0 chosen in this work,
t_l=t_r≡ t and g=√(3)ξ^2g_c/2. Thus, maximizing
ξ maximizes the coupling strength g.
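The dependence of the RX qubit frequency and coupling on the triple-dot parameters is summarized by the short sketch below (placeholder values of t, Δ, and g_c, not device parameters from this work), which also verifies the relation g=√(3)ξ^2g_c/2 at ϵ=0.

import numpy as np

def rx_qubit(eps, t, Delta, g_c):
    # RX qubit frequency and qubit-photon coupling from the exchange couplings
    # J_l = t^2/(Delta + eps) and J_r = t^2/(Delta - eps).
    Jl, Jr = t**2 / (Delta + eps), t**2 / (Delta - eps)
    J, jj = (Jl + Jr) / 2.0, (Jl - Jr) / 2.0
    omega = np.sqrt(J**2 + 3 * jj**2)
    dJl, dJr = -t**2 / (Delta + eps)**2, t**2 / (Delta - eps)**2
    dJ, dj = (dJl + dJr) / 2.0, (dJl - dJr) / 2.0
    g = 0.5 * g_c * np.sqrt(dJ**2 + 3 * dj**2)
    return omega, g

# Placeholder energies in 2*pi*GHz (not device values): t, Delta, and g_c.
t, Delta, g_c = 2*np.pi*2.0, 2*np.pi*20.0, 2*np.pi*0.05
omega, g = rx_qubit(0.0, t, Delta, g_c)
xi = t / Delta                                 # charge admixture parameter xi = t/Delta
print(omega / (2*np.pi), g / (2*np.pi))        # RX qubit frequency and g in GHz
print(np.isclose(g, np.sqrt(3) / 2 * xi**2 * g_c))   # True at eps = 0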
In the driven resonant regime described in Ref. <cit.>,
two RX qubits interact with microwave cavity photons, whose frequency
we here denote as ω_c, in the presence of an external driving
field of frequency ω^d≡ν applied to the cavity. A
displacement transformation in the regime of large driving field amplitude
and large cavity-drive detuning |ω_c-ω^d|≫ g,|ω-ω^d|
then effectively transfers the drive from the cavity to the qubit
and leads to Eq. (29) in Ref. <cit.> for each qubit.
We can write this Hamiltonian for two RX qubits as
H_p^(3) =ω_ca^†a+∑_jω_j/2σ_j^z+∑_jg_jσ_j^x(a+a^†)
+∑_j2Ω_jcos(ω_j^dt+ϕ_j)σ_j^x,
where we have redefined the phase ϕ_j to match the form of
H_p in Eq. (<ref>). We see that, like H_p^(1)
in Eq. (<ref>), H_p^(3) has a form identical
to H_p. We also note that this form for the Hamiltonian can be
obtained for cavity-coupled RX qubits without a displacement transformation
by directly driving the detuning of the RX qubits <cit.>
as described by Eq. (<ref>). Thus, the sideband-based
interactions and associated entangling gates that we derive from H_p
can be implemented using both RX qubits and one-electron spin qubits.
§ ONE-QUBIT SECOND-ORDER TERMS IN EFFECTIVE
HAMILTONIAN
In this appendix, we analyze in more detail the one-qubit terms appearing
at second order in the effective Hamiltonian H_ eff(τ)
[Eq. (<ref>)], which originate from the commutators
[V_I,V_I^']_1 and [V_I,V_I^']_2
defined in Eqs. (<ref>) and (<ref>). The one-qubit
terms arising from [V_I,V_I^']_1 are given
by
Λ_1≡ 1/2iτ∫_0^τdt∫_0^tdt^'[V_I,V_I]_1
=1/2iτ∫_0^τdt∫_0^tdt^'∑_j{[A_j,A_j^']a^†2+[A_j^†,A_j^'†]a^2.
+.([A_j,A_j^'†]+[A_j^†,A_j^'])a^†a}
and those arising from [V_I,V_I^']_2 are
Λ_2 ≡1/2iτ∫_0^τdt∫_0^tdt^'∑_j(A_j^†A_j^'-A_j^'†A_j),
where A_j≡ A_j(t) and A_j^'≡ A_j(t^').
We note that the cavity photon operators a,a^† appear in
each term of Λ_1 but are absent from Λ_2.
Substituting for A_j and A_j^' using Eq. (<ref>)
shows that evaluating the double integrals in Eqs. (<ref>)
and (<ref>) amounts to evaluating double integrals
of products of exponentials of the form
h(μ,μ^') ≡1/2iτ∫_0^τdt∫_0^tdt^'e^-iμ te^iμ^'t^'
= 0 for μ≠μ^' and -1/(2μ) for μ=μ^',
or its complex conjugate h^∗(μ,μ^')=-h(-μ,-μ^').
Here, μ,μ^'∈{±Δ_j,±(Δ_j+W_j),±(Δ_j-W_j)} .
Since we set Δ_j=p_jη and W_j=q_jη with integers
p_j and q_j, we have μ_j=r_jη and μ_j^'=r_j^'η
where r_j and r_j^' are also integers. Equation (<ref>)
then becomes
Λ_1 =∑_jg_j^2({sinθ_jsin^2(θ_j/2)[h(-Δ_j-W_j,Δ_j)...
-.h(-Δ_j,Δ_j+W_j)]σ_j^+
+sinθ_jcos^2(θ_j/2)[h(-Δ_j+W_j,Δ_j).
-.h(-Δ_j,Δ_j-W_j)]σ_j^-
-sin^2(θ_j/2)cos^2(θ_j/2)[h(-Δ_j-W_j,Δ_j-W_j).
-..h(-Δ_j+W_j,Δ_j+W_j)]σ_j^z} a^†2
+{[sinθ_jsin^2(θ_j/2)h(-Δ_j-W_j,-Δ_j)..
+.sinθ_jcos^2(θ_j/2)h(-Δ_j,-Δ_j+W_j)]σ_j^+
+[sinθ_jcos^2(θ_j/2)h(-Δ_j+W_j,-Δ_j).
+.sinθ_jsin^2(θ_j/2)h(-Δ_j,-Δ_j-W_j)]σ_j^-
+[sin^4(θ_j/2)h(-Δ_j-W_j,-Δ_j-W_j).
-..cos^4(θ_j/2)h(-Δ_j+W_j,-Δ_j+W_j)]σ_j^z} a^†a
+. H.c.)
and Eq. (<ref>) becomes
Λ_2 =-∑_jg_j^2/2{[sinθ_jsin^2(θ_j/2)h(Δ_j,Δ_j+W_j)..
+.sinθ_jcos^2(θ_j/2)h(Δ_j-W_j,Δ_j)]σ_+
+[sinθ_jsin^2(θ_j/2)h(Δ_j+W_j,Δ_j).
+.sinθ_jcos^2(θ_j/2)h(Δ_j,Δ_j-W_j)]σ_-
+[sin^4(θ_j/2)h(Δ_j+W_j,Δ_j+W_j).
-..cos^4(θ_j/2)h(Δ_j-W_j,Δ_j-W_j)]σ_z}
+. H.c.)
Using Eqs. (<ref>) and (<ref>) together with
Eq. (<ref>), we can obtain the resonance conditions μ=μ^'
under which h(μ,μ^')≠0 and particular
one-qubit terms appear (i.e., are nonzero) in the effective Hamiltonian
H_ eff(τ). Note that these conditions are
identical for h(μ,μ^') and h^∗(μ,μ^').
The one-qubit resonance conditions and corresponding forms of the
effective Hamiltonian terms are summarized in Table <ref>.
We see that the a^2 and a^†2 terms occur only for
W_j=|2Δ_j| or Δ_j=0. Assuming that
we choose an operation point such that these conditions are not satisfied,
the a^2 and a^†2 terms can be eliminated from the
Hamiltonian. Additionally, the case W_j=0 corresponds to a vanishing
gap for dressed qubit j and thus does not have well-defined physical
meaning. We can therefore also ignore the σ_j^+a^†a,
σ_j^-a^†a, σ_j^+, and σ_j^-
terms. As a result, the only remaining one-qubit terms appearing at
second order in the effective Hamiltonian are those of the form σ_j^za^†a
in Eq. (<ref>) and σ_j^z in Eq. (<ref>).
These terms exist for any values of W_j and Δ_j. Finally,
we note that Eq. (<ref>) is well defined only for μ,μ^'≠0,
which leads to the constraints Δ_j≠0 and W_j≠|Δ_j|.
The full one-qubit contribution to H_ eff(τ)
for Δ_j,W_j≠0 and W_j≠|Δ_j|,|2Δ_j|
is thus
Λ ≡Λ_1+Λ_2
=∑_jg_j^2[sin^4(θ_j/2)/Δ_j+W_j-cos^4(θ_j/2)/Δ_j-W_j]σ_j^z(a^†a+1/2)
=-∑_jg_j^2[δ_j^2+2Δ_jδ_j+W_j^2/2W_j(Δ_j^2-W_j^2)]σ_j^z(a^†a+1/2),
where we have used cosθ_j=δ_j/W_j [see Eq. (<ref>)].
Equation (<ref>) represents the dispersive shift terms
induced by the parametric driving <cit.>.
§ QUBIT-QUBIT INTERACTION TERMS IN EFFECTIVE
HAMILTONIAN
The qubit-qubit interaction terms in H_ eff(τ)
[Eq. (<ref>)], and therefore the generated two-qubit
gates U(τ)≈ e^-iH_ effτ, are given
by Eq. (<ref>). Here, we derive the form of V_qq(τ)
for each of the nine resonance conditions μ_1=μ_2, where
μ_1∈{Δ_1,Δ_1+W_1,Δ_1-W_1}
and μ_2∈{Δ_2,Δ_2+W_2,Δ_2-W_2} .
As noted in the main text, these resonance conditions are obtained
by substituting Eq. (<ref>) into V_qq(τ),
which yields terms with frequencies μ_1-μ_2, and subsequently
applying Eq. (<ref>) to identify nonzero terms. In what
follows, we use the abbreviated notation Δ_j^±≡Δ_j± W_j
and refer to the resonance conditions by the numbers given in Table
<ref>.
First, we consider the resonance condition Δ_1=Δ_2
(resonance condition 1), which is equivalent to ω_1^d=ω_2^d
and therefore describes the driving of both qubits at the same frequency.
In order to select a specific form for the two-qubit interaction,
we also assume that the constraints W_1≠± W_2 are satisfied
so that terms in V_qq(τ) with frequencies μ_1-μ_2≠Δ_1-Δ_2
remain nonzero and vanish according to Eq. (<ref>). In
terms of the resonant detuning Δ≡Δ_1=Δ_2,
we find that Eq. (<ref>) becomes
V_qq(τ) =g_1g_2/4[2h(Δ,Δ)-2h(-Δ,-Δ)]
×sinθ_1sinθ_2σ_1^zσ_2^z
=-g_1g_2/2Δsinθ_1sinθ_2σ_1^zσ_2^z
=-2𝒥sinθ_1sinθ_2σ_1^zσ_2^z,
where we have defined 𝒥≡ g_1g_2/4Δ. For
resonant qubit driving such that δ_1=δ_2=0, W_1=2Ω_1,
W_2=2Ω_2, and sinθ_1=sinθ_2=1. In this
case, Δ_1=Δ_2 corresponds to ω_1=ω_2.
In other words, since the detunings of both qubit frequencies from
the cavity frequency ω_c are aligned, the center frequencies
of the driven qubits are also resonant. Equation (<ref>)
then reduces to
V_qq(τ)=-2𝒥σ_1^zσ_2^z,
which can be used to generate a controlled-phase gate (see Appendix
<ref>).
In a similar way, we can derive the form of the two-qubit interaction
and corresponding constraints for each of the remaining resonance
conditions. Resonance condition 2 (3), given by Δ_1=Δ_2^+
(Δ_1=Δ_2^-), is equivalent to ω_1^d=ω_2^d-W_2
(ω_1^d=ω_2^d+W_2). For Δ≡Δ_1=Δ_2^±,
we find
V_qq(τ) =∓g_1g_2/4[h(Δ,Δ)-h(-Δ,-Δ)]
×sinθ_1(1∓cosθ_2)σ_1^zσ_2^x
=±𝒥sinθ_1(1∓cosθ_2)σ_1^zσ_2^x,
with the same set of constraints W_1≠± W_2,±2W_2 for
each resonance condition. In the case of resonant qubit driving, the
resonance condition Δ_1=Δ_2^+ (Δ_1=Δ_2^-)
corresponds to ω_1=ω_2-2Ω_2 (ω_1=ω_2+2Ω_2)
such that the center frequency of qubit 1 is aligned with the red
(blue) sideband of qubit 2. In this case, sinθ_1=1 and
cosθ_2=0, which reduces Eq. (<ref>) to
V_qq(τ)=±𝒥σ_1^zσ_2^x
for Δ_1=Δ_2^±.
The resonance conditions Δ_1^+=Δ_2 and Δ_1^-=Δ_2
(resonance conditions 4 and 5) describe cases in which the roles of
qubits 1 and 2 are reversed relative to Δ_1=Δ_2^+
and Δ_1=Δ_2^-, respectively. Accordingly, Δ_1^+=Δ_2
(Δ_1^-=Δ_2) is equivalent to ω_1^d-W_1=ω_2^d
(ω_1^d+W_1=ω_2^d) and the qubit-qubit interaction
takes the form
V_qq(τ) =±𝒥(1∓cosθ_1)sinθ_2σ_1^xσ_2^z
for Δ≡Δ_1^±=Δ_2 and constraints such
that W_1↔ W_2 relative to the constraints for
resonance conditions 2 and 3. In the case of resonantly driven qubits,
Δ_1^+=Δ_2 (Δ_1^-=Δ_2) corresponds
to ω_1-2Ω_1=ω_2 (ω_1+2Ω_1=ω_2)
so that the center frequency of qubit 2 is aligned with the red (blue)
sideband of qubit 1, and Eq. (<ref>) becomes
V_qq(τ)=±𝒥σ_1^xσ_2^z
for Δ_1^±=Δ_2.
We next consider the resonance conditions Δ_1^+=Δ_2^+
and Δ_1^-=Δ_2^- (resonance conditions 6 and 7),
which are equivalent to ω_1^d-W_1=ω_2^d-W_2
and ω_1^d+W_1=ω_2^d+W_2, respectively. The
form of the effective qubit-qubit interaction for Δ≡Δ_1^±=Δ_2^±
is
V_qq(τ) =-𝒥(1∓cosθ_1)(1∓cosθ_2)
×(σ_1^+σ_2^-+σ_1^-σ_2^+),
with an identical set of constraints W_1≠ W_2,2W_2,W_2/2
for each resonance condition. For resonant driving of both qubits,
Δ_1^+=Δ_2^+ (Δ_1^-=Δ_2^-)
corresponds to ω_1-2Ω_1=ω_2-2Ω_2 (ω_1+2Ω_1=ω_2+2Ω_2),
such that the red (blue) sidebands of both qubits are in resonance,
and Eq. (<ref>) becomes
V_qq(τ)=-𝒥(σ_1^+σ_2^-+σ_1^-σ_2^+)
for both Δ=Δ_1^+=Δ_2^+ and Δ=Δ_1^-=Δ_2^-.
Finally, the resonance conditions Δ_1^+=Δ_2^-
and Δ_1^-=Δ_2^+ (resonance conditions 8 and 9)
are equivalent to ω_1^d-W_1=ω_2^d+W_2 and
ω_1^d+W_1=ω_2^d-W_2, respectively. The effective
qubit-qubit interaction for Δ≡Δ_1^±=Δ_2^∓
takes the form
V_qq(τ) =𝒥(1∓cosθ_1)(1±cosθ_2)
×(σ_1^+σ_2^++σ_1^-σ_2^-),
again with an identical set of constraints W_1≠-W_2,-2W_2,-W_2/2
for each resonance condition. Note that, for the physically relevant
case W_1,2>0, these constraints are always satisfied. For resonant
driving of both qubits, Δ_1^+=Δ_2^- (Δ_1^-=Δ_2^+)
corresponds to ω_1-2Ω_1=ω_2+2Ω_2 (ω_1+2Ω_1=ω_2-2Ω_2),
such that the red (blue) sideband of qubit 1 is in resonance with
the blue (red) sideband of qubit 2. Equation (<ref>) becomes
V_qq(τ)=𝒥(σ_1^+σ_2^++σ_1^-σ_2^-)
for both Δ=Δ_1^+=Δ_2^- and Δ=Δ_1^-=Δ_2^+.
The above nine resonance conditions, their associated constraints,
and the corresponding forms of the effective qubit-qubit interaction
V_qq(τ) for resonant qubit driving are summarized
in Table <ref> of the main text.
§ ELIMINATION OF DRIVE-INDUCED DISPERSIVE
SHIFT DYNAMICS FROM EFFECTIVE HAMILTONIAN
As shown in Appendix <ref>, the parametric drive-induced
dispersive shift terms Λ [Eq. (<ref>)]
in general exist for any W_j and Δ_j (provided W_j≠|Δ_j|).
Nevertheless, it is possible to effectively eliminate the dynamics
generated by Λ in specific cases by appropriately choosing
parameters and operation times such that the gate in the presence
of Λ is equivalent to that for Λ=0. Here, we first
illustrate this method for the resonant qubit driving case δ_j=0
and the resonance conditions 7 and 9 in Table <ref>
and Fig. <ref>. We then consider the effects of Λ
for additional resonance conditions, as well as the off-resonant qubit
driving case δ_j≠0 for which we can choose parameters
such that Λ=0.
We first consider resonant qubit driving, described by δ_j=0
such that W_j=2Ω_j, Λ=Λ_r,
and χ_j≠0. From Eqs. (<ref>), (<ref>),
and (<ref>), the effective Hamiltonian for Δ_j=p_jη
and W_j=q_jη with p_j,q_j integers, Δ_j,W_j≠0,
and W_j≠|Δ_j|,|2Δ_j| is
given by
H_ eff(τ) =λ^2H̅_2(τ)
=V_qq(τ)+Λ_r.
As discussed in the main text, the desired ideal two-qubit evolution
is generated by the qubit-qubit interaction term V_qq according
to U_m≡ e^-iV_qqτ_m, where τ_m≡ mτ=2π m/η
is the corresponding gate time. We now also define an evolution operator
with the dispersive shift terms included as U_m^'≡ e^-iH_ effτ_m,
where H_ eff is given by Eq. (<ref>).
From Eq. (<ref>), we note that V_qq is independent of
photon operators a,a^†. We also see from Eq. (<ref>)
that Λ_r is diagonal in the photon
number n. We can therefore write
Λ_r=∑_nΛ_n,
where Λ_n≡ P_nΛ_rP_n=⟨n|Λ_r|n⟩P_n
is the operator describing the dispersive shift terms within the n-photon
subspace and P_n≡|n⟩⟨n| is the corresponding subspace
projector, and work in a subspace of fixed n. Noting that V_qq(τ)
consists only of qubit operators [see Eq. (<ref>)], we
find H_ eff^(n)≡ P_nH_ effP_n=P_n(V_qq+Λ_r)P_n=V_qq+Λ_n,
where we now use V_qq to denote the n-photon subspace operator.
In the basis {|ee,n⟩,|eg,n⟩,|ge,n⟩,|gg,n⟩} ,
Λ_n takes the form
Λ_n =(n+1/2)
× diag(χ_1+χ_2, χ_1-χ_2, -χ_1+χ_2, -χ_1-χ_2),
while the form of V_qq depends on the particular resonance condition
according to Table <ref>. The evolution with the
dispersive shift terms included is described by P_nU_m^'P_n=e^-iH_ eff^(n)τ_m=e^-i(V_qq+Λ_n)τ_m.
As in the main text, we use U_m and U_m^' to denote
the evolution operators within the n-photon two-qubit subspace
and suppress the photon number state |n⟩ in what follows.
We first consider resonance condition 7, which is defined by Δ≡Δ_1^-=Δ_2^-.
In the main text, we noted that the ideal evolution within the n-photon
subspace can be written as U_m=e^i𝒥τ_mΣ_x,
where Σ_x≡|eg⟩⟨ge|+|ge⟩⟨eg|. The ideal
operation U_m therefore acts nontrivially only within the two-dimensional
subspace {|eg⟩,|ge⟩} , and Σ_x
has an action analogous to the Pauli operator σ_x in this
subspace.
From Eq. (<ref>) and the form of V_qq for resonance
condition 7 [Eq. (<ref>)], we find that H_ eff^(n)
is block-diagonal. In this case, we can write H_ eff^(n)=PH_ eff^(n)P+QH_ eff^(n)Q,
where we have defined the two-qubit subspace projectors
P ≡|ee⟩⟨ee|+|gg⟩⟨gg|,
Q ≡|eg⟩⟨eg|+|ge⟩⟨ge|.
We can then characterize the effect of
Λ_n on the dynamics for a given initial state |ψ_i⟩ via a
fidelity
F_s ≡|⟨ψ_i|U_m^†U_m^'|ψ_i⟩|^2
=|⟨ψ_i|P(PU_m^†P)(PU_m^'P)P|ψ_i⟩
+⟨ψ_i|Q(QU_m^†Q)(QU_m^'Q)Q|ψ_i⟩|^2,
where we have used U_m=PU_mP+QU_mQ and U_m^'=PU_m^'P+QU_m^'Q.
For resonance condition 7, we find
PH_ eff^(n)P =(n+1/2)(χ_1+χ_2)(|ee⟩⟨ee|-|gg⟩⟨gg|),
QH_ eff^(n)Q =(n+1/2)(χ_1-χ_2)Σ_z-𝒥Σ_x,
where we have defined the operator Σ_z≡|eg⟩⟨eg|-|ge⟩⟨ge|
in analogy to the Pauli operator σ_z. For an initial state
|ψ_i⟩ in the subspace associated with Q, evolution
for Λ_n≠0 remains within this space such that U_m^'|ψ_i⟩=QU_m^'Q|ψ_i⟩.
In the main text, we consider the initial state |ψ_i⟩=|eg⟩.
For the i SWAP gate U_i SW, which is equivalent
to U_m at time τ_m=π/2𝒥 [see Eq. (<ref>)],
the ideal final state is |ψ_f⟩=U_i SW|ψ_i⟩=U_i SW|eg⟩=i|ge⟩.
We now set n=0 for simplicity [in what follows, generalization
of the expressions to any n using Eq. (<ref>) is straightforward
and amounts to the replacements (χ_1±χ_2)/2→(n+1/2)(χ_1±χ_2)].
In the presence of Λ_0 and defining the unit vector 𝐮̂≡[(χ_1-χ_2)ẑ-2𝒥x̂]/Ω̃
with Ω̃≡√((χ_1-χ_2)^2+4𝒥^2),
the action of the gate is modified to
QU_m^'Q|ψ_i⟩ =e^-iΩ̃τ_m/2𝐮̂·Σ|eg⟩
=[cos(Ω̃τ_m/2)
-isin(Ω̃τ_m/2)(χ_1-χ_2/Ω̃)]|eg⟩
+isin(Ω̃τ_m/2)(2𝒥/Ω̃)|ge⟩.
Using Eqs. (<ref>) and (<ref>) along with PU_mP|ψ_i⟩=PU_m^'P|ψ_i⟩=0,
we find
F_s =|⟨ψ_i|Q(QU_m^†Q)(QU_m^'Q)Q|ψ_i⟩|^2
=4𝒥^2/Ω̃^2sin^2(Ω̃τ_m/2),
which oscillates with the modified frequency Ω̃ instead
of 𝒥 in the ideal case. Nevertheless, we can recover
the ideal dynamics and obtain F_s=1 for the i SWAP
gate time τ_m=π/2𝒥 by choosing parameters for
which χ_1=χ_2, such that QH_ eff^(n)Q=V_qq
and QU_m^'Q=QU_mQ. The constraints that must be satisfied
by the parameters for resonance condition 7 (in addition to those
listed in Table <ref>) are therefore (a) τ_m=π/2𝒥,
(b) χ_1=χ_2, and (c) w≡ p_1-q_1=p_2-q_2
from the resonance condition itself. From constraint (a) and using
Δ=wη, we find τ_m≡2π m/η=π/2𝒥=2π wη/g_1g_2,
or
g_1g_2=w/mη^2,
while constraint (b) becomes, using Eq. (<ref>) with
Δ_j=p_jη, W_j=2Ω_j=q_jη, and Δ=Δ_1^-=Δ_2^-=wη
and incorporating constraint (c),
g_2^2/g_1^2=2+w/q_2/2+w/q_1.
We also note that q_1,q_2,p_1,p_2,w, and m must all
be integers.
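As an illustration of how such a parameter set can be identified (the integers and the value of η below are placeholders chosen for this sketch, not the set of Table <ref>), the following Python fragment enforces constraints (a)-(c) together with the constraints of Table <ref> and checks that the resulting gate time indeed satisfies τ_m=π/2𝒥:

import numpy as np

eta = 2 * np.pi * 100e6            # assumed modulation frequency eta (rad/s)
q1, q2, p1, p2, m = 3, 5, 7, 9, 4  # trial integers (illustrative only)

w = p1 - q1
assert w == p2 - q2, "constraint (c): w = p1 - q1 = p2 - q2"
assert q1 != q2 and q1 != 2 * q2 and 2 * q1 != q2, "W_1 != W_2, 2W_2, W_2/2"

# constraint (a): g1*g2 = (w/m)*eta^2 ; constraint (b): g2^2/g1^2 = (2+w/q2)/(2+w/q1)
ratio = np.sqrt((2 + w / q2) / (2 + w / q1))   # g2/g1 implied by constraint (b)
g1 = np.sqrt((w / m) * eta**2 / ratio)
g2 = ratio * g1

J = g1 * g2 / (4 * w * eta)        # J = g1*g2/(4*Delta) with Delta = w*eta
tau_m = 2 * np.pi * m / eta        # gate time tau_m = 2*pi*m/eta
print(np.isclose(tau_m, np.pi / (2 * J)))   # True: constraint (a) is satisfied

# with chi_1 = chi_2 enforced by constraint (b), Omega_tilde = 2J, so
F_s = np.sin(J * tau_m) ** 2       # = sin^2(pi/2) = 1
print(g1 / (2 * np.pi), g2 / (2 * np.pi), F_s)

The explicit expression for χ_j is not re-derived in this sketch; only the consequence of constraint (b), χ_1=χ_2, is used to evaluate F_s.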
There are multiple sets of parameter values that satisfy these constraints,
and we choose one set (given in Table <ref>) for the
gate fidelity analysis described in this work. To show that this approach
effectively restores the ideal i SWAP evolution in
the absence of the dispersive shift terms, we choose n=0 and compare
the ideal state at time τ_m for Λ_r=0
with the numerical solution to the time-dependent Schrödinger equation
for the density matrix in the interaction picture, ρ̇_I=-i[V_I,ρ_I],
for V_I given by Eq. (<ref>), the initial state ρ_I(0)=|ψ_i⟩⟨ψ_i|=|eg,0⟩⟨eg,0|,
and the chosen set of parameters. For the numerical calculations,
we work in the photon subspace with n=0,1,2. The ideal final state
is given by
ρ_I^f(τ_m) =|ψ_f⟩⟨ψ_f|
=U_i SWρ_I(0)U_i SW^†
=|ge,0⟩⟨ge,0|.
For the comparison, we calculate the fidelity
F_0(τ_m)≡ Tr[ρ_I^f(τ_m)ρ_I^(0)(τ_m)],
where ρ_I^(0)(τ_m) denotes
the numerical solution in the absence of decay, i.e., for γ_j=κ=0
in the master equation of Eq. (<ref>). For the chosen
parameters, we find F_0≈0.998, confirming that setting
χ_1=χ_2 effectively eliminates the modification to the
dynamics induced by the dispersive shift terms
and restores the evolution expected for the i SWAP
gate.
We now apply this approach to resonance condition 9, which from Table
<ref> is given by Δ≡Δ_1^-=Δ_2^+.
From the form of V_qq [Eq. (<ref>)], the effective
Hamiltonian for this case is again block-diagonal and can be written
as H_ eff^(n)=PH_ eff^(n)P+QH_ eff^(n)Q.
As described in the main text, ideal evolution within the n-photon
subspace can be expressed as U_m=e^-i𝒥τ_mΣ_x^',
where Σ_x^'≡|ee⟩⟨gg|+|gg⟩⟨ee|.
The ideal evolution in this case occurs nontrivially only within the
two-dimensional subspace associated with P in Eq. (<ref>),
and we find
PH_ eff^(n)P =(n+1/2)(χ_1+χ_2)Σ_z^'+𝒥Σ_x^',
QH_ eff^(n)Q =(n+1/2)(χ_1-χ_2)Σ_z,
where Σ_z^'≡|ee⟩⟨ee|-|gg⟩⟨gg|.
If we now assume that the initial state |ψ_i⟩ lies within
the subspace associated with P, evolution for Λ_n≠0
remains within this space such that U_m^'|ψ_i⟩=PU_m^'P|ψ_i⟩.
Here, we consider the initial state |ψ_i⟩=|ee⟩. For
the double-excitation gate U_i DE described in the main
text, which is equivalent to U_m at time τ_m=-π/2𝒥
[see Eq. (<ref>)] the ideal final state is |ψ_f⟩=U_i DE|ψ_i⟩=U_i DE|ee⟩=i|gg⟩.
Again, we set n=0 for simplicity [with straightforward generalization
of expressions to any n via (χ_1±χ_2)/2→(n+1/2)(χ_1±χ_2)].
In terms of 𝐮̂^'≡[(χ_1+χ_2)ẑ+2𝒥x̂]/Ω^'
with Ω^'≡√((χ_1+χ_2)^2+4𝒥^2),
the modified action of the gate in the presence of Λ_r
is given by
PU_m^'P|ψ_i⟩ =e^-iΩ^'τ_m/2𝐮̂^'·Σ^'|ee⟩
=[cos(Ω^'τ_m/2)
-isin(Ω^'τ_m/2)(χ_1+χ_2/Ω^')]|ee⟩
-isin(Ω^'τ_m/2)(2𝒥/Ω^')|gg⟩.
Since QU_mQ|ψ_i⟩=QU_m^'Q|ψ_i⟩=0, Eq. (<ref>)
then reduces to
F_s =|⟨ψ_i|P(PU_m^†P)(PU_m^'P)P|ψ_i⟩|^2
=4𝒥^2/Ω^'^2sin^2(Ω^'τ_m/2).
F_s now oscillates with the modified frequency Ω^'
instead of 𝒥 in the ideal case. We can again recover
the ideal dynamics and obtain F_s=1 for the gate time τ_m=-π/2𝒥
by choosing parameters such that χ_1=-χ_2, which yields
PH_ eff^(n)P=V_qq and PU_m^'P=PU_mP.
Thus, the constraints that must be satisfied by the parameters for
resonance condition 9 (in addition to those listed in Table <ref>)
are (a) τ_m=-π/2𝒥, (b) χ_1=-χ_2,
and (c) w≡ p_1-q_1=p_2+q_2 from the resonance condition.
From constraint (a) and using Δ=wη, we find τ_m≡2π m/η=-π/2𝒥=-2π wη/g_1g_2,
which gives
g_1g_2=-w/mη^2.
On the other hand, constraint (b) with constraint (c) incorporated
becomes [again using Eq. (<ref>) with Δ_j=p_jη,
W_j=2Ω_j=q_jη, and Δ=Δ_1^-=Δ_2^+=wη],
g_2^2/g_1^2=2-w/q_2/2+w/q_1.
As before, q_1,q_2,p_1,p_2,w, and m must all be integers.
There are once again multiple sets of parameter values that satisfy
these constraints, and we choose one set (given in Table <ref>)
for the analysis described in this work. We now show that the ideal
evolution described by U_i DE in the absence of the dispersive
shift terms is restored for χ_1=-χ_2 and the chosen parameters.
Working in the photon subspace with n=0,1,2 and numerically solving
ρ̇_I=-i[V_I,ρ_I] for resonance condition
9 with the initial state ρ_I(0)=|ψ_i⟩⟨ψ_i|=|ee,0⟩⟨ee,0|,
we compare the γ_j=κ=0 solution ρ_I^(0)(τ_m)
at time τ_m with the ideal (Λ_r=0)
final state
ρ_I^f(τ_m) =|ψ_f⟩⟨ψ_f|
=U_i DEρ_I(0)U_i DE^†
=|gg,0⟩⟨gg,0|
via the fidelity in Eq. (<ref>). We find F_0≈0.998,
showing that setting χ_1=-χ_2 effectively eliminates
the modification to the dynamics induced by the dispersive shift terms
and restores the expected gate evolution generated by V_qq for
resonance condition 9. The small error 1-F_0 found for both resonance
conditions 7 and 9 also serves as a measure of the validity of the
model that we develop in this work.
By choosing parameters that satisfy additional constraints, a similar
approach can be used to eliminate dispersive shift dynamics for general
initial states |ψ_i⟩ with arbitrary photon number n,
as well as for other resonance conditions and the corresponding interactions
and two-qubit gates listed in Table <ref>. For a
general initial state
|ψ_i⟩ =(c_ee|ee,n⟩+c_eg|eg,n⟩
+c_ge|ge,n⟩+c_gg|gg,n⟩),
both the subspaces associated with P and Q must be taken into
account. For resonance condition 7, we find
PU_m^'P|ψ_i⟩ =Pe^-iΛ_rτ_mP|ψ_i⟩
=[e^-i(n+1/2)(χ_1+χ_2)τ_m|ee,n⟩⟨ee,n|
+e^i(n+1/2)(χ_1+χ_2)τ_m|gg,n⟩⟨gg,n|]|ψ_i⟩
and PU_mP|ψ_i⟩=P|ψ_i⟩, while for resonance
condition 9,
QU_m^'Q|ψ_i⟩ =Qe^-iΛ_rτ_mQ|ψ_i⟩
=[e^-i(n+1/2)(χ_1-χ_2)τ_m|eg,n⟩⟨eg,n|
+e^i(n+1/2)(χ_1-χ_2)τ_m|ge,n⟩⟨ge,n|]|ψ_i⟩
and QU_mQ|ψ_i⟩=Q|ψ_i⟩. From the forms of Eqs. (<ref>)
and (<ref>), we see that setting χ_j=2π k_j
with k_j an integer for j=1,2, along with k_1=k_2 (k_1=-k_2)
for resonance condition 7 (9) to satisfy χ_1=χ_2 (χ_1=-χ_2)
eliminates the dynamical phases due to Λ_r
and restores the ideal evolution due to V_qq within a subspace
of fixed n, or for an integer average photon number ⟨ a^†a⟩ .
We now consider the other resonance conditions in Table <ref>.
Resonance condition 1 is given by Δ≡Δ_1=Δ_2
and (assuming the associated constraints listed in the table are satisfied
by W_1 and W_2) yields V_qq=-2𝒥σ_1^zσ_2^z.
This interaction is equivalent to that which generates a Mølmer-Sørensen
gate in the original qubit basis <cit.> and can be used
to construct a controlled-phase gate in the dressed qubit basis considered
here. Using U_m=e^-iV_qqτ_m=e^i2𝒥τ_mσ_1^zσ_2^z,
a sequence for a controlled-phase gate is given by
U_φ =e^i2𝒥τ_me^-i2𝒥τ_mσ_1^ze^-i2𝒥τ_mσ_2^zU_m
=diag(1, 1, 1, e^iφ),
where φ≡8𝒥τ_m, which shows that U_m
for Δ≡Δ_1=Δ_2 is equivalent to a controlled-phase
gate up to single-qubit rotations. For φ=(2l+1)π
or τ_m=(2l+1)π/8𝒥 with l an integer,
U_φ represents a controlled-Z gate U_CZ.
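As a quick numerical check of this construction (not part of the original derivation), the short Python sketch below builds U_m and the single-qubit rotations and confirms that the sequence yields diag(1, 1, 1, e^iφ); it assumes the convention σ^z|e⟩=+|e⟩ and the basis ordering {|ee⟩,|eg⟩,|ge⟩,|gg⟩}:

import numpy as np
from scipy.linalg import expm

J, tau_m = 1.0, np.pi / 8                  # arbitrary units; phi = 8*J*tau_m = pi gives a CZ gate
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
sz1, sz2 = np.kron(sz, I2), np.kron(I2, sz)

U_m = expm(1j * 2 * J * tau_m * np.kron(sz, sz))   # U_m = exp(-i V_qq tau_m) with V_qq = -2J sz1 sz2
U_phi = (np.exp(1j * 2 * J * tau_m)
         * expm(-1j * 2 * J * tau_m * sz1)
         @ expm(-1j * 2 * J * tau_m * sz2)
         @ U_m)
print(np.round(U_phi, 6))                  # diag(1, 1, 1, -1) for phi = pi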
To determine conditions for eliminating the dispersive shift dynamics
in the case of resonance condition 1, we can compare the action of
U_m and U_m^'=e^-i(V_qq+Λ_r)τ_m=e^-iΛ_rτ_mU_m,
both of which now act in the full two-qubit space, on a general state
|ψ_i⟩ in this space [Eq. (<ref>)] via
F_s =|⟨ψ_i|U_m^†U_m^'|ψ_i⟩|^2
=|⟨ψ_i|e^-iΛ_rτ_m|ψ_i⟩|^2
=||c_ee|^2e^-i(n+1/2)(χ_1+χ_2)τ_m
+|c_eg|^2e^-i(n+1/2)(χ_1-χ_2)τ_m
+|c_ge|^2e^i(n+1/2)(χ_1-χ_2)τ_m
+|c_gg|^2e^i(n+1/2)(χ_1+χ_2)τ_m|^2.
In this case, we can in principle recover the ideal dynamics and obtain
F_s=1 for any state |ψ_i⟩ by choosing parameters
that satisfy χ_j=2π k_j with k_j an integer for j=1,2,
and that also satisfy τ_m=-φ/8𝒥, p_1=p_2
from the resonance condition, and q_1≠± q_2 from Table
<ref> to obtain a controlled-phase gate U_φ
according to the construction in Eq. (<ref>). For U_CZ,
τ_m=(2l+1)π/8𝒥, and we can proceed
to identify potential parameter sets by choosing, e.g., l=0 and
setting τ_m≡2π m/η=-π/8𝒥 in analogy
to the analysis described above for resonance conditions 7 and 9.
The remaining physically distinct case in Table <ref>
is that of resonance condition 4, given by Δ≡Δ_1^+=Δ_2
and corresponding to V_qq=𝒥σ_1^xσ_2^z
when W_1 and W_2 satisfy the listed constraints. This interaction
can be used to construct a controlled-NOT (CNOT) gate <cit.>.
We note that V_qq, and therefore U_m and U_m^',
are block-diagonal with the associated subspace projectors P^'=|ee⟩⟨ee|+|ge⟩⟨ge|
and Q^'=|eg⟩⟨eg|+|gg⟩⟨gg|. If we choose
|ψ_i⟩=|eg⟩ such that P^'U_mP^'|ψ_i⟩=P^'U_m^'P^'|ψ_i⟩=0,
we find from a calculation similar to that in Eqs. (<ref>)
and (<ref>),
F_s =|⟨ψ_i|U_m^†U_m^'|ψ_i⟩|^2
=4𝒥^2/Ω̅^2sin^2(Ω̅τ_m/2)
with Ω̅≡√(χ_1^2+4𝒥^2). In this
case, recovering the dynamics for Λ_r=0
requires χ_1=0, which is not possible for δ_j=0
and g_1,2Ω_1≠0 [see Eq. (<ref>)].
Similar results hold for other initial states |ψ_i⟩. While
F_s=1 is not achievable exactly for resonance condition 4 when
δ_j=0, choosing appropriate parameters to minimize χ_j
in principle allows F_s to be made arbitrarily close to unity,
enabling the dynamics for Λ_r=0 and
thus the gate generated by V_qq=𝒥σ_1^xσ_2^z
to be well approximated. The dispersive shift dynamics can also be
eliminated in this case for δ_j≠0, as we describe below.
The other resonance conditions in Table <ref> lead
to effective interactions V_qq that are either identical in form
to those considered above or have the roles of qubits 1 and 2 reversed.
Finally, we consider the conditions under which the one-qubit contribution
to H_ eff(τ), including the dispersive shift
terms Λ, can be made to vanish for each qubit individually.
In this case, H_ eff(τ)=V_qq(τ)
consists solely of qubit-qubit interaction terms. Setting Λ=0
in Eq. (<ref>) yields the condition δ_j^2+2Δ_jδ_j+W_j^2=0,
or equivalently δ_j^2+2p_jηδ_j+q_j^2η^2=0.
We note that for resonant driving, described by δ_j=0, the
one-qubit contribution Λ≠0 unless W_j=0, which is
not physically meaningful (see Appendix <ref>).
Thus, provided the drive frequency is not on resonance with the qubit
frequency (ω_j^d≠ω_j), the dispersive
shift terms can be eliminated individually for each qubit by an appropriate
choice of parameters.
|
http://arxiv.org/abs/2307.03911v1 | 20230708060513 | A Novel Pseudo-Random Number Generator Based on Multi-Objective Optimization for Image-Cryptographic Applications | [
"Takreem Haider",
"Saúl A. Blanco",
"Umar Hayat"
] | cs.CR | [
"cs.CR",
"cs.IT",
"math.IT"
] |
aa]Takreem Haiderbb
[email protected]
aa]Saúl A. Blanco
[email protected]
ab]Umar Hayatbb
[email protected]
[bb]Corresponding author
[aa]Department of Computer Science, Indiana University Bloomington, IN 47408, USA
[ab]Department of Computer Science, University of Surrey, Guildford, Surrey, GU2 7XH, UK
Pseudo-random number generators (PRNGs) play an important role to ensure the security and confidentiality of image cryptographic algorithms. Their primary function is to generate a sequence of numbers that possesses unpredictability and randomness, which is crucial for the algorithms to work effectively and provide the desired level of security.
However, traditional PRNGs frequently encounter limitations like insufficient randomness, predictability, and vulnerability to cryptanalysis attacks.
To overcome these limitations, we propose a novel method namely an elliptic curve genetic algorithm (ECGA) for the construction of an image-dependent pseudo-random number generator (IDPRNG) that merges elliptic curves (ECs) and a multi-objective genetic algorithm (MOGA).
The ECGA consists of two primary stages. First, we generate an EC-based initial sequence of random numbers using pixels of a plain-image and parameters of an EC, that depart from traditional methods of population initialization.
In our proposed approach, the image itself serves as the seed for the initial population in the genetic algorithm optimization, taking into account the image-dependent nature of cryptographic applications. This allows the PRNG to adapt its behavior to the unique characteristics of the input image, leading to enhanced security and improved resistance against differential attacks.
Furthermore, the use of a good initial population reduces the number of generations required by a genetic algorithm, which results in decreased computational cost.
In the second stage, we use well-known operations of a genetic algorithm to optimize the generated sequence by maximizing a multi-objective fitness function that is based on both the information entropy and the period of the PRNG.
By combining elliptic curves and genetic algorithms, we enhance the randomness and security of the ECGA.
To evaluate the effectiveness and security of our generator, we conducted comprehensive experiments using various benchmark images and applied several standard tests, including the National Institute of Standards and Technology (NIST) test suite.
We then compared the results with the state-of-the-art PRNGs.
The experimental results demonstrate that the ECGA outperforms the state-of-the-art PRNGs in terms of uniformity, randomness, and cryptographic strength.
Pseudo-random number generator Elliptic curve Genetic algorithm Multi-objective optimization
§ INTRODUCTION
Pseudo-random number generators (PRNGs) are extensively used in numerous fields, such as statistics, computer science, cryptography, and gaming <cit.>. PRNGs generate a sequence of numbers that appear random but are produced from a predetermined starting point using a mathematical formula. The quality of a PRNG is of utmost importance in the field of cryptography as the security of a cryptographic system depends on the randomness and unpredictability of the keys generated.
A good PRNG should possess the following key features to ensure the quality and security of the generated random
numbers <cit.>.
1)
Randomness: To achieve the quality of a reliable PRNG, the generated number sequence must exhibit no distinguishable characteristics when compared to a truly random sequence. The generated output should be devoid of any identifiable patterns, and each number in the sequence should not correlate with the numbers that precede or follow it.
2)
Unpredictability: The PRNG needs to possess resistance against attacks aimed at forecasting future outputs by analyzing past outputs. To achieve this, the PRNG must possess a substantial internal state and employ a robust cryptographic algorithm.
3)
Periodicity: Each PRNG has a specific point at which the sequence it produces starts repeating. A PRNG is considered to be of high quality if its period is significantly longer, meaning it is approximately equal to the total number of possible outputs.
4)
Security: The primary focus of the PRNG should be on maintaining strong security measures and guaranteeing resilience against commonly recognized forms of attacks, such as brute-force attacks.
5)
Efficiency: The PRNG should possess efficiency in terms of both computational speed and memory consumption, particularly for tasks that involve generating a substantial number of random values, such as the creation of cryptographic keys.
Designing a PRNG with optimal randomness is a challenging task that requires balancing many different factors <cit.>. The quality of random numbers generated by a PRNG is crucial in many applications, and it is essential to carefully evaluate and choose a PRNG that meets the quality and security requirements of the application or system.
In recent years, chaotic systems have become popular in the development of PRNGs due to their desirable features such as unpredictability, irreducibility, sensitivity to initial conditions, ergodicity, and chaoticity <cit.>.
Various PRNGs have been designed based on chaotic maps, for instance, PRNGs described
in <cit.>.
Murillo et al. <cit.> introduced a PRNG that uses an improved 1D logistic map to generate pseudo-random numbers with strong statistical characteristics.
Hamza <cit.> presented a method that uses the Chen chaotic system to construct a PRNG for cryptographic purposes involving images. This method <cit.> addresses the issue of non-uniform distribution commonly found in pseudo-random number sequence (PRNS) generated by the Chen chaotic system and produces PRNS with a high level of randomness.
Xia and Zheng <cit.>, developed a novel PRNG that utilizes a controlled digital chaotic system. The purpose of this generator is to enhance the dynamic degradation that arises from the use of chaotic systems.
Meranza et al. <cit.> utilizes an improved version of the Henon map to design a PRNG. Their research indicates that the cryptographic properties of PRNS generated by the enhanced Henon map are superior to those produced by the traditional Henon map.
Barani et al. <cit.>, designed a PRNG for creating PRNS by utilizing a generalized Newton complex map. To ensure the randomness of the generated sequences, several security measures were implemented and the outcomes indicated that this generator can produce secure PRNS.
Zhao et al. <cit.> used a hyper-chaotic system to design a PRNG that exhibits high levels of randomness.
In addition, Wang et al. <cit.> constructed a PRNG that is based on a logistic chaotic system.
Gayoso et al. <cit.> introduced a new PRNG that utilizes the residue number system, which enables the creation of an exceptionally efficient circuit that operates distinctly compared to conventional generators.
Furthermore, in <cit.>, the authors create a structure resembling a Hopfield neural network where each neuron is substituted with a compact PRNG.
Yu et al. <cit.> developed a PRNG that uses a chaotic system and an improved Hopfield neural network. Their PRNG is designed to decrease the impact of chaotic degradation and enhance the quality of PRNS.
Cang et al. <cit.> presented a PRNG based on a generalized conservative Sprott-A chaotic system.
Agarwal et al. <cit.> designed a PRNG that is based on the cascade fractal function. The cascade function is created using a combination of two seed maps, which improves the unpredictability and randomness of PRNG.
Shi and Deng <cit.> proposed a new PRNG that is based on Baker chaotic map and can generate highly random PRNS.
Zang et al. <cit.> developed an algorithm for generating PRNS using complex polynomial chaotic maps. The PRNS generated by this method shows strong randomness and are vulnerable to differential attacks.
A significant limitation associated with chaos-based cryptography arises from the fact that chaotic maps are designed to work with real numbers, which is not ideal for cryptographic applications that use finite numbers.
Round-off errors in quantizing real numbers can create issues that result in irreversible functions, making decryption impracticable.
The use of elliptic curve cryptography (ECC) is gaining popularity in modern cryptographic applications due to its effectiveness, strong security measures, and resilience against attacks.
The difficulty of solving the elliptic curve discrete logarithm problem (ECDLP) is a significant factor that motivates the preference for elliptic curves (ECs) over chaotic maps in the design of cryptographic algorithms.
Furthermore, ECs exhibit the advantage of necessitating significantly reduced key sizes in comparison to chaotic maps. This characteristic renders them more efficient and viable for implementation within environments that face limitations in terms of available resources.
As a result, various PRNGs using the arithmetic of ECs have been designed.
Hayat and Azam <cit.>, proposed an algorithm for generating PRNS which is based on ordered ECs.
This method is efficient when compared with previously introduced PRNGs over ECs.
However, the generator <cit.> is not suitable for ECs over large prime p due to
its high space and time complexity of 𝒪(p) and 𝒪(p^2), respectively.
A PRNG based on Mordell elliptic curve is introduced by Ullah et al. <cit.>, which is more efficient and has better cryptographic properties than <cit.>.
However, the time and space complexity of the generator <cit.> is 𝒪(mp) and 𝒪(m), respectively, where m ≤ p is the size of PRNS, due to which this generator is not compatible with ECs associated with large prime p.
An isomorphic EC-based PRNG, developed by Haider et al. <cit.>, produces sequences with high randomness and outperforms existing generators in terms of cryptographic properties; however, it faces compatibility issues with large prime ECs when the parameters of EC and the size of the ordered set are not predetermined.
Recently, Adhikari and Karforma <cit.> presented a PRNG over large prime ECs. To generate pseudo-random numbers, the y-coordinate of a generated point on the EC is extracted, and the least significant 8 bits of this y-coordinate are then converted to their decimal representation.
Although this PRNG is compatible with ECs over large primes, obtaining a PRNS of length ℓ requires generating ℓ points on the EC, so for large ℓ this method is not suitable for real-world applications.
The existing EC-based PRNGs have exhibited favorable outcomes; however, they do not guarantee the generation of PRNSs with security levels closely approximating the theoretically optimal values.
§.§ Our contribution
To address the aforementioned issues, our focus is directed toward the design of a PRNG that produces random numbers of high quality, exhibiting optimal randomness. The following steps outline our contributions:
1) To overcome the challenges posed by low randomness, predictability, and vulnerability to cryptanalysis attacks, we employ a multi-objective genetic algorithm (MOGA) optimization technique. This approach is chosen because MOGA allows us to simultaneously optimize multiple objectives, such as randomness, predictability, and resistance to cryptanalysis attacks. By employing MOGA, we can enhance the overall quality and security of the generated random numbers.
2) Our method takes advantage of the image itself as the seed for the initial population in the genetic algorithm optimization process. This choice is motivated by the image-dependent nature of cryptographic applications. By using the image as the seed, the PRNG can adapt its behavior to the unique characteristics of the input image. This adaptation improves the security of the PRNG, making it more resistant to differential attacks and increasing the level of protection against potential threats.
3) We generate an initial sequence of random numbers based on both the plain-image and elliptic curve. This departure from traditional methods of population initialization is selected for a specific reason. By using the plain-image and elliptic curve, we ensure that the initial solution provided to the genetic algorithm is well-chosen and has desirable properties. This decreases the number of generations required by the genetic algorithm and thus minimizes the overall computation time.
4) The genetic algorithm is utilized to improve the generated sequence by maximizing a fitness function that considers both information entropy and the period of the pseudo-random sequence. This choice is made because the genetic algorithm is effective in optimizing problems that have multiple objectives. By maximizing the fitness function, we can enhance the information entropy and period of the generated sequence, leading to superior-quality random numbers.
5) To evaluate the performance and security of our proposed PRNG, extensive experiments are conducted using various benchmark images. The results are then compared with the existing state-of-the-art PRNGs. The experiments serve as empirical evidence supporting our claims of enhanced performance and security.
§.§ Paper organization
The rest of the paper is organized as follows.
In <ref>, we present the preliminary theoretical background and notions used in the paper.
Moreover, in <ref>, we describe our method ECGA for the construction of the proposed IDPRNG based on elliptic curves and a genetic algorithm.
In <ref>, we present a comprehensive security analysis and the comparison of the ECGA with the
state-of-the-art PRNGs, establishing empirically that our method is superior.
Finally, in <ref>, we summarize the findings of the ECGA and discuss possible future directions of our line of work.
§ PRELIMINARIES
In this section, we present fundamental concepts that are used in our algorithm and hold significance for comprehension.
§.§ Elliptic Curves
Let p>3 be a prime number and a,b be two integers, and we use 𝔽_p to denote the finite field of p elements.
An elliptic curve (EC) denoted by E_p,a,b is defined as a set of solutions (x,y) ∈𝔽_p ×𝔽_p satisfying the equation y^2≡ x^3+ ax +b (mod p), along with an additional point at infinity denoted by ∞, where a, b ∈𝔽_p and 4a^3+27b^2≢0 (mod p).
The group law operation +_g on an elliptic curve E_p,a,b is defined as follows <cit.>. We denote the multiplicative inverse of a∈𝔽_p by a^-1.
Point addition:
Let G_1 = (x_1,y_1) and G_2 = (x_2,y_2) be two points on E_p,a,b such that G_1≠ G_2 and
x_1≠ x_2, then
the computation of the resulting point G_1 +_g G_2 = (x_3, y_3) when performing point addition over E_p,a,b is given as:
λ ≡ (y_2 - y_1)(x_2 - x_1)^-1 (mod p),
x_3 ≡λ^2 - x_1 - x_2 (mod p),
y_3 ≡λ(x_1 - x_3) - y_1 (mod p).
Point doubling:
Let G_1 = (x_1,y_1) be a point on E_p,a,b such that y_1≠ 0, then
the computation of the resulting point G_1 +_g G_1=(x_3, y_3) when performing point doubling over E_p,a,b is given as:
λ ≡ (3 x^2_1 + a)(2 y_1)^-1 (mod p),
x_3 ≡λ^2 - 2x_1 (mod p),
y_3 ≡λ(x_1 - x_3) - y_1 (mod p).
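For concreteness, a minimal Python sketch of the group law above is given below, using affine coordinates and field inversion via Fermat's little theorem; the toy curve and base point are illustrative values only and are unrelated to the curves used later in this paper.

def inv_mod(a, p):
    # multiplicative inverse a^(-1) mod p for prime p (Fermat's little theorem)
    return pow(a, p - 2, p)

def ec_add(G1, G2, a, p):
    # group law G1 +_g G2 on y^2 = x^3 + a*x + b over F_p; None encodes the point at infinity
    if G1 is None:
        return G2
    if G2 is None:
        return G1
    x1, y1 = G1
    x2, y2 = G2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # G2 = -G1
    if G1 == G2:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p   # point doubling
    else:
        lam = (y2 - y1) * inv_mod((x2 - x1) % p, p) % p    # point addition
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# toy example (illustrative only): y^2 = x^3 + 2x + 83 over F_97 with base point G = (5, 11)
p, a, b = 97, 2, 83
G = (5, 11)
G2 = ec_add(G, G, a, p)     # 2G
G3 = ec_add(G2, G, a, p)    # 3G
print(G2, G3)

Repeated application of ec_add to a base point yields the multiples G, 2G, 3G, …, which is one natural way to obtain the series of points G_k used in Step 1 of the next section.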
§.§ Genetic Algorithm
In recent times, there has been significant interest among researchers in evolutionary algorithms (EAs), which have been recognized as valuable in numerous applications (see, e.g., <cit.> and references within). One of the most widely recognized types of EAs is genetic algorithms (GAs),
which are search heuristics. Several applications have been devised using genetic algorithms <cit.>. Fundamentally, a GA can be divided into four primary stages <cit.>, which we describe below.
* Initial population:
During the creation of the population, an initial set of individuals or solutions is generated. These individuals are designed to form the starting point for the genetic algorithm. They represent possible solutions to the problem being addressed and are encoded in a format that allows the algorithm to work with and manipulate them.
* Selection: During the selection step, a portion of individuals from the population is selected as parents for the next generation. This selection is primarily determined by the fitness or quality of each individual solution. Individuals with higher fitness scores are more likely to be chosen as parents since they are considered to have superior solutions.
* Crossover: During the crossover step, new solutions are generated by merging the genetic material of chosen parent individuals. The genetic material, typically represented as chromosomes or sequences of values, is swapped between parents to produce offspring. This procedure emulates the natural recombination of genes that takes place during reproduction.
* Mutation: Mutation involves a spontaneous and random alteration in a specific aspect or trait of a solution. During the mutation step, slight modifications are made to the genetic composition of individual solutions. This randomness plays a crucial role in introducing fresh genetic variations within the population.
A genetic algorithm explores the search space through multiple iterations, aiming to discover an optimal or nearly optimal solution for the given problem.
§ ELLIPTIC CURVE GENETIC ALGORITHM (ECGA)
In this section, we provide a novel method namely elliptic curve genetic algorithm (ECGA) for generating pseudo-random numbers by using elliptic curves and a genetic algorithm. The effectiveness of a PRNG highly depends on its tendency to produce random and unpredictable sequences.
The quality of a PRNG is significantly enhanced by two important factors, namely high entropy and a long period.
The presence of high entropy in a sequence of numbers makes it difficult for an attacker to predict the next number, while a long period decreases the chances of repetitive patterns that could be used for exploitation.
We propose a generator that consists of two important stages. Initially, we utilize points on elliptic curves to create a sequence of random numbers.
Subsequently, we employ the operations of a genetic algorithm to maximize a fitness function that considers multiple objectives. This fitness function guarantees a higher degree of randomness by considering both the information entropy and the period of the sequence.
By employing an appropriate initial solution based on ECs, the optimization process reduces the number of generations needed, thus decreasing computational time.
Furthermore, our proposed method ECGA enhances the unpredictability and randomness of the PRNG by incorporating elliptic curves and a genetic algorithm.
We provide a concise overview of each stage of the ECGA as follows.
§.§ Initialization
We use elliptic curves to generate an initial sequence of pseudo-random numbers.
The initialization process comprises the following procedures:
Step 1:
Let I be a plain-image in a two-dimensional array with dimensions r× s, where each element belongs to the symbol set [0, 2^m-1] and m represents the number of bits in the pixel of an image.
Here, if n_1,n_2 are two integers with n_1<n_2, [n_1,n_2]={n_1,n_1+1,…,n_2}.
We consider an elliptic curve E_p,a,b defined by the equation y^2≡ x^3+ax +b (mod p), where p, a, and b are parameters of the curve and G=(x_0,y_0) is a base point on the curve.
Let us assume that H_I, H_a, H_b, and H_p denote the SHA-256 hash value of I, a, b, and p, respectively.
The proposed approach begins by selecting an initial point denoted as G_0=(x_0,y_0) on E_p, a,b. Subsequently, a series of n points G_k=(x_k,y_k) are generated, where 1 ≤ k ≤ n.
Step 2:
Calculate the binary values for the x and y coordinates of the point G_k=(x_k,y_k), with 1≤ k≤ n.
We define the function B(x_k) that takes as input the decimal representation of x_k and outputs its binary representation as a sequence of u bits: B(x_k) =x_k^1,x_k^2, … , x_k^u, where u is the number of bits needed to represent x_k in binary form. Similarly, we define the function B(y_k) that takes as input the decimal representation of y_k and outputs its binary representation as a sequence of v bits: B(y_k) =y_k^1,y_k^2, … , y_k^v, where v is the number of bits needed to represent y_k in binary form.
Step 3:
From Step 1, let α∈{ I, p, a, b } and let H_α denote the corresponding SHA-256 hash value, written as a bit sequence. Namely,
H_α = { h_α^i : 1 ≤ i ≤ 256, h_α^i∈{ 0, 1 }}.
We furthermore define B^x_a,b to be a binary sequence of length 3 ℓ', where ℓ'= min(u,256), obtained by repeatedly merging the corresponding binary bits of H_a, B(x_k), and H_b in an alternating manner as follows:
B^x_a,b = h_a^1, x_k^1, h_b^1, h_a^2, x_k^2, h_b^2, …, h_a^ℓ',x_k^ℓ',h_b^ℓ'.
Similarly, we define B^y_I,p to be a binary sequence of length 3 ℓ”, where ℓ”= min(v,256), obtained by merging the binary bits of H_I, B(y_k), and H_p as follows:
B^y_I,p = h_I^1, y_k^1, h_p^1, h_I^2, y_k^2, h_p^2, …, h_I^ℓ”,y_k^ℓ”,h_p^ℓ”.
Step 4:
Generate a binary sequence B^x,y_I, p, a,b of length ℓ, where ℓ = 3(ℓ' + ℓ”), by concatenating B^x_a,b and B^y_I,p as follows:
B^x,y_I, p, a,b = B^x_a,b +_C B^y_I,p.
Note that the concatenation operation denoted by +_C is simply the operation of appending the second sequence to the end of the first sequence.
Step 5:
Let B_z denote a randomly selected binary sequence of length ℓ. We define the resultant binary sequence B^x,y_I, p, a,b,z of length ℓ as follows:
B^x,y_I, p, a,b,z = ξ''_1, ξ''_2, …, ξ''_ℓ, where, for each i∈{ 1,2,…,ℓ},
ξ''_i =
0 if ξ_i=ξ'_i
1 if ξ_i≠ξ'_i,
and ξ_i and ξ'_i denote the i-th elements of the binary sequences B^x,y_I, p, a,b and B_z, respectively; that is, B^x,y_I, p, a,b,z is the bitwise XOR of B^x,y_I, p, a,b and B_z.
Step 6:
Divide the binary sequence B^x,y_I, p, a,b,z into segments of a predetermined length, m, and convert each segment into its decimal representation. Let S_d denote the decimal representation of B^x,y_I, p, a,b,z. Therefore, S_d can be expressed as a sequence of decimal values δ_1, δ_2, ..., δ_t, where t = ⌊ℓ/m ⌋.
Note that each of the decimal values in the sequence S_d falls within the range of [0, 2^m-1].
Step 7:
Repeat Step 2 through Step 6 for each k ∈{ 1, 2, …, n }.
Let Δ(I,p,a,b,x,y,z,n) be the resulting sequence of length n × t.
Step 8:
Choose three positive integers ϕ, ψ, and φ. The proposed IDPRNG is represented by the function
Ω : Δ→ [0, 2^m-1], where
Ω(δ_i) ≡ϕδ_i + ψδ_i+1 + φ (mod 2^m).
Thus, Ω(I,p, a,b,x,y,z,n,ϕ, ψ, φ) is the initial sequence of random numbers based on the parameters
I,p,a,b,x,y,z, n,ϕ, ψ, and φ. Henceforth, we represent it by Ω_I.
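To make Steps 2 through 8 concrete, the following Python sketch gives one plausible reading of the initialization stage. It is illustrative rather than normative: the byte encoding used to hash p, a, and b, the handling of the final symbol in Step 8, and all helper names are our assumptions, and the points G_k would be produced, for instance, with the ec_add sketch given earlier.

import hashlib
import secrets

def sha256_bits(data):
    # SHA-256 digest as a list of 256 bits
    return [int(bit) for byte in hashlib.sha256(data).digest() for bit in format(byte, '08b')]

def to_bits(n):
    return [int(bit) for bit in bin(n)[2:]]

def init_sequence(points, image_bytes, p, a, b, phi, psi, phi0, m=8):
    # hashing the curve parameters as decimal strings is an assumed encoding
    h_I, h_p = sha256_bits(image_bytes), sha256_bits(str(p).encode())
    h_a, h_b = sha256_bits(str(a).encode()), sha256_bits(str(b).encode())
    S_d = []
    for (xk, yk) in points:                               # Step 7: for every point G_k
        bx, by = to_bits(xk), to_bits(yk)                 # Step 2: binary coordinates
        lx, ly = min(len(bx), 256), min(len(by), 256)
        Bx = [bit for i in range(lx) for bit in (h_a[i], bx[i], h_b[i])]   # Step 3
        By = [bit for i in range(ly) for bit in (h_I[i], by[i], h_p[i])]
        B = Bx + By                                       # Step 4: concatenation
        Bz = [secrets.randbits(1) for _ in B]             # Step 5: random mask B_z
        B = [u ^ v for u, v in zip(B, Bz)]                # bitwise XOR
        for i in range(0, len(B) - m + 1, m):             # Step 6: m-bit words to decimals
            S_d.append(int(''.join(map(str, B[i:i + m])), 2))
    # Step 8: Omega(delta_i) = (phi*delta_i + psi*delta_{i+1} + phi0) mod 2^m
    # (the last symbol has no successor and is dropped here, one possible convention)
    return [(phi * S_d[i] + psi * S_d[i + 1] + phi0) % 2 ** m for i in range(len(S_d) - 1)]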
§.§ Fitness function
A fitness function is a tool used to measure how closely a given solution matches the ideal solution.
The proposed algorithm aims to find the best possible solution for a given problem.
The degree of uncertainty in a PRNS is often measured using information entropy (H), which is an important measure of randomness. For a PRNS to be considered effective, it must contain a high level of uncertainty. The higher the entropy value, the stronger the generator is considered to be.
Let Ω be a PRNS taking values from [0, 2^m-1].
The entropy H(Ω) of Ω is defined as:
H(Ω) = - ∑_i=1^2^mP(ω_i) log_2 P(ω_i),
where P(ω_i) represents the probability of an i-th element ω_i in Ω.
Apart from entropy, the period of a PRNS is also a significant factor in assessing its randomness. A PRNS with a long period is generally considered good for cryptographic purposes. The period of Ω, denoted by T(Ω), is the least positive integer T for which ω_i+T = ω_i for all i ≥ 1. The case where T(Ω) equals the length ℓ(Ω) of the sequence is optimal, since then Ω is considered more secure.
To achieve our objective, a multi-objective optimization function is employed that seeks to maximize both the information entropy and the period of a pseudo-random number sequence Ω of length ℓ which is initially generated.
The purpose of this function is to generate PRNSs that have both maximum entropy and period, to obtain the best possible results.
Our optimization problem is based on the following fitness function.
Maximize
f(Ω) = H(Ω) + T(Ω),
where 0 ≤ H(Ω) ≤ m and 1 ≤ T(Ω) ≤ℓ.
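A direct transcription of the two fitness ingredients is given below; the period check is the naive quadratic-time one, so for sequences of length 10^6 a faster implementation would be used in practice.

import math
from collections import Counter

def entropy(seq):
    # Shannon entropy H(Omega) in bits; maximum is m for 2^m equally frequent symbols
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

def period(seq):
    # smallest T >= 1 with seq[i+T] == seq[i] for all valid i; len(seq) if no smaller T exists
    n = len(seq)
    for T in range(1, n):
        if all(seq[i + T] == seq[i] for i in range(n - T)):
            return T
    return n

def fitness(seq):
    return entropy(seq) + period(seq)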
§.§ Crossover
Let Ω_I = (ω_0, ω_1, ..., ω_2^m-1) be the set of pseudo-random numbers generated during the initialization phase. The goal of the crossover operator is to replace elements in Ω_I with those that result in a higher fitness value. In other words, elements with lower fitness values are replaced with those having comparatively higher fitness values. More concretely, the crossover operation is carried out as follows:
i) Let P_e be a random permutation of the integers in the range [1, 2^m].
ii)Let V = (v_1, v_2, ... , v_2^m) be a vector of 2^m elements randomly selected from Ω_I.
iii)Define Ω_C as the sequence obtained by replacing the v_i-th element of Ω_I with the i-th element of P_e, i.e.,
ω_i^C=ω_P_e(i) if i = v_j for some j
ω_i otherwise.
The purpose of this operation is to increase the diversity of the sequence and improve the chances of finding a globally optimal solution. Specifically, the crossover operation ensures that all integers in the range [0, 2^m-1] are present in Ω_C.
To evaluate the quality of the new sequence Ω_C, we compute its entropy H(Ω_C) and period T(Ω_C). If H(Ω_C) ≥ H(Ω_I) and T(Ω_C) ≥ T(Ω_I), then we consider Ω_C as input for the mutation operation. Otherwise, if H(Ω_C) < H(Ω_I) or T(Ω_C) < T(Ω_I), we use Ω_I as input to the mutation phase.
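The operator described above can be read in more than one way; the sketch below implements the reading that is consistent with the stated guarantee that every integer in [0, 2^m-1] occurs in Ω_C: a random permutation of all 2^m symbols is written into 2^m randomly chosen positions (which requires ℓ ≥ 2^m), and the offspring is kept only if neither the entropy nor the period decreases. The functions entropy and period are those of the previous sketch.

import random

def crossover(omega_I, m=8):
    omega_C = list(omega_I)
    symbols = list(range(2 ** m))
    random.shuffle(symbols)                                   # P_e
    positions = random.sample(range(len(omega_I)), 2 ** m)    # V
    for pos, sym in zip(positions, symbols):
        omega_C[pos] = sym
    return omega_C

def select_after_crossover(omega_I, omega_C):
    if entropy(omega_C) >= entropy(omega_I) and period(omega_C) >= period(omega_I):
        return omega_C
    return omega_I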
§.§ Mutation
Let Ω_M be a sequence of length ℓ obtained after crossover operation, and let r and r' be two integers randomly selected from the interval [1,ℓ]. The swapping mutation operator μ_s can be defined as:
μ_s(Ω_M,r,r') = Ω_M',
where Ω_M' is the sequence obtained after swapping the element at position r with the element at position r' in Ω_M.
Since the entropy of a sequence is not affected by the swapping operation, therefore in the mutation phase we only compute the period of the obtained sequence.
If T(Ω_M') ≥ T(Ω_M), then Ω_M' is selected for the next generation, otherwise, Ω_M is retained for the next generation.
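The swapping mutation and its acceptance rule admit an equally short sketch (a swap leaves the entropy unchanged, so only the period is re-evaluated, again using period from the earlier sketch):

import random

def mutate(omega_M):
    omega = list(omega_M)
    r, rp = random.sample(range(len(omega)), 2)
    omega[r], omega[rp] = omega[rp], omega[r]
    return omega if period(omega) >= period(omega_M) else list(omega_M)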
§.§ Termination
The stopping criteria of any optimization algorithm depend primarily on two significant factors. The first factor is the number of generations or iterations, where if an algorithm attains the pre-defined number of generations, it terminates. The second factor is based on the optimal solution, where an algorithm terminates if it achieves the optimal solution to the fitness function. It is essential to note that the number of generations alone cannot serve as a feasible parameter to stop an algorithm since it may terminate without any improvement. As the objective is to generate highly random pseudo-random number sequences, therefore the proposed algorithm employs the optimal solution to the problem as the termination condition. In other words, the algorithm stops when the required sequence attains optimal entropy and optimal period, producing a highly random sequence that is well-suited for cryptographic applications.
Thus, the PRNS obtained after the termination phase represents the optimized sequence of pseudo-random numbers characterized by optimal entropy and period. We denote the optimized PRNS by Ω_Z.
§ SECURITY ANALYSIS AND COMPARISON
This section evaluates and compares the effectiveness of the ECGA through extensive experiments and security evaluations, which involve a variety of tests such as randomness analysis, entropy analysis, period analysis, Hurst exponent analysis, correlation analysis, key sensitivity analysis, and key space analysis. Additionally, we assess the efficiency of the ECGA in two different ways:
1) by comparing the initially generated sequences with their optimized sequences, and
2) by comparing the ECGA with the state-of-the-art
generators <cit.>.
For our experiments, we used MATLAB R2022b on a machine with an Intel Core m3-7Y30 @1.61 GHz and 8 GB of RAM. We generated 100 random sequences by selecting parameters at random, and the selected parameters are listed in <ref>. We used two elliptic curves recommended by NIST <cit.>, namely E_p,a,b and E_p',a',b' with prime numbers of 256 bits and 521 bits, respectively. The parameters for these elliptic curves are provided in <ref>. In addition, we used four different standard images with three distinct dimensions and five sets of integer-based triplets to generate these sequences. Our approach generates these 100 sequences by modifying one of the four parameters, namely (a) the elliptic curve, (b) the plain-image, (c) the size of the plain-image, and (d) the triplet (ϕ, ψ, φ) while keeping the others constant.
§.§ Randomness analysis
The NIST 800-22 test suite <cit.> is widely recognized as a suitable tool for evaluating the randomness of binary sequences. The suite consists of 15 tests and 174 sub-tests and requires, in general, at least 1 million bits to assess the randomness of a sequence. The suite calculates the probability of p_value for each sequence, and if p_value≥λ (or p_value < λ), the sequence is considered random (or non-random), where λ is a predefined threshold known as the significance level. For cryptographic purposes, λ is usually set between 0.001 and 0.01 <cit.>. Moreover, the proportion range of (1-λ) ± 3 √(λ (1-λ)/N) is considered acceptable, where N ≥ 1/ λ indicates the sample size (number of sequences). The NIST suite and its corresponding parameters are listed in <ref>, while a brief explanation of the suite can be found in <cit.>.
To ensure the randomness of the ECGA, we tested numerous optimized random sequences generated by our generator using all the tests in the NIST 800-22 test suite. For the experiments, we used a significance level λ = 0.01, a sample size N ∈{ 100,800 }, and a sequence of length n = 10^6 bits.
We converted 100 resultant sequences generated by the ECGA based on the randomly selected parameters listed
in <ref> into their binary representation for the NIST analysis. The values of each generated sequence lie in the range [0, 2^8-1], resulting in a total of (8 × 100 × 10^6) bits for the evaluation of the NIST test suite.
We performed NIST tests on two data samples: firstly, by taking the first (1 × 10^8) bits from the total of (8 × 10^8) bits with N = 100, and then by taking the total (8 × 10^8) bits with N = 800. The results are presented
in <ref>.
The results indicate that for N=100, p_value≥ 0.01 for all tests except the block frequency test, for which p_value= 0.006, which is very close to the acceptable value of 0.01. Additionally, for N=100, the proportion of all the tests is greater than the lower bound of acceptable proportion, which is 0.96.
Moreover, for N=800, p_value≥ 0.01 and proportion≥ 0.97 for each test included in the NIST test suite, where 0.97 is the lower bound of the acceptable proportion for a sample size of 800. As a result, the ECGA passed all tests and is capable of generating highly random sequences.
We compared the results of the ECGA with the state-of-the-art generators <cit.> using the NIST test suite, the results are presented
in <ref>, <ref>, and <ref> and are summarized as follows.
1) The results listed in <ref> are based on 100 different random sequences each of length 10^6 bits. The ECGA passed all 100 sequences, attaining a proportion of 1 for six different tests. On the other hand, generators <cit.>, <cit.>, and <cit.> attained proportions of 1 for three, five, and two tests, respectively. The ECGA also satisfied the acceptable proportion value 0.96 for all the tests listed in <ref>. However, the generator <cit.> failed the Random Excursions Test (RET) with a proportion of 0.72. The main objective of RET is to analyze the occurrence of a specific number of visits in a cumulative sum random walk. The purpose of conducting this test is to assess whether the number of visits to a particular state within a cycle deviates from the expected frequency for a random sequence. The test comprises a total of eight individual tests, each focusing on one specific state: -4, -3, -2, -1, +1, +2, +3, and +4.
Thus, the ECGA not only passed all the tests but also attained the highest proportion for the maximum number of tests compared to the generators in <cit.>.
2) <ref> compares the results based on eight distinct sequences, each of length 10^6 bits. The ECGA passed the p_value criterion for each test, while generator <cit.> failed the Non-Overlapping Template Matching Test with a p_value of 0. Furthermore, our generator achieved p_value≥ 0.9 for five tests, compared to only one test for generator <cit.>. Hence, our generator outperforms <cit.>.
3) The comparison results of 30 random sequences with a total of (30 × 10^6) bits are illustrated
in <ref>. Out of 41 tests, the ECGA attained a proportion value of 1 for 37 tests, while
the generator <cit.> attained a proportion of 1 for 30 tests. Thus, the ECGA passed more tests with a proportion rate of 1 when compared with the generator <cit.>.
Thus, the ECGA performed better than other existing PRNGs based on the NIST statistical test suite.
The ECGA passed all the tests with the highest proportion rate for the maximum number of tests when compared to the generators <cit.>. Therefore, it can be concluded that the ECGA is suitable for various applications that require randomness and unpredictability.
§.§ Entropy analysis
The degree of uncertainty in a PRNG can be measured by its information
entropy <cit.>, which is typically expressed in bits.
A good PRNG should produce sequences with high entropy, meaning a greater degree of uncertainty.
In this study, 100 sequences of length 10^6 with values ranging from 0 to 2^8-1 were generated and their entropy was calculated. The optimal entropy denoted by H_max, in our case, is 8. The average entropy of the 100 sequences before optimization was between 6.7702 and 7.1084, with an average of 6.9305. However, after optimization, all 100 sequences achieved their maximum entropy of 8, as demonstrated in <ref> and <ref>. These findings indicate that the ECGA can be very useful for cryptographic purposes, as it significantly increased the entropy of the generated sequences. Furthermore, the results listed in <ref> demonstrates that the ECGA outperformed several
state-of-the-art PRNGs <cit.> in generating sequences with optimal entropy.
§.§ Period analysis
To ensure that a PRNG generates a sequence that is sufficiently random and has a long enough period for its intended use, it is crucial to perform the period test <cit.> on it.
The period is a significant attribute of a sequence generated by a PRNG, as it indicates the length of the shortest repeating cycle present in the sequence, if it exists. Specifically, it is the smallest positive integer T for which the k-th element of the sequence matches the (k+T)-th element for all k ≥ 0. If the period of a sequence is equal to its length, then it is the optimal period of the sequence denoted by T_max.
We evaluated the effectiveness of the ECGA by generating 100 random sequences, each containing 10^6 numbers, using the ECGA. We analyzed the strength of the sequences by measuring their period, both before and after optimization. Our goal is to determine how much the period increased after optimization. The results are presented
in <ref>. Our findings indicate that, before optimization, the sequences have periods ranging from 192 to 998712, while after optimization, all sequences have an optimal period of 10^6. This suggests that our optimization algorithm has significantly increased the period of the random sequences, making them suitable for cryptographic applications.
In other words, all the generated sequences have an optimal period and can be used for secure communication.
§.§ Hurst exponent
The Hurst exponent <cit.> is a statistical test, that determines the trend in data. It is denoted as H_E and falls between 0 and 1. There are three possible scenarios when calculating H_E:
1)
If H_E=0.5, the data is random or independent, meaning there is no correlation between the current and previous values.
2) If 0.5<H_E≤1, the data is persistent, meaning if there is an increasing trend in the values, the next values will likely follow the increasing trend.
3) If 0≤H_E<0.5, the data is anti-persistent, meaning if there is an increasing trend in the values, the next values will likely follow the decreasing trend.
A value closer to 0.5 indicates more random data. A good PRNG should have an H_E value close to 0.5. The rescaled range (R/S) analysis <cit.> is the most commonly used method to compute H_E.
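For reference, a compact rescaled-range sketch in Python is given below; the window sizes are illustrative, and the estimate of H_E is the slope of log(R/S) against log(n).

import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128, 256)):
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):    # non-overlapping windows of length n
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())              # cumulative mean-adjusted deviations
            R = z.max() - z.min()                    # range
            S = w.std()                              # standard deviation
            if S > 0:
                rs_vals.append(R / S)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope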
We have calculated the Hurst exponent (H_E) using the (R/S) method for 100 sequences, each of length 10^6. The results are presented in <ref> and shown in <ref>. The results indicate that before optimization, H_E ranged from 0.0665 to 0.2900, while after optimization, it ranged from 0.4916 to 0.5410. This demonstrates that after optimization, all generated sequences have H_E values very close to the ideal value of 0.5, indicating that the sequences are highly random. We have also shown the Hurst plots of the first five sequences in <ref>.
We also compared the Hurst exponent of the ECGA with the state-of-the-art
generators <cit.>.
The results are presented in <ref>, which shows that the ECGA H_E values are closer to the ideal value of 0.5 compared to the
generators <cit.>. Thus, the ECGA can generate highly random sequences when compared with the generators described in <cit.>.
§.§ Correlation analysis
The correlation coefficient <cit.> is a crucial measure for determining the level of similarity between two random sequences of the same length. Given two random sequences, Ω={ω_j}_j=1^l and Ω'={ω_j' }_j=1^l, with length l, the correlation coefficient (R) is calculated using the following formula:
R(Ω, Ω') = ∑_j=1^l(ω_j-Ω̅)(ω_j'-Ω̅') / √(∑_j=1^l(ω_j-Ω̅)^2)√(∑_j=1^l(ω_j'-Ω̅')^2).
Here, Ω̅ and Ω̅' represent the mean of Ω and Ω', respectively.
The resulting R(Ω, Ω') is a value between -1 and 1. When R(Ω, Ω') is close to 0, the sequences are considered independent, whereas a value of 1 or -1 indicates a high level of dependency.
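The coefficient R is the usual Pearson correlation and can be computed directly (equivalently, with numpy.corrcoef):

import numpy as np

def corr(omega, omega_prime):
    x, y = np.asarray(omega, dtype=float), np.asarray(omega_prime, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / (np.sqrt(np.sum(xc ** 2)) * np.sqrt(np.sum(yc ** 2))))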
We calculated R(Ω_i, Ω_j') for 100 sequences we generated, where i,j ∈{ 1, 2, ..., 100 } and i ≠ j. From our experiments, we determined that the minimum and average values of the optimized generated sequences for all pairs of i,j excluding when i=j are 0 and 0.0638, respectively. Since the average value is very close to the ideal value of 0, we conclude that the ECGA can generate highly independent sequences.
§.§ Key sensitivity analysis
Key sensitivity analysis enables the study of how minor changes to the input parameters or initial conditions can cause changes in the resulting output.
If a PRNG has high sensitivity, even a slight change in the input can result in a significant difference in the output.
This implies that a PRNG should exhibit a high level of sensitivity, even at the single-bit level <cit.>.
To demonstrate the high sensitivity of the ECGA, we conducted an experiment in which we slightly altered the parameters of the ECGA.
For our purpose, we generated two sequences, Ω and Ω', which represent the original sequence and a slightly varied sequence, respectively, to analyze the sensitivity of the ECGA.
A sequence Ω is generated using: the parameters of EC E_p,a,b and the plain-image (a) which are defined
in <ref>, r × s = 256 × 256, and (ϕ, ψ, φ) = (25,73,121).
Subsequently, slight modifications to E_p,a,b, PI, r × s, and the triplet (ϕ,ψ,φ) resulted in the generation of four distinct sequences: Ω_E_p',a',b'', Ω_PI'', Ω_r' × s'', and Ω_(ϕ', ψ', φ')', respectively. The values of E_p',a',b' are listed in <ref>, while PI' represents the plain-image (b) shown in <ref>. The dimensions of r' × s' were set to 512 × 512, and the triplet (ϕ',ψ',φ') was set to (123,33,77).
We analyzed the sensitivity of the ECGA using three different methods:
1) by graphical representation;
2) by computing the number of bit change rate (NBCR); and
3) by computing the correlation coefficient.
§.§.§ Graphical representation
We analyzed the sensitivity of the ECGA by visually displaying both the original sequence Ω and a slightly altered version of it, denoted as Ω'.
The impact of various parameters is investigated and is presented in <ref>.
Specifically, <ref>(a) depicts the effects of varying the parameters of the triplet (ϕ, ψ, φ),
while <ref>(b) shows the impact of changing the parameters of EC.
<ref>(c) and <ref>(d) demonstrate the effects of modifying the parameters PI and the size r × s, respectively. As illustrated in <ref>, a slight modification in any of the parameters E_p',a',b', PI', r' × s', and (ϕ', ψ', φ') resulted in a distinct sequence Ω' that differed from the original sequence Ω.
Hence, it can be concluded that the ECGA is highly sensitive to input parameters.
§.§.§ Number of bit change rate
The number of bit change rate (NBCR) <cit.> is a common measure used to evaluate the sensitivity of a PRNG.
To calculate NBCR for two randomly generated sequences Ω and Ω', we use the following equation:
NBCR(Ω, Ω')= d_H(Ω,Ω')/n,
where, d_H denotes the Hamming distance between Ω and Ω', and n represents the total number of bits in either sequence. An ideal value of NBCR is 50%, which means that the closer the value of NBCR is to 50%, the more sensitive the algorithm is.
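A sketch of this computation for two bit sequences follows; the sequences here are placeholders for Ω and Ω'.

```python
import numpy as np

def nbcr(bits_a, bits_b):
    """Number of bit change rate (in percent) between two equal-length bit sequences."""
    bits_a, bits_b = np.asarray(bits_a), np.asarray(bits_b)
    return 100.0 * np.count_nonzero(bits_a != bits_b) / bits_a.size   # Hamming distance / total bits

rng = np.random.default_rng(2)
omega   = rng.integers(0, 2, size=10**6)      # stand-in for the original sequence
omega_p = rng.integers(0, 2, size=10**6)      # stand-in for the sequence from a slightly changed key
print(f"NBCR = {nbcr(omega, omega_p):.2f}%")  # values near 50% indicate high key sensitivity
```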
We have computed the NBCR of the original sequence Ω and a slightly changed sequence Ω' and the results are listed in <ref>. These results depict that the value of NBCR is very close to the optimal NBCR 50%, which indicates that the ECGA is highly sensitive to the input parameters and thus applicable for security applications.
We also examined the NBCR of the proposed sequences both before and after the optimization process.
The results, as shown in <ref>, indicate that NBCR values are within the intervals of [49.04, 51.19] and [49.99, 50.04] for the sequences before and after optimization, respectively. Additionally, we compared the ECGA with other PRNGs <cit.> in terms of NBCR.
<ref> shows that the NBCR value of the ECGA is identical to the optimal value of 50%, whereas the NBCR values of the other PRNGs are merely close to 50%. Therefore, the ECGA is more sensitive to input parameters when compared
to the generators <cit.>.
§.§.§ Correlation coefficient
The sensitivity of the ECGA is also evaluated by calculating the correlation coefficient R between two sequences, Ω and Ω', where Ω represents the original sequence and Ω' represents a slightly modified version of it. To ensure high sensitivity, R(Ω, Ω') should be close to 0. We analyzed the results of R(Ω, Ω'_i), where i belongs to the set {E_p',a',b', PI', r' × s', (ϕ', ψ', φ')}, and present them
in <ref>. The results show that R(Ω, Ω') values ranged between [0.0005, 0.1392] and [0.0004, 0.0017] before and after optimization, respectively. These results indicate that our designed generator is highly sensitive to its parameters, as both before and after optimization, the R values are very close to 0. Furthermore, there is a significant improvement in the R values after the optimization process.
Moreover, we conducted a comparison between the sensitivity of the ECGA and the state-of-the-art
generators <cit.>, using correlation coefficient as the metric. The outcomes of this comparison are shown in <ref>. The findings from the <ref> indicate that the ECGA is more sensitive than the PRNGs <cit.>.
§.§ Key space analysis
The evaluation of the security of a cryptographic algorithm is closely linked to the concept of key
space. When a PRNG is used for cryptographic purposes, it is essential to analyze its key space, which is the set of possible keys that can be used to generate a sequence. A larger key space makes the algorithm more resistant to exhaustive attacks, thereby improving its resistance to cryptanalysis.
To ensure the security of a cryptosystem, it is recommended that the key space should be at least
2^128 <cit.>.
The ECGA is based on several parameters, including I, E_p,a,b, B_z, ϕ, ψ, and φ, as described
in <ref>. The SHA-256 hash code of I, p, a, and b is also utilized. Additionally, the random sequence B_z has at least 256 bits, and the parameters ϕ, ψ, and φ range from 0 to 255.
As a result, the key space of the ECGA is at least 2^256 × 5.
In comparison with the recommended key space of 2^128, this is a very significant increase, indicating that the ECGA is capable of withstanding modern cryptanalysis due to its extremely large key space.
We conducted a comparative analysis of the key space of the ECGA with that of the existing state-of-the-art
generators <cit.>.
The findings of our analysis are presented in <ref>. The results indicate that the ECGA has a superior key space in comparison to the PRNGs developed in <cit.>.
§ CONCLUSION
We presented a novel method ECGA for the construction of an image-dependent pseudo-random number generator (IDPRNG) specifically for image-cryptographic applications.
We addressed the limitations of traditional PRNGs by using a multi-objective genetic algorithm (MOGA) optimization method and integrating elliptic curves into our approach.
The ECGA comprises two key phases. During the initial phase, we utilize pixels from the image along with the parameters of the elliptic curve to generate an initial sequence of random numbers.
During the second phase, a genetic algorithm is utilized to enhance the generated sequence by maximizing a fitness function that is based on both the information entropy and period of the pseudo-random sequence.
We conducted thorough experiments and security evaluations to assess the performance of the ECGA. These evaluations covered a wide range of tests, including randomness analysis, entropy analysis, period analysis, correlation analysis, Hurst exponent analysis, key sensitivity analysis, and key space analysis.
Furthermore, we have compared the ECGA with existing state-of-the-art generators, from which it is evident that the ECGA:
1) exhibits superior performance relative to other state-of-the-art generators <cit.> as per the NIST statistical test suite;
2) outperforms several state-of-the-art generators <cit.> in generating sequences with optimal entropy;
3) produces Hurst exponent results closer to the ideal value of 0.5 than the generators <cit.>;
4) is more sensitive to input parameters than the generators <cit.>; and
5) has a larger key space than the generators <cit.>.
Future work could explore further optimizations, evaluate the performance on large-scale data sets, and investigate the applicability of the IDPRNG in other domains requiring secure and unpredictable random number generation.
|
http://arxiv.org/abs/2307.04056v2 | 20230708231953 | Manifold Filter-Combine Networks | [
"Joyce Chew",
"Edward De Brouwer",
"Smita Krishnaswamy",
"Deanna Needell",
"Michael Perlmutter"
] | stat.ML | [
"stat.ML",
"cs.LG",
"cs.NA",
"eess.SP",
"math.NA"
] |
Manifold Filter-Combine Networks
================================
We introduce a class of manifold neural networks (MNNs) that we call Manifold Filter-Combine Networks (MFCNs), that aims to further our understanding of MNNs, analogous to how the aggregate-combine framework helps with the understanding of graph neural networks (GNNs). This class includes a wide variety of subclasses that can be thought of as the manifold analog of various popular GNNs. We then consider a method, based on building a data-driven graph, for implementing such networks when one does not have global knowledge of the manifold, but merely has access to finitely many sample points. We provide sufficient conditions for the network to provably converge to its continuum limit as the number of sample points tends to infinity. Unlike previous work (which focused on specific graph constructions), our rate of convergence does not directly depend on the number of filters used. Moreover, it exhibits linear dependence on the depth of the network rather than the exponential dependence obtained previously. Additionally, we provide several examples of interesting subclasses of MFCNs and of the rates of convergence that are obtained under specific graph constructions.
§ INTRODUCTION
Geometric deep learning <cit.> is an emerging field that aims to extend the success of deep learning from data such as images, with a regular grid-like structure, to more irregular domains such as graphs and manifolds. As part of the rise of geometric deep learning, graph neural networks (GNNs) have rapidly emerged as an extremely active area of research in data science <cit.> and are also used in industrial applications such as Google Maps<cit.> and Amazon's product recommender system<cit.>. However, there has been much less work on the development of Manifold Neural Networks (MNNs) and much of the existing literature focuses on two-dimensional surfaces embedded in three-dimensional space <cit.>.
In this paper, we consider the more general setting of a compact, connected, d-dimensional Riemannian manifold ℳ embedded in D-dimensional space.
One of the principal challenges in extending deep learning to graphs and manifolds is developing a proper notion of convolution, which is non-trivial because there is no natural notion of translation. In the graph setting, a popular family of solutions, known as spectral methods, define convolution via the eigendecomposition of the graph Laplacian (or another suitable matrix). A limitation of this method is that explicitly computing eigendecompositions is expensive for large graphs. To overcome this obstacle, spectral graph neural networks such as ChebNet <cit.> and CayleyNet <cit.> define convolution in terms of polynomials of the graph Laplacian 𝐋=𝐃-𝐀. This leads to filters of the form h(𝐋)𝐱 where h is a polynomial and 𝐱 is a signal defined on the vertices of the graph.
With this notion of convolution, one may consider networks with layerwise update rules of the form:
𝐱^(ℓ+1)=σ(h^(ℓ)(𝐋)𝐱^(ℓ)),
where σ is a pointwise, nonlinear activation function.
If one is given multiple initial graph signals 𝐱_1,…, 𝐱_C organized into a data matrix 𝐗=(𝐱_1,…,𝐱_C) and uses multiple filters in each layer, then the layerwise update rule can be extended to
𝐱^(ℓ+1)_k=σ(∑_j=1^C h^(ℓ)_j,k(𝐋)𝐱^(ℓ)_j).
If one assumes that each filter h^ℓ_j,k belongs to a parameterized family of functions such as Chebyshev polynomials, one could then attempt to learn the optimal parameters from training data.
Inspired by this approach, Wang, Ruiz, and Ribeiro <cit.> have introduced manifold neural networks with layerwise update rules similar to (<ref>).
In particular, they assume that they are given C functions f_1,…,f_C:ℳ→ℝ and utilize a layerwise update rule of
f^(ℓ+1)_k=σ(∑_j=1^C h^(ℓ)_j,k(ℒ)f^(ℓ)_j),
where ℒ=-div∘∇ is the Laplace-Beltrami operator, the natural analog of the graph Laplacian in the manifold setting. They then provide an analysis of the stability of such networks to absolute and relative perturbations of the Laplace-Beltrami operator.
However, many popular graph neural networks take an approach different than (<ref>). Rather than using multiple learnable filters for each input channel and then summing across channels, they instead filter each graph signal with a pre-designed operator (or operators) and then learn relationships between the filtered input signals. For example, the Graph Convolutional Network (GCN)[Here, we use the term GCN to refer to the specific network introduced in <cit.>. We will use the term GNN to refer to a general graph neural network] <cit.> performs a predesigned aggregation
𝐗→𝐀𝐗
where 𝐀=(𝐃+𝐈)^-1/2(𝐀+𝐈)(𝐃+𝐈)^-1/2 and utilizes a right-multiplication by a trainable weight matrix Θ to learn relationships between the channels. This leads to the layerwise update rule
𝐗^(ℓ+1)=σ(𝐀𝐗^(ℓ)Θ^(ℓ)),
where σ is as in (<ref>).[The matrix 𝐀 can be obtained by applying the polynomial h(λ)=1-λ/2 to a normalized version of the graph Laplacian and then some adjustments which help with the training of the network. Therefore, we can essentially think of the operation 𝐱→𝐀𝐱 as a spectral convolution.] This raises an intriguing question:
How should manifold neural networks be designed? Should they follow the lead of (<ref>) and (<ref>) and utilize multiple learnable filters for each input channel with a predesigned summation over channels or should they utilize predesigned filtering operations and incorporate learning via cross-feature operations analogous to (<ref>)?
It is likely that the answer to this question will vary depending on the dataset and the task of interest. Networks with multiple learnable filters for each channel are more general and will have greater expressive power. On the other hand, networks that, for example, use a common (either learnable or designed) filterbank shared across all channels are a more constrained family of networks. This constraint imposes a certain structure on the network and reduces the number of trainable parameters, which may provide a useful inductive bias in certain settings and may be particularly useful in low-data environments.
Another critical challenge in the development of manifold neural networks is that in many applications
one does not have global knowledge of the manifold. Instead, one is given a collection of points {x_j}_j=1^n in some high-dimensional Euclidean space ℝ^D and makes the modeling assumption that the points x_j lie on some d-dimensional manifold for d≪ D. This assumption, known as the manifold hypothesis, is frequently used in the analysis of biomedical data arising from, e.g., single-cell imaging <cit.>. This leads us to the following question:
How can one implement a manifold neural network when one does not have global knowledge of the manifold but only has access to finitely many sample points?
In order to help answer this question, several works such as <cit.> have used an approach based on Laplacian eigenmaps <cit.> (see also <cit.>) where one builds a data-driven graph 𝐆_n such that the eigenvectors and eigenvalues of the graph Laplacian approximate the eigenfunctions and eigenvalues of the Laplace-Beltrami Operator. They show that if the graph is constructed properly, then a graph neural network of the form (<ref>) will converge to a continuum limit of the form (<ref>) as the number of sample points, n, tends to infinity. However, these results are limited in the sense that (i) they assume specific graph constructions and (ii) their rates of convergence depend exponentially on the depth of the network.
In this work, we introduce a new framework for understanding MNNs that we call
Manifold Filter-Combine Networks. The manifold filter-combine paradigm is meant to parallel the aggregate-combine framework commonly considered in the GNN literature (see, e.g., <cit.>) and naturally leads one to consider many interesting classes of MNNs which may be thought of as the manifold counterparts of various popular GNNs. We then provide sufficient conditions for such networks to converge to a continuum limit as the number of sample points, n, tends to infinity. More specifically, the contributions of this work are:
* We introduce Manifold Filter-Combine Networks as a novel framework for understanding MNNs. This framework readily leads one to many interesting classes of MNNs such as the manifold equivalent of Kipf and Welling's GCN <cit.>, learnable variations of the manifold scattering transform <cit.>, and many others.
* In Theorem <ref>, we provide sufficient conditions for the individual filters used in an MNN to provably converge to a continuum limit as n→∞ if the filtering is done via a spectral approach. Here the rate of convergence depends on the rates at which the eigenvectors/eigenvalues of the graph Laplacian approximate the eigenfunctions/eigenvalues of the Laplace-Beltrami operator as well as the rate at which discrete inner products approximate continuum inner products.
* In Theorem <ref>, we prove that if the individual filters converge as n→∞, then so does the entire MNN. The rate of convergence will depend on (i) the rate of convergence of the individual filters; (ii) the weights used in the network; (iii) the depth of the network. Importantly, we note that our dependence on the depth of the network is linear, rather than the exponential dependence obtained in previous work. Additionally, our rate does not directly depend on the number of filters used per layer. We also note that Theorem <ref> does not assume that the filters have any particular form. Therefore, if one were to prove results analogous to Theorem <ref> for non-spectral filters, then Theorem <ref> would immediately imply the convergence of networks constructed from those filters.
* We then provide several corollaries to Theorem <ref>, which give concrete examples of our results in special cases of interest in Corollaries <ref>, <ref>, <ref>, and <ref>. These results may be summarized as follows:
* If the filters are implemented spectrally, then the discretization error of the entire MFCN tends to zero at a rate depending on how fast the eigenvalues/eigenvectors of the Laplacian corresponding to the data-driven graph 𝐆_n converge to the eigenvalues/eigenfunctions of the continuum Laplacian and how fast discrete inner products converge to continuum inner products.
* If 𝐆_𝐧 is constructed via a Gaussian kernel and the filters are implemented spectrally, then (up to log factors) the discretization error is 𝒪(n^-2/(d+6)).
* If 𝐆_𝐧 is constructed via a k-NN graph or an ϵ-graph and the filters are implemented spectrally, then (up to log factors) the discretization error is 𝒪(n^-1/(d+4)).
§.§ Notation
We let ℳ be a compact, connected, d-dimensional Riemannian manifold with normalized Riemannian volume form μ such that μ(ℳ)=1. We let 𝐋^2(ℳ) denote the set of functions that are square integrable with respect to μ and 𝒞(ℳ) denote the set of continuous functions on ℳ. We let ℒ=-div∘∇ denote the Laplace-Beltrami operator and let {ϕ_i}_i=1^∞ denote an orthonormal basis of eigenfunctions ℒϕ_i=λ_iϕ_i, with
0=λ_1<λ_2≤…. We will use these eigenfunctions to define Fourier coefficients denoted by f(i).
In much of our analysis, we will assume that ℳ is unknown and that we only have access to a function f∈𝒞(ℳ) evaluated at sample points {x_j}_j=1^n⊆ℝ^D. In this setting, we will let P_n:𝒞(ℳ)→ℝ^n be the normalized evaluation operator
(P_nf)(i)=1/√(n)f(x_i),
and let 𝐆_n denote a graph whose vertices are the sample points x_j. We will let 𝐋_n denote the graph Laplacian associated to 𝐆_n and let ϕ_i^n be an orthonormal basis of eigenvectors, 𝐋_nϕ_i^n=λ^n_iϕ_i^n, 0=λ^n_1≤λ^n_2≤…≤λ^n_n. Analogous to the continuous setting, we will use the ϕ_i^n to define discrete Fourier coefficients 𝐱(i).
In this paper, we consider a family of neural networks to process functions defined on ℳ. Towards this end, we will let F=(f_1,…,f_C) denote a row-vector valued function and let F^(ℓ) denote the hidden representation in the ℓ-th layer of our network, with F^(0)=F. When we approximate our network on 𝐆_n, we will instead assume that we are given an n× C data matrix 𝐗=(𝐱_1,…,𝐱_C).
§.§ Organization
The rest of this paper is organized as follows. In Section <ref>, we will provide an overview of spectral convolution on manifolds, explain how to implement such networks on point clouds, and state a theorem providing sufficient criteria for the discrete point-cloud implementation to converge to the continuum limit as the number of sample points tends to infinity. In Section <ref>, we introduce manifold-filter combine networks, discuss several examples of networks contained in our framework, and state a theorem showing that a discrete point cloud implementation converges to the continuum limit as well as several corollaries focusing on specific graph constructions. In Appendices <ref> and <ref>, we will prove the theorems stated in Sections <ref> and <ref>. We will conduct numerical experiments in Section <ref>, before providing a brief conclusion in Section <ref>.
§ SPECTRAL CONVOLUTION ON MANIFOLDS
As alluded to in the introduction, the extension of convolutional methods to the manifold setting is non-trivial because there is no natural notion of translation. Many possible solutions to this problem have been proposed including methods based on parallel transport <cit.>, local patches <cit.>, or Fréchet means <cit.>.
In this section, we will focus on spectral methods that rely on a generalized Fourier transform defined in terms of the eigendecomposition of the Laplace-Beltrami operator.
Let ℳ be a compact d-dimensional Riemannian manifold without boundary, and let ℒ be the Laplace-Beltrami operator on ℳ. It is well-known that ℒ has an orthonormal basis of eigenfunctions {ϕ_i}_i=1^∞ with ℒϕ_i=λ_iϕ_i, λ_i≥ 0. This implies that for f∈𝐋^2(ℳ), we may write
f=∑_i=1^∞f(i) ϕ_i,
where, for 1≤ i <∞, f(i) is the generalized Fourier coefficient defined by ⟨ f,ϕ_i⟩_𝐋^2(ℳ).
Motivated by the convolution theorem in real analysis, we will define manifold convolution as multiplication in the Fourier domain. In particular, given a bounded measurable function w:[0,∞)→ℝ, we define a spectral convolution operator, w(ℒ):𝐋^2(ℳ)→𝐋^2(ℳ), by
w(ℒ)f=∑_i=1^∞ w(λ_i) f(i) ϕ_i.
By Plancherel's theorem, we may observe that
w(ℒ)f_𝐋^2(ℳ)=(∑_i=1^∞ |w(λ_i)|^2|f(i)|^2)^1/2≤w_𝐋^∞([0,∞))f_𝐋^2(ℳ).
Additionally, we note that since these spectral convolution operators are defined in terms of a function w:[0,∞)→ℝ, one may verify that the w(ℒ) does not depend on the choice of the orthonormal basis {ϕ_i}_i=1^∞. (See for example Remark 1 of <cit.>.)
In our analysis of such filters, similar to <cit.> and <cit.>, we will assume that w is Lipschitz, and let A_Lip denote the smallest constant such that for all a,b∈[0,∞) we have
|w(a)-w(b)| ≤ A_Lip(w)|a-b|.
We will also assume that either f or w(ℒ) is bandlimited as defined below.
Let κ>0, let
f∈𝐋^2(ℳ), and let w(ℒ) be a spectral filter. We say that f is κ-bandlimited if f(i)=0 for all i>κ. Similarly, w(ℒ) is said to be κ-bandlimited if w(λ_i)=0 for all i>κ.
§.§ Implementation of Spectral Filters on Point Clouds
In many applications of interest, one does not know the manifold ℳ.
Instead, one is given access to finitely many sample points x_1,…,x_n∈ℝ^D and makes the modeling assumption that these sample points lie upon (or near) an unknown d-dimensional Riemannian manifold for some d≪ D. In this setup, it is non-trivial to actually implement a neural network since one does not have global knowledge of the manifold. Here, we will use an approach based on manifold learning <cit.> where we construct a data-driven graph 𝐆_n, whose vertices are the sample points x_1,…,x_n, and use the eigenvectors and eigenvalues of the graph Laplacian 𝐋_n to approximate the eigenfunctions and eigenvalues of the Laplace-Beltrami operator. As we will discuss below, there are numerous methods for constructing 𝐆_n including k-nn graphs, ϵ-graphs, and graphs derived from Gaussian kernels.
More specifically, we let {ϕ_i^n}_i=1^n be an orthonormal basis of eigenvectors,
𝐋_n ϕ_i^n = λ_i^n ϕ_i^n, 0=λ_1^n≤λ_2^n≤…λ_n^n, and analogous to (<ref>) we will write
𝐱=∑_i=1^n 𝐱(i) ϕ_i^n, 𝐱(i)=⟨𝐱,ϕ^n_i⟩_2
for 𝐱∈ℝ^n.
We then define a discrete approximation of w(ℒ) by
w(𝐋_n)𝐱=∑_i=1^n w(λ^n_i) 𝐱(i) ϕ^n_i.
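To make this construction concrete, the sketch below applies a spectral filter w(𝐋_n) to a signal on a point cloud via the eigendecomposition of a (dense, symmetric) graph Laplacian. The particular filter, the random graph, and the truncation to k eigenpairs are illustrative choices, not the specific objects analyzed below.

```python
import numpy as np

def spectral_filter(L_n, x, w, k=None):
    """Apply w(L_n) to a signal x on the graph with symmetric Laplacian L_n.

    If k is given, only the first k eigenpairs are used (a bandlimited approximation).
    """
    lam, phi = np.linalg.eigh(L_n)            # eigenvalues lam[i] and eigenvectors phi[:, i]
    if k is not None:
        lam, phi = lam[:k], phi[:, :k]
    x_hat = phi.T @ x                         # discrete Fourier coefficients <x, phi_i>
    return phi @ (w(lam) * x_hat)             # sum_i w(lambda_i) x_hat(i) phi_i

# Toy example with a heat-kernel-type filter w(lambda) = exp(-lambda).
rng = np.random.default_rng(0)
A = rng.random((50, 50)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
L_n = np.diag(A.sum(axis=1)) - A
x = rng.standard_normal(50)
y = spectral_filter(L_n, x, lambda lam: np.exp(-lam), k=20)
```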
Our hope is that if 𝐆_n is constructed properly,
then
w(𝐋_n)P_nf-P_nw(ℒ)f_2 will converge to zero as n tends to infinity, where P_n:𝒞(ℳ)→ℝ^n is the normalized evaluation operator defined as in (<ref>). Notably, in order to bound w(𝐋_n)P_nf-P_nw(ℒ)f_2 we must account for three sources of discretization error:
* The graph eigenvalue λ_i^n does not exactly equal the manifold eigenvalue λ_i. Intuitively, this should yield an error on the order of α_i,nA_Lip(w), where α_i,n=|λ_i-λ_i^n|.
* The graph eigenvector ϕ_i^n does not exactly equal P_nϕ_i, the discretization of the true continuum eigenfunction. One may anticipate this yielding errors of the order β_i,n, where β_i,n=ϕ_i^n-P_nϕ_i_2.
* The discrete Fourier coefficient 𝐱(i) is not exactly equal to f(i). Since Fourier coefficients are defined in terms of inner products, one expects this error to be controlled by a term γ_n which describes how much discrete inner products ⟨ P_n f,P_n g⟩_2 differ from continuum inner products ⟨ f,g⟩_𝐋^2(ℳ).
Combining these sources of error, and letting α_n=max_iα_i,n,β_n=max_iβ_i,n, one anticipates that if either f or w(ℒ) is κ bandlimited, then the total error will be 𝒪(κ(α_nA_Lip(w)+β_n+γ_n)). This intuition is formalized in the following theorem. For a proof, please see Appendix <ref>.
Let w:[0,∞)→ℝ, w_𝐋^∞([0,∞))≤ 1,
let f∈𝐋^2(ℳ) be a continuous function, and assume that either f or w(ℒ) is κ-bandlimited.
Assume that there exist sequences of real numbers {α_n}_n=1^∞, {β_n}_n=1^∞, {γ_n}_n=1^∞, with lim_n→∞α_n=lim_n→∞β_n=lim_n→∞γ_n=0, such that for all 1≤ i ≤κ and for n sufficiently large, we have
|λ_i-λ^n_i|≤α_n, P_nϕ_i-ϕ_i^n_2≤β_n,
|⟨ P_nf, P_ng ⟩_2 - ⟨ f,g⟩_𝐋^2(ℳ)| ≤γ_n^2fg_𝐋^∞(ℳ),
Then for n large enough such that (<ref>) holds and α_n,β_n,γ_nκ^1/2≤ 1, we have
w(𝐋_n)P_nf-P_nw(ℒ)f_2≤
C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)).
Furthermore, for all n large enough such that (<ref>) holds and α_n,β_n,γ_nκ^1/2≤ 1 and all 𝐱∈ℝ^n, we have
w(𝐋_n)𝐱-P_nw(ℒ)f_2≤𝐱-P_nf_2 +
C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)),
where, in both (<ref>) and (<ref>), C_ℳ is a constant depending on the geometry of ℳ.
In particular, if
𝐱=P_nf, (<ref>) implies that
lim_n→∞w(𝐋_n)𝐱-P_nw(ℒ)f_2=0.
Inspecting the proof of Theorem <ref>, one may note that A_Lip(w) may actually be replaced by the Lipschitz constant on the smallest interval containing all λ_i and all λ_i^n, 1≤ i ≤κ, where λ_i≠λ_i^n. This means that, if f is bandlimited, our result may be applied to any continuously differentiable function w. Moreover, for most common graph constructions, we have λ_1=λ_1^n=0 and 0<λ_2,λ_2^n. This implies that our theorem can be applied to any w which is continuously differentiable on (0,∞) even if, for example, lim_t→ 0^+w'(t)=+∞ (which is the case for certain wavelets, such as those considered in <cit.>). Additionally, we note that with minor modifications, results similar to Theorem <ref> may be obtained for functions or filters which are approximately bandlimited in the sense that either sup_k>κ|w(λ_k)| or ∑_k>κ|f(k)|^2 are sufficiently small. In these cases, we will have
lim sup_n→∞w(𝐋_n)𝐱-P_nw(ℒ)f_2 ≤sup_k>κ|w(λ_k)|f_𝐋^2(ℳ)
or lim sup_n→∞w(𝐋_n)𝐱-P_nw(ℒ)f_2 ≤w_∞(∑_k>κ|f(k)|^2)^1/2. In particular, results similar to Theorem <ref> may be obtained for filters w_t(λ) = e^-tλ, which correspond to the heat kernel.
In the following section, we will consider neural networks constructed from spectral filters and use Theorem <ref> to show that discrete approximations of such networks converge to their continuum limit as n→∞. However, first, we will consider several examples of graph constructions where estimates for α_n and β_n are known. In all of the examples below, we will assume that the data points x_i are generated i.i.d. uniformly at random (with respect to the normalized Riemannian volume form μ). In this setting, Lemma 5 of <cit.> implies that with probability at least 1 - 𝒪(1/n^9) we have
γ_n = (18log(n)/n)^1/4.
We note that in <cit.> the inequality (<ref>) was derived via Hoeffding's inequality which is why the definition of γ_n involves the ℓ^∞ norm of fg. However, if one were to use a different method, such as Bernstein's inequality to derive bounds for |⟨ P_nf, P_ng ⟩_2 - ⟨ f,g⟩_𝐋^2(ℳ)| in terms of other norms, then all of our proof techniques could likely be pushed through to obtain results similar to Theorem <ref>.
[Gaussian Kernels]
One simple way to construct a graph is with a Gaussian kernel.
Specifically, given a bandwidth parameter ϵ, we define a weighted adjacency matrix 𝐖_ϵ whose entries are given by
[𝐖_n,ϵ]_i,j = 1/(nϵ^(1 + d/2)) e^(-|𝐱_i - 𝐱_j|^2 / ϵ)
and let 𝐃_n,ϵ be the corresponding diagonal degree matrix. Then the associated graph Laplacian 𝐋_n,ϵ is
𝐋_n, ϵ = 𝐃_n, ϵ-𝐖_n, ϵ.
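A sketch of this construction from a point cloud is given below. The bandwidth and the circle example are our own illustrative choices; d is the intrinsic manifold dimension and enters only through the normalization.

```python
import numpy as np

def gaussian_graph_laplacian(X, eps, d):
    """Dense graph Laplacian L_{n,eps} built from a Gaussian kernel on a point cloud X (n x D)."""
    n = X.shape[0]
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / eps) / (n * eps ** (1 + d / 2))
    np.fill_diagonal(W, 0.0)                 # self-weights cancel in D - W in any case
    D = np.diag(W.sum(axis=1))
    return D - W

# Points sampled uniformly from the unit circle embedded in R^2 (so d = 1, D = 2).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=300)
X = np.column_stack([np.cos(theta), np.sin(theta)])
L = gaussian_graph_laplacian(X, eps=300.0 ** (-2 / 7), d=1)   # eps ~ n^(-2/(d+6)) with d = 1
```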
In this case, if ϵ∼ n^-2/(d+6), and the data points x_i are generated i.i.d. uniformly at random, then Theorem 5.4 of <cit.> implies that, under mild assumptions, we may choose
α_n = C_ℳ n^-2/d+6, β_n = C_ℳ n^-2/d+6√(log(n)),
with probability at least 1 - 𝒪(1/n^9)[For details on how to deduce (<ref>) from Theorem 5.4 of <cit.> we refer the reader to Remark 1 of <cit.> and the proof of Theorem 10 of <cit.>.].
Estimates such as these were used to analyze the convergence of the manifold scattering transform on Gaussian-kernel graphs in <cit.> and more general MNNs in <cit.> and <cit.>.
While constructing a graph from a kernel is simple, it has the drawback of producing dense graphs which pose computational issues for large values of n. Therefore, we also consider two methods for constructing sparse graphs that have previously been analyzed in works such as <cit.>
and <cit.>.
[ϵ-graphs]
Let ϵ>0, let η:[0,∞)→ [0,∞) be a nonincreasing function supported on the interval [0,1] such that η(1/2)>0 and the restriction of η to [0,1] is Lipschitz continuous.
A weighted ϵ-graph is constructed by placing an edge between all x_i,x_j such that |x_i-x_j|≤ϵ. Then, if x_i and x_j are connected by an edge, the corresponding entry in a weighted adjacency matrix is given by
[𝐖_n,ϵ]_i,j=η(|x_i-x_j|/ϵ).
The ϵ-graph Laplacian is then given by
𝐋_n,ϵ=c_η/(nϵ^(d+2))(𝐃_n,ϵ-𝐖_n,ϵ),
where c_η is the constant
c_η = ∫_ℝ^d |y_1|^2 η(|y|)dy,
and y_1 is the first coordinate of a vector y ∈ℝ^d, and 𝐃_n,ϵ is the weighted degree matrix corresponding to 𝐖_n,ϵ.
Theorems 2.4 and 2.7 of <cit.> show, for example, that if ϵ is chosen as ϵ∼ ( log(n)/n )^1/d+4, then, under mild assumptions, we may choose
α_n = C_ℳ ( log(n)/n )^1/d+4, β_n = C_ℳ ( log(n)/n )^1/d+4
with probability at least 1 - 𝒪(n^-9). Estimates similar to (<ref>) were used to analyze the convergence of MNNs on ϵ-graphs in <cit.> and <cit.>.
The graph Laplacians of ϵ-graphs are sparse by construction, and their sparsity is indirectly controlled by the length scale parameter ϵ. To directly control the sparsity of the graph Laplacian in an adaptive manner without specifying a length scale, one may also consider k-NN graphs.
[k-NN graphs]
For a positive integer k, symmetric k-Nearest Neighbor (k-NN) graphs are constructed by placing an edge between x_i and x_j if x_j is one of the k closest points to x_i (with respect to the Euclidean distance) or[One might also consider mutual k-NN graphs where we require x_i to be one of the k closest points to x_j and x_j to be one of the k-closest points to x_i. However, such graphs are not analyzed in the theorem we cite from <cit.>.] if x_i is one of the k closest points to x_j. Then, the edges can be given weights in a manner similar to <Ref>.
Formally, let ϵ_k(x_i) denote the distance from x_i to its k-th closest neighbor (with respect to Euclidean distance) and let r_k(x_i,x_j) := max{ϵ_k(x_i),ϵ_k(x_j)}.
Then, if x_i and x_j are connected by an edge in the k-NN graph, the corresponding entry in a weighted adjacency matrix is given by
[𝐖_n,k]_i,j = η ( |x_i - x_j|/r_k(x_i,x_j) )
where η satisfies the same assumptions as in <Ref>. Note that if η(t) = χ_[0,1](t), then we obtain the standard unweighted k-NN graph. The k-NN graph Laplacian is then given by
𝐋_n,k=(c_η/n)(nc_d/k)^(1+2/d)(𝐃_n,k-𝐖_n,k),
where c_η is defined as in <Ref>, c_d is the volume of the d-dimensional Euclidean unit ball, 𝐖_n,k is the weighted adjacency matrix defined above, and 𝐃_n,k is the corresponding degree matrix. If η(t) = χ_[0,1](t), then c_η = c_d/(d+2).
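A sketch of the symmetric k-NN construction in the unweighted case η = χ_[0,1] is given below; the circle example and the choice of k are illustrative.

```python
import numpy as np
from math import gamma, pi

def knn_graph_laplacian(X, k, d):
    """Symmetric unweighted k-NN graph Laplacian L_{n,k} for a point cloud X (n x D)."""
    n = X.shape[0]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    eps_k = np.sort(dists, axis=1)[:, k]                       # distance to the k-th nearest neighbor
    r = np.maximum(eps_k[:, None], eps_k[None, :])             # r_k(x_i, x_j)
    W = (dists <= r).astype(float)
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    c_d = pi ** (d / 2) / gamma(d / 2 + 1)                     # volume of the d-dimensional unit ball
    c_eta = c_d / (d + 2)                                      # c_eta for eta = chi_[0,1]
    return (c_eta / n) * (n * c_d / k) ** (1 + 2 / d) * (D - W)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=300)
X = np.column_stack([np.cos(theta), np.sin(theta)])            # unit circle, d = 1
L = knn_graph_laplacian(X, k=20, d=1)
```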
Theorems 2.5 and 2.9 of <cit.> show that, for example,
if k is chosen as k ∼log(n)^d/d+4 n^4/d+4, then, under mild assumptions, we may choose
α_n = C_ℳ ( log(n)/n )^1/d+4, β_n = C_ℳ ( log(n)/n )^1/d+4
with probability at least 1 - 𝒪(n^-9). Corollary <ref> stated in Section <ref> applies these estimates to establish the convergence of MFCNs for k-NN graphs. To the best of our knowledge, this is the first result to establish a quantitative rate of convergence for MNNs in this setting.
Comparing the examples above, we see that the rates of convergence are faster for dense graphs. Therefore, they may be preferable when n is only moderately large, but one still desires a good approximation of the continuum. However, for very large n, dense graphs become expensive to store in memory. Therefore, one might instead prefer to utilize either ϵ- or k-NN graphs. We also note that the theorems discussed above do not explicitly guarantee that P_nϕ_i≈ϕ_i^n. Instead, they show that P_nϕ_i≈±ϕ_i^n. However, as discussed earlier our spectral filters do not depend on the choice of orthonormal basis. Therefore, we may ignore this issue when applying Theorem <ref>.
§ MANIFOLD FILTER-COMBINE NETWORKS
In this section, we introduce a novel framework for thinking about manifold neural networks. We will refer to the networks we consider as Manifold Filter-Combine Networks paralleling the aggregate-combine framework commonly used in the graph setting (see, e.g., <cit.>). Here, we will use the term filter, rather than aggregate because our filters may be arbitrary linear operators on 𝐋^2(ℳ) (which in most examples will be defined in terms of some notion of convolution) and are not required to be localized averaging operations. Much of our analysis (except for Theorem <ref>) focuses on the case that the filtering step is implemented in the spectral domain. In this case, the class of all MFCN coincides with the class of MNNs considered in previous work such as <cit.>. However, even in the spectral case, we find that the filter-combine paradigm is a useful framework for thinking about MNNs since it naturally leads one to many interesting subclasses of networks and also allows us to obtain convergence rates that do not directly depend on the width of the network.
We will assume that our input data is a row-vector[We define the output of F to be ℝ^1× C in order to highlight the parallels with the data matrices commonly considered in the GNN literature where rows correspond to vertices and columns correspond to features.] valued function F∈𝐋^2(ℳ,ℝ^1× C), F=(f_1,…,f_C), where each f_i∈𝐋^2(ℳ).
Each hidden layer of the network will consist of the following five steps:
(i) filter each input channel f_k by a family of linear operators W_j, 1≤ j≤ J;
(ii) for each fixed j, combine the filtered feature functions f̃_j,k=(W_jf_k) into new feature functions g_j,k, where each g_j,k is a linear combination of the f̃_j,k;
(iii) for each fixed k, perform a cross-channel convolution that maps { g_j,k}_j=1^J to {g̃_j,k}_j=1^J', where each g̃_j,k is a linear combination of the g_j,k;
(iv) apply some non-linear, nonexpansive pointwise activation function σ to each of the g̃_j,k, obtaining h_j,k=σ∘g̃_j,k;
(v) reshape the collection of functions {h_j,k}_1≤ j≤ J',1≤ k≤C̃ into {f'_i}_i=1^C', where C'=C̃J'.
In many applications, it may be sufficient to use a common filter bank {W_j}_1≤ j≤ J for all input channels. However, in other settings, it may be useful to give the network additional flexibility to learn different filters along different input signals. Therefore, for the sake of generality, we actually define the filtering step by f̃_j,k=(W_j,kf_k), where for each fixed k, {W_j,k}_1≤ j ≤ J is a collection of linear operators (i.e., filters) to be applied to the input channel f_k.
Explicitly, we define our layerwise update rule in the following manner. Let F^(0)=F, C_0=C and given F^(ℓ)=(f_1^(ℓ),…,f_C_ℓ^(ℓ)), we define F^(ℓ+1)=(f_1^(ℓ+1),…,f_C_ℓ+1^(ℓ+1)) via:
Filtering: f̃^(ℓ)_j,k=W^(ℓ)_j,kf^(ℓ)_k, 1≤ j ≤ J_ℓ, 1≤ k≤ C_ℓ
Combine:
g_j,k^(ℓ)=∑_i=1^C_ℓf̃^(ℓ)_j,iθ^(ℓ,j)_i,k, 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ
Cross-Channel Convolution: g̃^(ℓ)_j,k= ∑_i=1^J_ℓα^(ℓ,k)_j,ig^(ℓ)_i,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ
Activation:
h_j,k^(ℓ)=σ^(ℓ)∘g̃_j,k^(ℓ), 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ
Reshaping:
f^(ℓ+1)_(j-1)C_ℓ+k = h^(ℓ)_j,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ,
where C_ℓ+1=J'_ℓ C_ℓ', and the reshaping operator allows for multiple layers to be stacked upon each other.
Importantly, we note that one may effectively omit the combine step by setting the matrix Θ^(ℓ,j) := (θ_i,k^(ℓ,j))_1≤ i,k≤ C_ℓ equal to the identity matrix for each ℓ and j.
Similarly, one may omit the cross-channel convolutions by setting the matrices (α_j,i^(ℓ,k))_1≤ i,j≤ J_ℓ to the identity.
Additionally, we note that since we allow for the possibility of using different filters along each channel, it is, in general, possible to write the same network as an MFCN in more than one way.
For instance, if one fixes the cross channel convolutions equal to the identity, uses a shared filter bank {W^(ℓ)_j}_1≤ j ≤ J (independent of k) and chooses the combine step to be independent of j (i.e. θ_i,k^(ℓ,j)=θ_i,k^(ℓ)) then we have
f^(ℓ+1)_(j-1)C_ℓ+k = σ^(ℓ)(∑_i=1^C_ℓW^(ℓ)_jθ^(ℓ)_i,kf_i),
which may also be obtained by using filters of the form
W^(ℓ)_(j-1)C_ℓ+k,i=W_jθ^(ℓ)_i,k and using a combine step with θ̃_i,k^(ℓ,j)=1.
Therefore, the set of networks that may be obtained by setting θ_i,k^(ℓ,j)=1 is just as large as the set of all MFCN. A similar conclusion holds for the cross-channel convolutions. Therefore, in the case where all filters are implemented in the spectral domain, the class of MFCNs is actually the same as the class of MNNs considered in previous work such as <cit.> (see Example <ref> below).
However, as alluded to earlier, we find that thinking of the filtering, combination, and cross-channel convolutions steps separately is a useful framework for a couple of reasons. First, it facilitates our mathematical analysis of the convergence rate obtained in Corollary <ref> and in particular allows us to produce rates that depend only linearly on the depth of the network and do not directly depend on the network's width. Second, it highlights a variety of natural subclasses of networks that may be useful for various data sets or tasks of interest. For instance, each piece of the architecture can either be designed in advance or learned from data. Moreover, one may choose to use a common filter bank W_j, 1≤ j≤ J for all input functions and in all layers or one may choose to use different filters in each layer and/or for each signal. Below we will consider several examples of such classes, but first, we remark that our analysis does not depend on the order in which the steps are performed. Therefore, the theoretical guarantees obtained in Theorem <ref> and Corollary <ref> also apply, for example, to networks in which the cross-channel convolutions occur after the activation.
Additionally, we note that one may make different choices in each layer. For example, one may use a hand-crafted filter bank in the first several layers and then a learnable filter bank in the later layers. Similarly, the activation functions may vary from one layer to the next. However, we will often depress the dependence of the activation function on the layer and simply write σ in place of σ^(ℓ).
[Different Filters Along Each Channel]
If we set the cross-channel convolution equal to the identity, set C_ℓ'=1 and set θ_i,k^(ℓ,j)=1 then we obtain the layerwise update rule
f^(ℓ+1)_j=σ(∑_k=1^CW^(ℓ)_j,kf_k).
If each of the W_j,k^(ℓ)=w^(ℓ)_j,k(ℒ) is a spectral filter (as defined in Section <ref>), we then obtain the layerwise update rule
f^(ℓ+1)_j=σ(∑_k=1^Cw^(ℓ)_j,k(ℒ)f_k).
which was introduced in <cit.> and has been subsequently studied in <cit.>. Notably, in this example the reshaping operator is the identity (since C'_ℓ=1) and the filters W_j,k^(ℓ) depend on both the layer ℓ and the input channel k.
As mentioned above (see the discussion surrounding (<ref>)), this class of networks is the most general and actually includes all MFCNs. However, considering, e.g., the filter and combine steps separately helps facilitate our analysis. For instance, our rate of convergence obtained in Theorem <ref> depends on max_j,k(∑_i=1^C_ℓ|θ_i,k^(ℓ,j)|), but unlike the results obtained in previous work does not directly depend on the width of the network.
In particular, if we set θ_i,k^(ℓ,j)=1/C_ℓ, then we have max_j,k(∑_i=1^C_ℓ|θ_i,k^(ℓ,j)|)=1.
[Shared Filter Banks Along Each Channel]
In order to reduce the number of trainable parameters, it may be useful to utilize a (learned) filter bank which is shared across all input channels and a combination matrix which is shared across all filters. In this case, one obtains a layerwise update rule of the form (<ref>). Such networks may loosely be thought of as a low-rank subset of the more general networks discussed in Example <ref>. (In this setting, since the filter banks are learned, there is still no need for cross-channel convolutions.)
Due to the irregularity of the data geometry, many popular GNNs such as the GCN of Kipf and Welling <cit.> use predesigned aggregations and incorporate learning through the combine steps. The next example discusses the analog of such networks on manifolds.
[MCNs]
Set the cross-channel convolutions equal to the identity and let J=J'=1. Let A be a fixed operator which should be thought of as either a low-pass filter or a localized averaging operator, and set W^(ℓ)_1,k=A for all k. Let the matrix Θ^(ℓ) = (θ^(ℓ,1)_i,k)_1≤ i≤ C_ℓ,1≤ k ≤ C'_ℓ be a learnable weight matrix. Then our layerwise update rule becomes f_k^(ℓ+1)=σ(∑_i=1^C_ℓAf^(ℓ)_iθ_i,k^(ℓ,1)), which may be written compactly as
F^(ℓ+1)=σ(AF^(ℓ)Θ^(ℓ)).
Therefore, we obtain a network similar to the GCN of Kipf and Welling which we refer to as the manifold convolutional network (MCN). Notably, A can be designed in a variety of ways, but one possible choice is to define it in the spectral domain where w is a non-increasing function such as an idealized low-pass filter w(λ)=1_λ≤ a or setting w(λ)=e^-tλ which corresponds to convolution against the heat kernel.
Additionally, one could consider the filter bank consisting of powers of A, i.e. W^(ℓ)_j=A^j, 1≤ j ≤ J, use a different combine matrix in each channel, and employ a simple cross-channel convolution by setting α_j,i^(ℓ,k)=1. In this case, one obtains a layerwise update rule of the form F^(ℓ+1)=σ(∑_j=1^JA^jF^(ℓ)Θ^(ℓ,j)), which can be thought of as the manifold analog of the higher-order GCNs considered in work such as <cit.>.
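As an illustration, a single MCN layer on a point cloud might be sketched as follows, with A implemented as the heat-kernel spectral filter e^(-tℒ) mentioned above; the graph, dimensions, and time scale t are placeholder choices rather than tuned settings.

```python
import numpy as np

def heat_filter(L_n, t=1.0):
    """Low-pass operator A = w(L_n) with w(lambda) = exp(-t * lambda)."""
    lam, phi = np.linalg.eigh(L_n)
    return phi @ np.diag(np.exp(-t * lam)) @ phi.T

def mcn_layer(A, X, Theta):
    """One MCN layer X -> sigma(A X Theta), here with a ReLU activation."""
    return np.maximum(A @ X @ Theta, 0.0)

rng = np.random.default_rng(0)
W = rng.random((40, 40)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
L_n = np.diag(W.sum(axis=1)) - W            # stand-in graph Laplacian
X = rng.standard_normal((40, 3))            # n = 40 sample points, C = 3 input channels
Theta = rng.standard_normal((3, 16))        # learnable combine matrix (C x C')
H = mcn_layer(heat_filter(L_n, t=0.5), X, Theta)   # H has shape (40, 16)
```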
Similar to the above example, one could also consider the manifold analogs of other popular spectral GNNs such as ChebNet<cit.> or CayleyNet<cit.>. Our framework also includes the manifold scattering transforms.
[Hand-Crafted Scattering Networks]
Let {W_j}_j=1^J be a predesigned collection of filters, which are thought of as wavelets and do not depend on the layer or the input channel. Set the combine and cross-channel convolutions
equal to the identity. One then obtains an entirely predesigned, multilayered network known as the manifold scattering transform. Such networks were considered in <cit.> in order to analyze the stability of and invariance properties of deep learning architectures defined on manifolds, building off of analogous work for Euclidean data <cit.> and graphs <cit.>.
[Learnable Scattering Networks]
For both Euclidean data and graphs, there have been a variety of papers that have introduced learning into the scattering framework.
In the Euclidean setting, <cit.> created a network that acts as a hybrid of the scattering transform and a CNN using predesigned, wavelet filter in some layers and learnable filters in others. Subsequent work by <cit.> introduced learning in a different way, incorporating cross-channel convolutions into an otherwise predesigned network. One may construct an analogous MFCN that corresponds to utilizing a predesigned filter bank {W_j}_j=1^J which is shared across all channels, setting the combine step equal to the identity, and letting α_j,i^(ℓ,k) be learnable. (Traditionally, scattering networks have used |·| as the activation function, but one could readily use other choices instead.)
In the graph setting, <cit.> incorporated learning into the scattering framework by utilizing using predesigned wavelet filters, but learnable combine matrices (along with a few other features to boost performance). In a different approach, <cit.> sought to relax the graph scattering transform by replacing dyadic scales 2^j with an increasing sequence of scales t_j which are learned from data via a selector matrix. To obtain an analogous MFCN, we set W_j=e^-jℒ for 0≤ j ≤ J, which diffuses the input signal over the manifold at different time-scales, corresponding to the diffusion module utilized in <cit.>. We then set the combination step equal to the identity and learn relationships between the diffusion scales via cross-channel convolutions (where the cross-channel convolutions utilized in <cit.> have a certain structure that encourages the network to behave in a wavelet-like manner). Additionally, as has previously been noted in <cit.>, these two forms of learnable geometric scattering are compatible and one could readily utilize learnable combine steps while also using cross-channel convolutions to learn relationships between diffusion scales.
Lastly, we also note that our framework includes simple multilayer perceptrons.
[Multilayer Perceptron]
If one sets J_ℓ=1 and sets both W_1,k^(ℓ) and the cross-channel convolution to be the identity operator then one obtains a simple dense layer that does not utilize the geometry of the manifold. In some sense, this is contrary to our goal of developing networks that utilize the manifold structure of the data. However, including some simple dense layers might nevertheless be useful for, for example, reducing the number of channels in the network.
§.§ Implementation from point clouds
As alluded to earlier, in many applications one does not have global knowledge of the manifold ℳ and merely has access to n data points {x_j}_j=1^n and evaluations of F at those data points. This leads us to recall the normalized evaluation operator
(P_nf)(j)=1/√(n)f(x_j) and
approximate
F by an n× C data matrix 𝐗=(𝐱_1,…,𝐱_C), where 𝐱_k=P_nf_k. One may then implement an approximation of the network via the discrete update rules.
Filtering: 𝐱̃^(ℓ)_j,k=𝐖^(ℓ)_j,k𝐱^(ℓ)_k, 1≤ j ≤ J_ℓ, 1≤ k≤ C_ℓ
Combine: 𝐲_j,k^(ℓ)=∑_i=1^C_ℓ𝐱̃^(ℓ)_j,iθ^(ℓ,j)_i,k, 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ
Cross-Channel Convolution: 𝐲̃^(ℓ)_j,k= ∑_i=1^J_ℓα^(ℓ,k)_j,i𝐲_i,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ
Activation: 𝐳_j,k^(ℓ)=σ∘𝐲̃_j,k^(ℓ), 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ
Reshaping: 𝐱^(ℓ+1)_(j-1)C_ℓ+k = 𝐳^(ℓ)_j,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ
where 𝐖_j,k^(ℓ) is a matrix which acts as a discrete approximation of W_j,k^(ℓ).
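A sketch of one such discrete layer is given below, with every filter implemented spectrally from a shared eigendecomposition of 𝐋_n. The tensor shapes for θ and α, the ReLU activation, and the random placeholder weights are our own illustrative conventions, not trained parameters.

```python
import numpy as np

def mfcn_layer(lam, phi, X, filters, theta, alpha, sigma=lambda z: np.maximum(z, 0.0)):
    """One discrete MFCN layer.

    lam, phi : eigenvalues / eigenvectors of the graph Laplacian L_n
    X        : (n, C) data matrix, one column per input channel
    filters  : filters[j][k] is the scalar function w_{j,k}, for 1 <= j <= J, 1 <= k <= C
    theta    : (J, C, C') combine tensor, theta[j, i, k] = theta^{(j)}_{i,k}
    alpha    : (C', J', J) cross-channel tensor, alpha[k, j, i] = alpha^{(k)}_{j,i}
    """
    n, C = X.shape
    J, Cp, Jp = theta.shape[0], theta.shape[2], alpha.shape[1]
    X_hat = phi.T @ X                                            # Fourier coefficients of every channel
    x_tilde = np.stack([                                         # filtering: x_tilde[j, :, k] = w_{j,k}(L_n) x_k
        np.stack([phi @ (filters[j][k](lam) * X_hat[:, k]) for k in range(C)], axis=1)
        for j in range(J)])
    y = np.einsum('jni,jik->jnk', x_tilde, theta)                # combine step
    y_tilde = np.einsum('kji,ink->jnk', alpha, y)                # cross-channel convolution
    h = sigma(y_tilde)                                           # pointwise activation
    return h.transpose(1, 0, 2).reshape(n, Jp * Cp)              # reshape into an (n, J'C') data matrix

rng = np.random.default_rng(0)
Wg = rng.random((30, 30)); Wg = (Wg + Wg.T) / 2; np.fill_diagonal(Wg, 0.0)
L_n = np.diag(Wg.sum(axis=1)) - Wg
lam, phi = np.linalg.eigh(L_n)
X = rng.standard_normal((30, 3))                                 # C = 3 input channels
J, Cp, Jp = 4, 2, 2
filters = [[(lambda lmbda, t=2.0 ** -j: np.exp(-t * lmbda)) for _ in range(3)] for j in range(J)]
theta = rng.standard_normal((J, 3, Cp))
alpha = rng.standard_normal((Cp, Jp, J))
X_next = mfcn_layer(lam, phi, X, filters, theta, alpha)          # shape (30, Jp * Cp)
```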
The following theorem shows that the discrete implementation will converge to its continuum counterpart in the sense that P_n F^(ℓ)≈𝐗^(ℓ) if the matrices 𝐖_j,k^(ℓ) are designed so that 𝐖_j,k^(ℓ)P_n f_k^(ℓ)≈ P_n W_j,kf_k^(ℓ). For a proof, please see Appendix <ref>.
Let each f_k ∈𝒞(ℳ), and suppose that for all ℓ, there exists ϵ_ℓ,n>0 such that we have
P_nW_j,k^(ℓ)f_k^(ℓ)-𝐖^(ℓ)_j,k𝐱^(ℓ)_k_2 ≤𝐱^(ℓ)_k-P_nf_k^ℓ_2+ ϵ_ℓ,n
for all 1≤ k ≤ C_ℓ.
Let A_1^(ℓ)=max_j,k(∑_i=1^C_ℓ|θ_i,k^(ℓ,j)|), A_2^(ℓ)=max_j,k(∑_i=1^J_ℓ |α_j,i^(ℓ,k)|) and assume that σ is non-expansive, i.e. |σ(x)-σ(y)|≤ |x-y|.
Then,
𝐱_k^ℓ-P_nf_k^ℓ_2≤∑_i=0^ℓ-1∏_j=i^ℓ-1 A_1^(j) A_2^(j)ϵ_i,n.
Notably, Theorem <ref> does not assume the filters are constructed in the spectral domain nor does it assume they have any particular form. It is a general result that shows that if individual filters converge, then so does the multilayer network. Moreover, if the weights α_j,i^(ℓ,k) and θ_i,k^(ℓ,j) are normalized so that A_1^(j)=A_2^(j)=1, then the rate of convergence is linear in the depth of the network. This is in contrast to previous results in <cit.> whose rate of convergence featured an explicit exponential dependence on the depth of the network. (A similar exponential dependence was also encountered in <cit.> where the limiting object is a graphon rather than a manifold.)
Combining Theorem <ref> with Theorem <ref> immediately leads to the following corollary which gives a quantitative rate of convergence for Manifold Filter-Combine Networks constructed utilizing spectral filters when either the filter or the input signals are bandlimited. Notably, if one proves theorems analogous to Theorem <ref> for other classes of filters (constructed either by spectral or not spectral methods)
such as the α-FDT filters considered in <cit.> or the closely related γ-FDT filters considered in <cit.>, then one may immediately obtain similar corollaries.[Such results were obtained for α-FDT filters with specific graph constructions in <cit.>.]
Assume that each W_j,k^(ℓ) is a spectral filter of the form W_j,k^(ℓ)=w_j,k^(ℓ)(ℒ) with w_j,k^(ℓ)_𝐋^∞([0,∞))≤ 1, and the matrices 𝐖_j,k are given by 𝐖_j,k^(ℓ)=w_j,k^(ℓ)(𝐋_n). As in Theorem <ref>,
let A_1^(ℓ)=max_j,k(∑_i=1^C_ℓ|θ_i,k^(ℓ,j)|), A_2^(ℓ)=max_j,k(∑_i=1^J_ℓ |α_j,i^(ℓ,k)|) and assume that σ is non-expansive, i.e. |σ(x)-σ(y)|≤ |x-y|.
Let A^(ℓ)_maxLip=max_j,k A_Lip(w^(ℓ)_j,k).
Assume that there exist sequences of real numbers {α_n}_n=1^∞, {β_n}_n=1^∞, {γ_n}_n=1^∞, with lim_n→∞α_n=lim_n→∞β_n=lim_n→∞γ_n=0, such that
|λ_i-λ^n_i|≤α_n, P_nϕ_i-ϕ_i^n_2≤β_n,
|⟨ P_nf, P_ng ⟩_2 - ⟨ f,g⟩_𝐋^2(ℳ)| ≤γ_n^2fg_𝐋^∞(ℳ),
Assume n is large enough such that (<ref>) holds and α_n,β_n,γ_nκ^1/2≤ 1. Then, the error in each channel of the ℓ-th layer satisfies
𝐱_k^ℓ-P_nf_k^ℓ_2≤∑_i=0^ℓ-1∏_j=i^ℓ-1 A_1^(j) A_2^(j)
C_ℳκmax_k'((A^(i)_maxLipα_n+β_n)f^(i)_k'_𝐋^2(ℳ)+γ_nf^(i)_k'_𝐋^∞(ℳ)).
In particular, if we assume that we have A_1^(j), A_2^(j), A^(i)_maxLip≤ 1, for all i and j we have
𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ((α_n+β_n)max_k',if^(i)_k'_𝐋^2(ℳ)+γ_nmax_k',if^(i)_k'_𝐋^∞(ℳ)).
In <Ref>, we provided several examples of α_n, β_n, and γ_n for three graph constructions. Using <Ref>, we immediately obtain the following three corollaries giving rates of convergence for each of these constructions.
Assume the same conditions on W_j,k^(ℓ), 𝐖_j,k, A_1^(ℓ), A_2^(ℓ), A^(ℓ)_maxLip, and σ as in <Ref>, and assume A_1^(j), A_2^(j), A^(i)_maxLip≤ 1. Assume an MFCN is implemented with a data-driven graph 𝐆_n constructed as in <Ref> with a Gaussian kernel. Then with probability 1 - 𝒪(1/n^9), for large enough n, the error in each channel of the ℓ-th layer of the MFCN satisfies
𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ(√(log(n))/n^2/(d+6)max_k',if^(i)_k'_𝐋^2(ℳ)+ (18log(n)/n)^1/4max_k',if^(i)_k'_𝐋^∞(ℳ)).
Assume the same conditions on W_j,k^(ℓ), 𝐖_j,k, A_1^(ℓ), A_2^(ℓ), A^(ℓ)_maxLip, and σ as in <Ref>, and assume A_1^(j), A_2^(j), A^(i)_maxLip≤ 1. Assume an MFCN is implemented with a data-driven ϵ-graph 𝐆_n constructed as in <Ref>. Then with probability 1 - 𝒪(1/n^9), for large enough n, the error in each channel of the ℓ-th layer of the MFCN satisfies
𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ( ( log(n)/n )^1/d+4max_k',if^(i)_k'_𝐋^2(ℳ)+ (18log(n)/n)^1/4max_k',if^(i)_k'_𝐋^∞(ℳ)).
Assume the same conditions on W_j,k^(ℓ), 𝐖_j,k, A_1^(ℓ), A_2^(ℓ), A^(ℓ)_maxLip, and σ as in <Ref>, and assume A_1^(j), A_2^(j), A^(i)_maxLip≤ 1. Assume an MFCN is implemented with a data-driven k-NN graph 𝐆_n constructed as in <Ref>. Then with probability 1 - 𝒪(1/n^9), for large enough n, the error in each channel of the ℓ-th layer of the MFCN satisfies
𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ( ( log(n)/n )^1/d+4max_k',if^(i)_k'_𝐋^2(ℳ)+ (18log(n)/n)^1/4max_k',if^(i)_k'_𝐋^∞(ℳ)).
§ NUMERICAL EXPERIMENTS
In this section, we compare the performance of three different examples of manifold filter-combine networks on the ModelNet dataset<cit.>. In particular, we focus on the MNN with different learnable filters in each channel (DLF), the MCN, and the manifold scattering transform (Scattering) discussed in Examples <ref>, <ref>, and <ref>. The code for reproducing our experiments is available at <https://github.com/KrishnaswamyLab/mfcn>.
§.§ Data
We used the ModelNet10 dataset which consists of three-dimensional point clouds sampled from various objects belonging to the classes bathtub, bed, chair, desk, dresser, monitor, nightstand, sofa, table, and toilet. Examples of point clouds in the dataset are given in Figure <ref>. For each point cloud, we preprocess the data by scaling the point coordinates (z-scaling), then randomly sample 100 points from the whole point cloud. We then create a graph via the constructions discussed in Examples <ref>, <ref>, and, <ref>, i.e., Gaussian kernels (dense), ϵ-graphs, and unweighted k-NN graphs. We use the x, y, and z coordinates of the nodes as input signals. The ModelNet10 dataset comes with a predefined training set (3901 samples) and test set (799 samples). In our experiments, we randomly select 20% of the training set to use for validation. We then consider two regimes. In the full data regime, we use the entire remaining 80% of for training. In the subset data regime, we randomly select 1000 samples from that 80% to use for training. We repeat this procedure five times and report our accuracies in the format mean ± std.
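The following sketch illustrates this preprocessing for a single point cloud using the k-NN construction; the value of k, the helper names, and the random stand-in cloud are illustrative assumptions rather than the exact experimental settings.

```python
import numpy as np

def preprocess_point_cloud(points, n_sample=100, k=10, seed=0):
    """Z-scale the coordinates, subsample n_sample points, and build a symmetric unweighted k-NN graph.

    Returns the sampled coordinates (used as the input signals x, y, z) and the adjacency matrix.
    """
    rng = np.random.default_rng(seed)
    pts = (points - points.mean(axis=0)) / points.std(axis=0)       # z-scaling of the coordinates
    X = pts[rng.choice(len(pts), size=n_sample, replace=False)]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    eps_k = np.sort(dists, axis=1)[:, k]
    A = ((dists <= np.maximum(eps_k[:, None], eps_k[None, :])) & (dists > 0)).astype(float)
    return X, A

cloud = np.random.default_rng(1).standard_normal((2048, 3))          # stand-in for one ModelNet10 point cloud
signals, A = preprocess_point_cloud(cloud)                           # signals: (100, 3), A: (100, 100)
```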
§.§ Models
In our experiments, we consider three manifold neural network architectures as described below. For each model, we used two layers of manifold networks, followed by a multi-layer perceptron classifier consisting of a single hidden layer. For further details of our hyperparameter settings and training procedures please see Table <ref> in Appendix <ref>.
Scattering
We follow the experimental procedure utilized in <cit.> and compute zeroth-, first-, and second-order scattering moments. More specifically,
for 0≤ j≤ J and 1≤ q≤ Q, we define first-order, q-th scattering moments by
Sf[j,q] := ∫_ℳ|W_jf(x)|^qdx=W_jf_𝐋^q(ℳ)^q,
where W_j are spectral wavelet filters corresponding to the functions w_j(λ)=e^(-2^(j-1)λ)-e^(-2^jλ) for 1≤ j≤ J and w_0(λ)=1-e^-λ.
We define second-order moments, for 0≤ j<j'≤ J, by
Sf[j,j',q] := ∫_ℳ|W_j'|W_jf(x)||^qdx=W_j'|W_jf|_𝐋^q(ℳ)^q.
Zeroth-order moments are defined simply by
Sf[q] := ∫_ℳ|f(x)|^qdx=f_𝐋^q(ℳ)^q.
In our experiments, we set J=8, Q=4 and use the first 20 eigenvalues and eigenvectors of the graph Laplacian to implement the spectral wavelet filters.
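A sketch of this computation on a graph approximation is given below; the integral over ℳ is approximated by an average over the sample points, the wavelets are built from the retained eigenpairs, and the random Laplacian and signal are placeholders.

```python
import numpy as np
from itertools import combinations

def wavelet_ops(lam, phi, J):
    """Spectral wavelet operators W_0, ..., W_J with w_0(lam) = 1 - e^{-lam} and
    w_j(lam) = e^{-2^{j-1} lam} - e^{-2^j lam} for j >= 1."""
    ops = [phi @ np.diag(1.0 - np.exp(-lam)) @ phi.T]
    for j in range(1, J + 1):
        w_j = np.exp(-2.0 ** (j - 1) * lam) - np.exp(-2.0 ** j * lam)
        ops.append(phi @ np.diag(w_j) @ phi.T)
    return ops

def scattering_moments(ops, f, Q):
    """Zeroth-, first-, and second-order scattering moments of a signal f sampled at n points."""
    feats = [np.mean(np.abs(f) ** q) for q in range(1, Q + 1)]                       # zeroth order
    Wf = [W @ f for W in ops]
    feats += [np.mean(np.abs(x) ** q) for x in Wf for q in range(1, Q + 1)]          # first order
    feats += [np.mean(np.abs(ops[jp] @ np.abs(Wf[j])) ** q)                          # second order, j < j'
              for j, jp in combinations(range(len(ops)), 2) for q in range(1, Q + 1)]
    return np.array(feats)

rng = np.random.default_rng(0)
Wg = rng.random((60, 60)); Wg = (Wg + Wg.T) / 2; np.fill_diagonal(Wg, 0.0)
lam, phi = np.linalg.eigh(np.diag(Wg.sum(axis=1)) - Wg)
ops = wavelet_ops(lam[:20], phi[:, :20], J=8)          # first 20 eigenpairs, as in the experiments
features = scattering_moments(ops, rng.standard_normal(60), Q=4)
```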
DLF We used two layers of DLF, where each layer consists of J_ℓ spectral filters (J_1=16, J_2=32). After applying the J_ℓ filters per input dimensions, we combined the channels by summation (i.e., θ^(ℓ,j)_i,k = 1). Similarly, as for scattering, we used the first 20 eigenvalues and eigenvectors of the Laplacian matrix to compute our filters. We used a ReLU activation and the identity map for the cross-channel convolution. We used average pooling at the last layer to obtain the feature vector to be processed by the classifier.
We considered two parameterizations of the filters w(λ), one denoted DLF-MLP, where we parametrize each w(λ) as a 2-layer MLP, and the other denoted DLF-POLY, in which we parameterize each w(λ) as a degree-four polynomial of e^-λ (which is the parameterization utilized in, e.g., <cit.>).
MCN We used two layers of graph convolutional networks with J_ℓ hidden dimensions (J_1=16, J_2=32) applied to the input graph with ReLU activations. As in <cit.>, our low-pass filter was implemented by 𝐀̂=(𝐃+𝐈)^-1/2(𝐀+𝐈)(𝐃+𝐈)^-1/2, which is equivalent to applying the spectral filter w(λ)=1-λ/2 to the normalized graph Laplacian and then utilizing a renormalization trick in order to facilitate the learning process. We used a ReLU activation and the identity map for the cross-channel convolution. We used average pooling at the last layer to obtain the feature vector to be processed by the classifier.
§.§ Results
We compared the performance of the different models and graph construction based on the classification accuracy on the left-out test set. In Table <ref>, we report the mean and standard deviation of the test accuracy across the five different splits (5-folds) for both the full and subset data regimes.
All of the models consistently perform much better than random chance (which is roughly 10% accuracy since there are ten classes) but are all far from 100% accuracy. In particular, in the full data regime, accuracy levels range from 54% to 75% and from 44% to 70% in the subset data regime.
Overall the two versions of DLF are the best performing methods, particularly on the Dense graphs and the Epsilon Graphs. We note that DLF-MLP outperforms DLF-POLY in four out of six cases, but has the drawback of requiring more parameters. On the k-NN graphs, MCN performs nearly as well as DLF, but is the least accurate method on the dense graph construction. Scattering is overall the lowest performing method. However, its performance is the least affected by the number of samples. For instance, on the dense graph construction, it loses four percentage points of accuracy compared to MCN and DLF which lose ten and nine points. This suggests that the wavelet filters are useful geometric descriptors, but that overly hand-crafted networks lack the flexibility to learn from data.
§ CONCLUSION
We have introduced a new framework for analyzing and implementing manifold neural networks that we call manifold filter-combine networks. This framework naturally allows us to think about many interesting classes of MNNs such as the manifold analogs of GCNs and several relaxed variations of the manifold scattering transform. Additionally, we have provided methods for implementing such networks when one does not have global knowledge of the manifold, but merely has access to n sample points, that converge provably to their continuum limit as n→∞. In order to establish this result, we also prove a theorem establishing sufficient convergence conditions for the individual filters used in the network. This result is not specific to any particular graph construction. Instead, it shows that if the eigenvectors and eigenvalues of the graph Laplacian converge (and additionally that discrete inner products converge to continuum inner products) then spectral filters constructed from the graph Laplacian will converge as well. This allows our results to be applied to a wide variety of graph constructions including those discussed in Examples <ref>, <ref>, and <ref>.
The flexibility of our setup is deliberate. The development of manifold neural networks is in its infancy, even compared to graph neural networks, and there are many questions about which networks will perform best in practice. Should networks use learnable filter banks similar to a CNN or predesigned averaging operations similar to a common aggregate-combine network?
Are cross-channel convolutions a viable way to introduce learning in settings where there are no nontrivial relations between input channels?
In this work, we do not claim to provide an answer to the question “what are the best ways to design a manifold neural network?" which ultimately will need to be answered through thorough experimentation. The purpose of this paper is instead to facilitate this experimentation by providing a useful framework for thinking about MNNs. We also note several other important areas of future work. (i) In Examples <ref>, <ref>, and <ref>, we consider settings where the data points {x_i} lie exactly on the manifold and are sampled i.i.d. uniformly at random. Relaxing these assumptions would greatly increase the applicability of our theory to noisy real-world data. (ii) Most of the data sets used in the MNN literature focus on two-dimensional surfaces. Developing challenging and relevant benchmarks for learning on higher-dimensional manifolds would help facilitate the experimental exploration of various MNN architectures.
§ ACKNOWLEDGEMENT
The authors thank Luana Ruiz for helpful discussion that greatly improved the quality of our exposition.
§ THE PROOF OF THEOREM <REF>
We first note that if either w or f is κ bandlimited, we have
w(𝐋_n)P_nf-P_nw(ℒ)f_2
= ∑_i=1^κ w(λ_i^n)⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n - ∑_i=1^κ w(λ_i)⟨ f,ϕ_i⟩_ℳP_nϕ_i_2
≤ ∑_i=1^κ (w(λ_i^n)-w(λ_i))⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n_2+∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n- ⟨ f,ϕ_i⟩_ℳP_nϕ_i)_2.
To bound the first term from (<ref>), we note that by the triangle inequality, the Cauchy-Schwarz inequality, and the assumption that n is large enough so that α_n≤ 1, we have
∑_i=1^κ (w(λ_i^n) - w(λ_i))⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n_2
≤ max_1≤ i ≤κ |w(λ_i^n)- w(λ_i)| ∑_i=1^κP_n f_2 ϕ_i^n^2_2
≤ A_Lip(w)α_n ∑_i=1^κP_n f_2 ϕ_i^n^2_2
≤ A_Lip(w)κα_n P_n f_2
≤ A_Lip(w)κ(α_n f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)),
where we use the fact that ϕ_i^n_2^2=1 and that
P_nf_2≤(f_𝐋^2(ℳ)^2 + γ_n^2f_𝐋^∞(ℳ)^2)^1/2≤f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ).
Now, turning our attention to the second term from (<ref>), we have
∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n- ⟨ f,ϕ_i⟩_𝐋^2(ℳ)P_nϕ_i)_2
≤ ∑_i=1^κ w(λ_i)⟨ P_nf,ϕ_i^n⟩_2(ϕ_i^n-P_nϕ_i)_2
+∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_𝐋^2(ℳ))P_nϕ_i_2.
By the assumption (<ref>), we have ϕ_i^n-P_nϕ_i_2≤β_n.
Therefore, since w is non-amplifying, we see
∑_i=1^κ w(λ_i)⟨ P_nf,ϕ_i^n⟩_2(ϕ_i^n-P_nϕ_i)_2
≤κmax_1≤ i≤κ |⟨ P_nf,ϕ_i^n⟩_2|ϕ_i^n-P_nϕ_i_2
≤κβ_nP_nf_2
≤κβ_n (f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ) ),
where the final inequality follows from (<ref>).
Meanwhile, the second term from (<ref>) can be bounded by
∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_ℳ)P_nϕ_i_2
≤ ∑_i=1^κ |w(λ_i)| |⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_ℳ|P_nϕ_i_2
≤ ∑_i=1^κ |⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_ℳ|P_nϕ_i_2
≤ ∑_i=1^κ |⟨ P_nf,ϕ_i^n⟩_2-⟨ P_nf,P_nϕ_i⟩_2|P_nϕ_i_2+∑_i = 1^κ |⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_ℳ|P_nϕ_i_2.
By the Cauchy-Schwarz inequality, (<ref>), (<ref>), and the assumption that n is large enough so that β_n≤ 1, we have
|⟨ P_nf,ϕ_i^n⟩_2-⟨ P_nf,P_nϕ_i⟩_2| ≤ P_nf_2 ϕ_i^n-P_nϕ_i_2≤β_n(f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))≤(β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)).
And also by (<ref>) we have
|⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_2| ≤γ_n^2f_𝐋^∞(ℳ)ϕ_i_𝐋^∞(ℳ), and P_nϕ_i_2≤ 1+γ_nϕ_i_𝐋^∞(ℳ).
It is known (see, e.g., Appendix L of <cit.> and the references there) that ϕ_i_𝐋^∞(ℳ)≤ C_ℳ i^(d-1)/2d≤ C_ℳi^1/2. Therefore, for all i≤κ the assumption that n is large enough that γ_nκ^1/2≤ 1 implies
|⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_2| ≤ C_ℳγ^2_nκ^1/2f_𝐋^∞(ℳ)≤ C_ℳγ_n, and P_nϕ_i_2≤ 1+γ_n κ^1/2≤ 2.
Therefore, if n is large enough such that γ_nκ^1/2<1, then the second term from (<ref>) can be bounded by
∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2 -⟨ f,ϕ_i⟩_ℳ)P_nϕ_i_2
≤ ∑_i=1^κ |⟨ P_nf,ϕ_i^n⟩_2-⟨ P_nf,P_nϕ_i⟩_2|P_nϕ_i_2 +∑_i=1^κ|⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_2|P_nϕ_i_2
≤ ∑_i=1^κ(β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))P_nϕ_i_2 +∑_i=1^κ C_ℳγ_nf_𝐋^∞(ℳ)P_nϕ_i_2
≤ C_ℳ(κ(β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)) + γ_nκf_𝐋^∞(ℳ))
≤ C_ℳκ( β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)).
Therefore, combining Equations (<ref>) through (<ref>) yields
w(𝐋_n)P_nf-P_nw(ℒ)f_2
≤ ∑_i=1^κ (w(λ_i^n) - w(λ_i))⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n_2+∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n-⟨ f,ϕ_i⟩_ℳP_nϕ_i)_2
≤ A_Lip(w)κ (α_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))+ C_ℳ(κβ_n(f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)) + γ_nκf_𝐋^∞(ℳ))
≤
C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))
thus completing the proof of (<ref>).
To prove (<ref>), we observe that since w_𝐋^∞([0,∞))≤ 1, we have
w(𝐋_n)𝐱-w(𝐋_n)P_nf_2 ≤𝐱-P_nf_2
by the same reasoning as (<ref>).
Therefore, by the triangle inequality, we have
w(𝐋_n)𝐱-P_nw(ℒ)f_2
≤ w(𝐋_n)𝐱-w(𝐋_n)P_nf_2 +
w(𝐋_n)P_nf-P_nw(ℒ)f_2
≤ 𝐱-P_nf_2 +
C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))
as desired.
§ THE PROOF OF THEOREM <REF>
In order to prove Theorem <ref>, we need the following lemma which bounds the error in each step.
The errors induced by the non-filtering steps of our network may be bounded by
𝐲_j,k^(ℓ)-P_ng_j,k^(ℓ)_2
≤max_1≤ i≤ C_ℓ𝐱̃^(ℓ)_j,k-P_nf̃^(ℓ)_j,k_2∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|,
𝐲̃_j,k^(ℓ)-P_ng̃_j,k^(ℓ)_2
≤max_1≤ i≤ J_ℓ𝐲^(ℓ)_j,k-P_n g^(ℓ)_j,k_2∑_i=1^J_ℓ |α_j,i^(ℓ,k)|.
𝐳^(ℓ)_j,k-P_nh^(ℓ)_j,k_2 ≤𝐲̃^(ℓ)_j,k-P_ng̃^(ℓ)_j,k_2
To verify (<ref>), we observe that
𝐲_j,k^(ℓ)-P_ng_j,k^(ℓ)_2
=∑_i=1^C_ℓ𝐱̃^(ℓ)_j,kθ_i,k^(ℓ,j)-P_nf̃^(ℓ)_j,kθ_i,k^(ℓ,j)_2
≤∑_i=1^C_ℓ|θ_i,k^(ℓ,j)|𝐱̃^(ℓ)_j,k-P_nf̃^(ℓ)_j,k_2
≤max_1≤ i≤ C_ℓ𝐱̃^(ℓ)_j,k-P_nf̃^(ℓ)_j,k_2∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|.
The proof of (<ref>) is identical to the proof of (<ref>). For (<ref>), we see that
since σ is non-expansive we have
𝐳^(ℓ)_j,k-P_nh^(ℓ)_j,k^2_2
=∑_i=1^n|
(𝐳^(ℓ)_j,k)(i)-(P_nh^(ℓ)_j,k)(i)|^2
=∑_i=1^n|
(𝐳^(ℓ)_j,k)(i)-h^(ℓ)_j,k(x_i)|^2
=∑_i=1^n|
σ((𝐲̃^(ℓ)_j,k)(i))-σ(g̃^(ℓ)_j,k(x_i))|^2
≤∑_i=1^n|
(𝐲̃^(ℓ)_j,k)(i)-g̃^(ℓ)_j,k(x_i)|^2
=𝐲̃^(ℓ)_j,k-P_ng̃^(ℓ)_j,k^2_2.
It follows from the definition of the reshaping operator that
max_k𝐱_k^(ℓ+1)-P_nf_k^(ℓ+1)_2
= max_j,k𝐳^(ℓ)_p,r-P_nh^(ℓ)_p,r_2.
Therefore, by Lemma <ref> we have
max_k𝐱_k^(ℓ+1)-P_nf_k^(ℓ+1)_2
= max_j,k𝐳^(ℓ)_p,r-P_nh^(ℓ)_p,r_2.
≤ P_ng̃^(ℓ)_j,k-𝐲̃^(ℓ)_j,k_2.
≤ A^(ℓ)_2max_j,kP_n g^(ℓ)_j,k-𝐲^(ℓ)_j,k_2
≤ A^(ℓ)_2A^(ℓ)_1max_j,kP_n f̃^(ℓ)_j,k-𝐱̃^(ℓ)_j,k_2
≤ A^(ℓ)_2A^(ℓ)_1(max_k𝐱_k^(ℓ)-P_nf_k^(ℓ)_2
+ϵ_ℓ,n)
Since 𝐱_0^(ℓ)-P_nf^(0)_k_2=0 for all k,
we may use induction to conclude that
𝐱^(ℓ)_k-P_nf^(ℓ)_k_2≤∑_i=0^ℓ-1∏_j=i^ℓ-1 A_1^(j) A_2^(j)ϵ_i,n.
§ TRAINING AND IMPLEMENTATION DETAILS
We trained all three models by minimizing the cross-entropy loss between predicted probabilities for each of the 10 categories and the ground-truth category of each point cloud. We used the Adam optimizer for 200 epochs with a batch size of 32. The learning rate was selected according to validation performance and was chosen from {0.01, 0.001}. For each model, we used two layers of manifold networks, followed by a multi-layer perceptron classifier consisting of a single hidden layer. The hyper-parameters specific to each model and graph construction scheme are given in Table <ref>.
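The optimization corresponds to a standard supervised loop of the following form (a PyTorch-style sketch; model, the data loaders, and the validation bookkeeping are placeholders rather than the exact code used in our experiments):

import torch

def train(model, train_loader, val_loader, lr=1e-3, epochs=200):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)   # lr chosen from {1e-2, 1e-3} by validation
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for features, labels in train_loader:                  # batches of size 32
            optimizer.zero_grad()
            loss = criterion(model(features), labels)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            correct = sum((model(f).argmax(dim=1) == y).sum().item() for f, y in val_loader)
            val_acc = correct / len(val_loader.dataset)         # used for model selection
    return model, val_acc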
|
http://arxiv.org/abs/2307.04714v1 | 20230710172406 | Global solutions versus finite time blow-up for the fast diffusion equation with spatially inhomogeneous source | [
"Razvan Gabriel Iagar",
"Ariel Sánchez"
] | math.AP | [
"math.AP"
] |
|
http://arxiv.org/abs/2307.04241v1 | 20230709181416 | Light trapping using dimer of spherical nanoparticles based on titanium nitride for plasmonic solar cells | [
"Nowshin Akhtary",
"Ahmed Zubair"
] | physics.optics | [
"physics.optics",
"physics.app-ph"
] |
1 Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
*[email protected]
Light-trapping mechanisms with plasmonics are an excellent way to increase the efficiency of photovoltaics. Plasmonic dimer-shaped nanoparticles are effective in light absorption and scattering, and there is hardly any research on dimer TiN nanoparticle-based PV. This paper demonstrated that titanium nitride could be a suitable substitute for other plasmonic materials in the visible and near-infrared spectrum. We designed a TiN-based spherical dimer plasmonic nanoparticle for photovoltaic applications. We conducted comparison analyses with the metals Ag, Au, and Al to ascertain the performance of TiN as a plasmonic material. Silicon had an average absorption power of ∼19%, and after incorporating TiN nanoparticles, the average absorbed power increased significantly to ∼75% over the whole spectral range. The TiN dimer nanoparticle had the highest absorption cross-section, with a Q_ab value of ∼6.2 W/m^2, greater than those of Ag, Au, and Al, and a fraction of light scattered into the substrate greater than those of Au and Al and comparable to that of Ag. The TiN dimer exhibited better absorption enhancement, g, over the whole spectral range than the Ag, Au, and Al dimers for a radius of 15 nm, with a peak value greater than 1. The maximum optical absorption efficiency of the plasmonic TiN nanostructures was ∼ 35.46%.
§ INTRODUCTION
Coal, natural gas, oil, biomass, and nuclear energy are non-renewable energy sources that are becoming scarcer and more depleted every day. Therefore, the abundance, affordability, and low environmental impact of renewable energy sources are all very advantageous. The efficiency of photovoltaic (PV) cells powered by renewable energy sources has been a great focus of research<cit.>. Light absorption and the frequency of electron-hole pair formation have a significant role in how well PV cells perform. The most common PV cells in CMOS technology are based on a silicon absorber layer with the limitation of high production costs and thinner absorption volume. There are many ways of increasing efficiency, such as surface texturing<cit.>, metal nanograting <cit.>, tandem structure <cit.>, optical absorption enhancement by increasing the effective optical path length or trapping light in the cell by introducing light scatters in the solar cell <cit.>. A suitable choice of materials for active layers can ensure better photon absorption, generating electron-hole pairs. However, only efficient absorption cannot generate efficient electron-hole pairs and, consequently, photo-voltage. The recombination process creates a loss of charge carriers; therefore, an optically thick semiconductor is unsuitable for better charge carrier separation. Additionally, more materials are needed for thicker semiconductors, which is cost-ineffective and wasteful. Thus, a thinner semiconductor layer is preferred.
Introducing metallic nanoparticles (NPs) has created an alternative approach to improving absorption efficiency. Surface plasmon resonance (SPR) can significantly enhance EM waves by placing plasmonic structures in the active layer. This phenomenon ensures enhanced light absorption providing strong scattering between the intense plasmon field and the active layer <cit.>. Localized surface plasmon resonances (LSPRs) generate light scattering by NPs. The LSPR occurs when the frequency of the optical photon coincides with the natural frequency of the collective vibration of conduction electrons in NPs, leading to strong near-field electromagnetic enhancement, acute spectral absorption, and scattering peaks <cit.>. The enhancement of optical absorption of NPs is the foremost attribute of LSPR <cit.>. Light absorption and photo-current have been improved by using the LSPR phenomena <cit.>. Much recent research has centered on the trade-off between optimal thickness and maximum field enhancement. The most significant improvement variables ordinarily happen when the junction between the absorber and NPs is illuminated by polarized light. Complex structures can achieve a sensitivity that leads to near-infrared (NIR) sensing and plasmon hybridization. The NPs structure can get over this restriction since it extends the inside field to the outside environment, which results in a considerable boost in detection sensitivity.
Plasmonic materials can support electrons or plasmons across a broad spectrum from infrared to ultraviolet solar radiation. Until recently, researchers were confined to noble metals like Ag and Au as plasmonic material. Ag and Au are frequently used plasmonic metals and optical metamaterials because of their strong DC conductivity or low resistive losses. While an electron in a metal's valence band absorbs a photon to jump to the Fermi surface or while an electron close to the Fermi surface absorbs a photon to fill the ensuing unoccupied conduction band, there is confinement for plasmonic metals, causing an excessive loss in conventional plasmonic materials. Ordinary metals have several drawbacks, including the size of the genuine portion of the permittivity, the ineffectiveness of tuning or balancing the optical properties of metals, and their high cost.
Due to the high optical loss of metals, alternative metals with the least ohmic loss may be preferred for plasmonic devices. To reduce the interband transition loss, many reports frequently utilized alternative plasmonic NP <cit.>. Conventional plasmonic materials have many shortcomings, leading researchers to seek better alternatives. The alternative plasmonics has the real permittivity of the same order. Hence, geometric fractions lights can be promptly tuned to coordinate the plan prerequisites. Conventional plasmonic metals confront debasement when exposed to air/oxygen or moisture, causing further problems in device fabrication and integration. These criteria directly affect optical properties and increase optical loss, resulting in more significant values of the dielectric function's imaginary part and rendering it incompatible with conventional silicon fabrication methods. Metal nitrides are a better alternative to overcome the shortcomings. Among them, titanium nitride (TiN) is a non-stoichiometric, interstitial compound with a high concentration of free carriers. It is refractory and steady, and its optical properties can be tuned by changing its geometric structure <cit.>. Moreover, it is consistent with silicon CMOS technology <cit.> and offers manufacturing and integration advantages that can help overcome the challenges. There are several reports on monomer spherical and hemispherical TiN NPs <cit.>. However, no dimer spherical TiN NP-based plasmonic solar cells have been reported.
This paper employed the finite-difference time-domain (FDTD) method to systematically investigate the scattering cross-section and absorption enhancement by spherical dimer TiN NPs for photovoltaic application.In order to ascertain the total scattering cross-section, the percentage of light scattered into the substrate, the absorption cross-section, and the spatial mapping of the electric field in this plasmonic nanosystem, we first built and optimized the dimer of the spherical NPs. We further investigated the effect of polarization sensitivity on the source. We gained insight into how the shape of the NPs enhanced the functionality of solar cells when the NPs were embedded into them. We investigated plasmonic core-shell configuration and analyzed the effect of dielectric coatings on NPs. Our work provided insights into using TiN in photovoltaic cells.
§ METHODOLOGY
§.§ Structural Design
We developed an alternative plasmonic material, TiN-based spherical dimer NP on a semi-infinite crystalline silicon substrate as can be seen in Fig. <ref> (see Fig.S1 of Supplement 1). In the visible and longer wavelengths, TiN displays localized surface plasmon phenomena and metallic characteristics. <cit.>. The plasmonic particles were separated from the semi-infinite silicon absorption layer by a thin Si_3N_4 layer as surface passivation. We compared cross-sections of NPs based on conventional noble plasmonic metal with TiN alternative plasmonic NPs. The size of the particles was varied, and their properties were analyzed. t_1 and t_2, respectively, presented the thin film and substrate thickness. The source's polarization angle represented by θ, r represented the radius of the sphere, and d represented the distance between the nanospheres of a dimer.
§.§ Simulation Methods
We applied the FDTD method, where Maxwell's equations were solved numerically, to study the mentioned nanosystems. The simulation dimensions of the FDTD were 1.2 μm × 1.2 μm × 1.25 μm. A mesh size of 0.4 nm was applied around the NPs. The source was adjusted for polarization perpendicular to the surface normal of the particles from the air side. The particles were incident to the total-field scattered-field (TFSF) plane wave along the negative z-axis. The incident source was a uniform wave with a prominent wavelength range of 550–1100 nm, which comprised the solar spectrum's highest feasible irradiance (AM 1.5). A plane wave with a TFSF was utilized to separate the incident field from the scattered field to examine the optical characteristics of NPs. The scattering characteristics were investigated using an external monitor. The spatial electric field mapping was performed by adjusting a frequency-domain power monitor. We used light scatterers to increase the light trapping efficiency, improving the absorber layer's absorption. We estimated the scattering and absorption cross-sections, and optimal values were obtained for better PV application by adjusting various factors. The electric and magnetic fields around the particle were calculated by converting the time domain into the frequency domain using a Fourier transform. The radial Poynting vector, S(ω), was calculated from the electric field, E(ω), and magnetic field H(ω) as a function of angular frequency, ω. In the scattered field region, the total of power P_s was determined along the +x, +y, +z, –x, –y, and –z directions. The ratio of the power in the scattered field region inside the substrate of the absorber layer to the power in the scattered field region in the air and the absorber layer is known as the percentage of light scattered into the substrate, f_sub. The total scattering cross-section, Q_T(ω) is defined as the sum of the power per unit area scattered in all directions divided by the power per unit area of the incident beam.
Q_T(ω)= P_s(ω)/I(ω).
Here, I(ω) is incident power intensity as a function of ω. Absorption cross-section is a measure of the probability of an absorption process. The total absorbed power divided by the power per unit area of the incident light was defined as absorption cross-section, Q_ab. It can be calculated from
dN/dz=-NnQ_ab
Here, dN/dz is the number of photons absorbed between the points z and z+dz, N is the number of photons penetrating to depth z, and n is the number of absorbing molecules per unit volume. The monitors outside the TFSF source determined the scattering cross-section. <cit.>.
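Equation (2) integrates to a simple Beer–Lambert attenuation law. A short numerical sketch is given below; the particle density and cross-section values used in the example are purely illustrative assumptions, not fitted quantities.

import numpy as np

def transmitted_fraction(z, n_density, Q_ab):
    # dN/dz = -N * n * Q_ab  =>  N(z)/N(0) = exp(-n * Q_ab * z)
    return np.exp(-n_density * Q_ab * z)

# example: n = 1e18 m^-3, Q_ab = 1e-14 m^2, z = 1 um  ->  exp(-0.01) ~ 0.99
print(transmitted_fraction(1e-6, 1e18, 1e-14))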
We methodically considered the impact of NPs' Q_T, the light scattered into the substrate Q_sc, f_sub, and Q_ab by investigating their structural characteristics. We compared the effects of alternative plasmonic NPs to plasmonic metals and comprehensively considered the capacity to enhance absorption within an absorber layer by adding NPs. Moreover, to demonstrate the effectiveness of the dimer NPs in PV cells, we calculated the proposed structure's absorption enhancement and light absorption efficiency.
§ RESULTS AND DISCUSSION
§.§ Effect of different material-based plasmonic spherical dimer nanoparticle
We simulated spherical dimer NPs for different materials and tracked their scattering and absorbance behavior. Here, t_1, t_2, and r were regarded as 30 nm, 250 nm, and 100 nm, respectively. We evaluated the performance of NPs made of different materials, including Au, Ag, Al, and TiN. Moreover, we explored a core-shell configuration consisting of TiN NP with Si_3N_4 coating to maximize the performance. The foremost critical factor representing the path length enhancement of a scattering light-trapping structure is f_sub <cit.>. As can be seen in Fig. <ref> (a), the overall f_sub exhibited Ag> Au> TiN> TiN with Si_3N_4 coating> Al in this order. For 800 nm to 1100 nm wavelength, the f_sub of TiN NP was greater than those for Ag, Au, and Al-based NPs. When we varied the dimer materials, the peak value of Q_T was 19 W/m^2 for Au NP, and the values for Ag and Al were comparable to Au. TiN NP had comparable values of Q_T and Q_sc from 650 to 1000 nm wavelength. After adding Si_3N_4 coating on TiN NP, the Q_T and Q_sc performed better for 850 to 1000 nm, as can be seen in Figs. <ref>(b)-(d). For scattering applications, Si_3N_4 coating can be used for TiN NP. Q_ab increased for TiN and TiN with Si_3N_4 coating NPs. The Q_ab was negligible for Ag, Au, and Al NPs.
When a plane wave collides with an object or scatterer, its energy is diverted in all directions. It is crucial to analyze the optical properties of NPs, including scattering cross-section and electric field distribution. By changing the spherical dimer NPs' material, shown in Fig. <ref> electric field maps in the xy plane were detected. LSPR modes can be produced by these structures. The plasmonic NP dimers are the equivalent of two atoms sharing electrons by bonding molecular orbitals. A dimer's excited dipoles on its two spheres may couple in the direction of the dimer axis, which is analogous to σ-type orbital for atoms or perpendicular to it, which is analogous to π-type orbital for atoms. Four additional plasmon modes emerge for dimers, which are homonuclear diatomic molecules equal to molecular hydrogen(H_2), nitrogen (N_2), oxygen (O_2), or a halogen (X_2)<cit.>. As shown in Fig. <ref>(f), when the charge of the dimers oscillates in the same direction, the charge accumulates, and electric field enhancement is observed. This phenomenon occurs both in bonding mode and anti-bonding mode, and they are in-phase antibonding with the highest energy and in-phase bonding with dipolar plasmon mode with the lowest energy (highest wavelength) (see Fig. <ref>(f)). Additionally, when the charge oscillates in different directions, there is no field enhancement, which is out-of-phase bonding and antibonding modes <cit.>. As can be seen in Figs. <ref>(d)-(e), the scattering spectra TiN and TiN with a dielectric Si_3N4 coating exhibited an unprecedented homogeneity for the two spheres. The in-phase bonding plasmon mode was observed for x-polarized light
The induced dipole moments resulted in two bright modes. Therefore, the accumulation of high free charge distribution around the surface and center of the dimer resulted in the enhancement of the electric field.
The quasistatic dipole approximation was used to compute the electric field (E) enhancement in the yz and xy planes around the surface of Ag, Au, Al, and TiN dimers presented in Fig. <ref>. Due to a lower real permittivity than Ag and Au, the magnitude of the field enhancement in TiN nanospheres was slightly smaller than those of Ag and Au. E-field intensity on the yz plane at x = 0, which is the center of the dimer, can be seen in Fig. <ref> for the different material-based dimer nanospheres. For Ag, Au, and TiN with Si_3N_4 coating, charge distribution was high at the center of the dimers as presented in Figs. <ref>(a), (b) and (e). For Al and TiN, charge distribution at the center was less as compared to Ag and Au, as can be seen in Figs. <ref>(c) and (d). Here, the charge oscillates in the same direction, so the charge accumulated and field enhancement occurred. The resulting LSPR modes were utilized in numerous detection applications <cit.>. The LSPR mode of TiN with Si_3N_4 coating in the core-shell configuration was blue-shifted, and the peak value of the electric field increased compared to the bare TiN dimer.
To determine the effectiveness of the NP in the photovoltaic cells, we calculated the absorbed power of each layer of nanostructure consisting of dimer spherical NPs on a 30 nm thin Si_3N_4 underlayer on Si substrate. NPs were comprised of Ag, Au, Al, and TiN. The divergence of the Poynting vector was used to compute the absorption per unit volume as given by,
p_abs=-1/2 real
(∇⃗ . S).
However, divergence calculations are frequently quite susceptible to numerical errors. Consequently, the simplest method for calculating absorbed power is,
p_abs=-1/2 real (iω E⃗. D^*).
It can be modified as
p_abs=-1/2 ω |E|^2 imag(ϵ).
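A sketch of evaluating this quantity on exported field data is given below, writing the permittivity as ε = ε_0 ε_r and keeping the magnitude of the expression above; the grid spacing and array shapes are assumptions about how the monitor data are stored.

import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C0 = 2.99792458e8         # speed of light, m/s

def absorbed_power_density(E, eps_r, wavelength):
    # |p_abs| = 0.5 * omega * eps0 * Im(eps_r) * |E|^2  (per unit volume)
    omega = 2.0 * np.pi * C0 / wavelength
    return 0.5 * omega * EPS0 * np.imag(eps_r) * np.abs(E)**2

def total_absorbed_power(E, eps_r, wavelength, dV):
    # sum the density over the monitor grid; dV is the cell volume
    return np.sum(absorbed_power_density(E, eps_r, wavelength)) * dV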
Here, D is the electric displacement field, and ϵ is the permittivity. Standard 1.5 ATM solar spectrum and absorption of TiN NP-based solar cell are presented in Fig.<ref>(f). Solar light absorption is very efficient for wavelengths from 300 nm to 500 nm. For longer wavelengths than 500 nm, the absorption decreased gradually. Silicon had an average absorption power of ∼51% over the 400-500 nm range and ∼19% for the whole spectral range as presented in Fig. <ref>. The Si_3N4 layer enhanced the absorption for the 400-500 nm spectral range. After the incorporation of TiN NPs, the absorbed power increased significantly to ∼75% over the whole spectral range as presented in Fig. <ref>(d). In comparison to Ag, Au, and Al NPs, TiN NP integration had a substantially higher absorption power.
§.§ Absorption enhancement by TiN-based dimer NP
The absorption enhancement quantifies the increase in absorption due to the addition of NPs in the solar absorber layer. The absorption enhancement spectrum, g, is given by,
g(λ)=EQE_np(λ)/EQE_bs(λ).
Here, EQE_np is the device's external quantum efficiency when plasmonic nanoparticles are incorporated on top of a substrate, and EQE_bs is the external quantum efficiency of a bare substrate. In this section, t_1 and t_2 were taken to be 30 nm and 250 nm, respectively. We simulated the spectra of g for Ag, Au, Al, and TiN plasmonic dimers on a silicon substrate for r = 15 nm, 25 nm, and 30 nm, presented in Figs. <ref>(a)-(c). TiN exhibited better absorption enhancement than the Ag, Au, and Al dimers for r = 15 nm. For r = 25 nm, the g of TiN was ∼1 for the whole spectral range, which was better than Au and Al, and ranged between 0.9 and 1. For r = 30 nm, the g of TiN was better than Al and comparable to Ag and Au for 400 to 800 nm, and the g was greater than Ag, Au, and Al for the range 800 nm to 1100 nm. For TiN, Al, Au, and Ag, the average enhancement, G, was found to equal 0.997, 0.991, 0.995, and 0.995, respectively.
The light absorption efficiency (LAE) was calculated by,
= ∫_400^1100 I(λ) A(λ) dλ/∫_400^1100 I(λ) dλ.
Here, I(λ) is the incident light intensity. And, absorbance, A(λ) was calculated by,
A (λ) = 1 - R (λ) - T (λ).
Here, T(λ) and R(λ) are the transmittance and reflectance of the structure. For the TiN plasmonic nanosphere on a kesterite substrate, we determined LAE. The values of LAE for r = 15 nm were found to be 35.46% and 33.78% for TiN and Ag respectively.
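A sketch of the LAE evaluation by numerical quadrature is given below, assuming the AM1.5 intensity and the simulated R, T spectra are sampled on a common wavelength grid in nm:

import numpy as np

def light_absorption_efficiency(wl_nm, I_am15, R, T):
    # A(lambda) = 1 - R - T;  LAE = int I*A dlambda / int I dlambda over 400-1100 nm
    m = (wl_nm >= 400) & (wl_nm <= 1100)
    A = 1.0 - R[m] - T[m]
    return np.trapz(I_am15[m] * A, wl_nm[m]) / np.trapz(I_am15[m], wl_nm[m])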
We calculated g for different radii of the TiN plasmonic dimer on a silicon substrate, as can be seen in Fig. <ref>(d). The g decreased with increasing r across the wavelength range, which agrees well with the previous study <cit.>. This behavior results from the plasmonic NPs' strong forward scattering and weak absorption of light at the different radii. While backward scattering prevented absorption, forward scattering promoted it. Spherical dimer plasmonic NPs with larger radii often have larger scattering cross-sections; this increase in NP size and the excitation of higher-order modes can control the light scattering, which enhances or diminishes the light absorption into the substrate <cit.>.
§.§ Impact of light polarization
As the polarization of light influences light scattering or absorption, we varied the polarization angle of the light source to compute the optical characteristics of the NPs <cit.>. For the structure design with optimized scattering and absorption, we changed the source's polarization angle for dimer spherical TiN NPs and found the optimal polarization angle, θ. Here, t_1 and r were regarded as being 30 nm and 100 nm, respectively. The theta had been varied from 0^∘ to 90^∘. For the wavelength ranging from 550 to 760 nm, the value of f_sub decreased with the increase of θ which is apparent from Fig. <ref>(a). With a reduction in θ, f_sub remarkably dropped for the wavelength range of 760 nm to 1100 nm. As the theta was raised, it was evident from Fig. <ref>(b) that Q_T increased significantly. The peak wavelength of Q_T had a red-shifted spectrum. As can be seen from Fig. <ref>(c), Q_sc increased as the θ was increased and from Fig. <ref>(d), Q_ab increased as the angle decreased from 90^∘ to 15^∘.
Figs. <ref>(a)–(f) represented the E-field intensity for the dimer NP as θ was varied. The polarization angle defines the direction of the electric field and magnetic field. When the light was polarized in the x-direction (i.e., θ=0), field enhancement was observed along the x-axis. As the angle changed, the induced dipole changed with θ and, consequently, in-phase bonding and antibonding plasmon modes occurred accordingly <cit.>. At θ=15 ^∘, the charges of the two spheres oscillated in the same direction, so charge accumulated and electric field enhancement was observed. This phenomenon occurred both in bonding mode and anti-bonding mode when they were in-phase plasmon modes. The direction of charge oscillation changed with θ, producing an induced dipole moment <cit.>. Figs. <ref>(a)-(c) illustrated the strong electric field enhancement observed in the x-direction when the charge oscillation was increasingly aligned to the x-direction. As the polarization angle rose from 45^∘ to 90 ^∘, strong electric field enhancement was observed in the y-direction as the charge oscillation alignment changed to the y direction, and the in-phase antibonding mode was observed, as illustrated in Figs. <ref>(d)-(f).
E-field intensity on the yz plane represented the center of the dimers, can be seen from Figs. <ref>(a)-(f) with the variation of θ. When θ increased from 15^∘ to 45^∘, the effective induced dipole moment decreased, which originated the in-phase antibonding mode, due to the E-field polarization alignment changed toward y direction presented in Figs. <ref>(a)-(c). When θ increased from 60^∘ to 90^∘, the charge was distributed along the y-direction, which was the polarization angle of the E-field. Consequently, the out-of-phase antibonding mode was apparent when θ was near 90^∘, as can be seen in Figs. <ref>(d)-(f).
§.§ Impact of the distance between the spheres of dimer
We considered the impact of changing the distance between the spheres of TiN dimer NPs for the light source polarization angle at 0^∘ and 30^∘. We simulated dimer spherical NPs with various distances between the spheres for θ=0^∘ as can be seen from Fig. <ref>. We varied the distance, d from 0 nm to 50 nm to determine the structure for the optimized scattering cross-section. Here, t_1 and t_2 were considered 30 nm and 250 nm, respectively. The f_sub decreased with the increase of the d from d = 0 nm to 20 nm for 550 nm to 850 nm. When d = 50 nm the distance between the dimers increased, and they started to behave like a single sphere, As a result, f_sub increased. When d = 0 nm, the Q_T spectrum was the lowest and increased as d increased. The Q_sc decreased as d increased from 0 to 20 nm for the 550 nm to 820 nm range. For longer wavelengths than 820 nm, Q_sc increased with the increase of d. Q_ab was highest when d = 10 nm. For the wavelength 550 to 750 nm, the Q_ab decreased with the increase of d. For the dimers with d = 50 nm, as the distance increased, it started to behave like two independent monomer nanospheres.
We simulated a spherical dimer NP for 30^∘ the polarization angle of the source varying the distance between the spheres as can be seen in Fig. <ref>. Here, t_1 and t_2 were considered 30 nm and 250 nm, respectively. The f_sub and Q_sc were comparatively higher for the whole spectral range for d = 0 nm and lowest for d=20 nm. When d was smaller than 50 nm the f_sub and Q_sc increased with the decrease of d, and Q_T increased with the increase of d.
The Q_T was highest for d = 10 nm and performed better for the wavelength range 740 to 880 nm. Dimers performed better when there was no distance between the spheres. For the polarization angle from 0^∘ to 30^∘, the scattering spectra red-shifted.
§.§ Dependency on the radius of the dimer spherical NP
We simulated spherical dimer NPs with various radii and observed their optical characteristics, as seen from Fig. <ref>. To find the best structure for scattering cross-section, we varied the r from 50 nm to 120 nm. When the radius was 50 nm, Q_T was highest and the peak was almost 50 W/m^2.
The most important element to consider when modeling a light-trapping structure's increased path length is f_sub <cit.>. As the radius increased, the value of f_sub decreased significantly <cit.>. For the whole spectral range, f_sub did not vary appreciably at 50 nm and 70 nm radius. This made it possible to efficiently couple the part of the scattered light with a high in-plane wave vector that is transient in the air but engendered in silicon. As can be seen in Figs. <ref>(b)-(c), Q_T and the Q_sc decreased as the r increased from 50 nm to 120 nm. Q_ab exhibited the highest value for the whole spectral range for r = 100 nm and the peak was at 5.5 W/m^2 as can be seen from Figs. <ref>(d). Therefore 100 nm radius is optimum for dimers applications.
§ CONCLUSIONS
Dimers of spherical TiN NPs appear as an efficient alternative plasmonic material for plasmonic and metamaterial applications. The results of our investigation into the effect of TiN dimer spherical NPs on the enhancement of the thin solar cell's light absorption were promising. After the incorporation of TiN NPs on a silicon substrate, the average absorbed power increased significantly from ∼19% to ∼75% over the whole spectral range. TiN exhibited better absorption enhancement, g, and percentage absorbed power compared to the Ag, Au, and Al dimers for r = 15 nm. The average enhancement, G, for TiN, Au, and Ag was found to be 0.9972, 0.9953, and 0.9954, respectively, for r = 15 nm. The TiN dimer NP had the highest Q_ab value of ∼6.2 W/m^2, which was greater than those of Ag, Au, and Al. By changing the size of TiN dimer NPs, the absorption enhancement peak may be tailored to the required solar spectrum. TiN dimer NPs are expected to be beneficial when inserted in tandem solar cells because of their cost-effectiveness, abundance, and ease of manufacture.
Funding
A.Z acknowledges the Basic Research Grant (Sonstha/R-60/Ref-4747) provided by the Bangladesh University of Engineering and Technology.
Acknowledgments
N.A. and A.Z. acknowledge the technical support of the Department of Electrical and Electronic Engineering at Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh, for the completion of the work.
Disclosures
The authors declare no conflict of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
Supplemental document
See Supplement 1 for supporting content.
|
http://arxiv.org/abs/2307.06177v1 | 20230712140412 | Smart Infrastructure: A Research Junction | [
"Manuel Hetzel",
"Hannes Reichert",
"Konrad Doll",
"Bernhard Sick"
] | cs.CV | [
"cs.CV"
] |
Smart Infrastructure: A Research Junction
Manuel Hetzel1,
Hannes Reichert1,
Konrad Doll1,
Bernhard Sick2
1University of Applied Sciences Aschaffenburg, Germany
Email: {manuel.hetzel, hannes.reichert, konrad.doll}@th-ab.de
2University of Kassel, Germany
Email: [email protected]
August 12, 2023
Complex inner-city junctions are among the most critical traffic areas for injury and fatal accidents. The development of highly automated driving (HAD) systems struggles with the complex and hectic everyday life within those areas. Sensor-equipped smart infrastructures, which can communicate and cooperate with vehicles, are essential to enable a holistic scene understanding to resolve occlusions drivers and vehicle perception systems for themselves can not cover. We introduce an intelligent research infrastructure equipped with visual sensor technology, located at a public inner-city junction in Aschaffenburg, Germany. A multiple-view camera system monitors the traffic situation to perceive road users' behavior. Both motorized and non-motorized traffic is considered. The system is used for research in data generation, evaluating new HAD sensors systems, algorithms, and Artificial Intelligence (AI) training strategies using real-, synthetic- and augmented data. In addition, the junction features a highly accurate digital twin. Real-world data can be taken into the digital twin for simulation purposes and synthetic data generation.
§ INTRODUCTION
One of the most challenging locations for drivers and HAD systems are inner-city junctions. Extensive traffic density highly restricts the field of view (FOV) of drivers and vehicle-based perception systems. Towards reliable HAD systems, it is mandatory to investigate to what extent these restrictions can be compensated from a vehicle perspective. Intelligent junctions equipped with sensors are already used to cope with these restrictions cooperatively <cit.> <cit.> <cit.>. Moreover, one can use them to evaluate the perception capabilities of vehicle-only HAD systems. Carefully matched-up sensor positions, alongside empiric perception models can dissolve almost all occlusions, allowing a seamless scene understanding at complex junctions. Empirical approaches are necessary to satisfy these use cases. These topics are part of the cooperative research project AI Data Tooling <cit.>. The project will develop and investigate holistic tools and methods for providing data of different sensor modalities for AI-based functions. The aim is to develop a complete data solution for the training and validation of AI-based automated driving functions by integrating real data, synthetically generated data, and augmented data as a mixture of these two and methods for the efficient handling of this data set. The presented sensor setup facilitates a multi-view perception of traffic participants, with a broad area coverage among the junction. This publication focuses on installation, data flow, and research targets of intelligent infrastructure at a junction.
Moreover, it underlines the possibilities for future research within road safety and training strategies using a mixture of real-, synthetic- and augmented data. The publication is structured as follows: First, we review comparable research junctions in Sec. <ref>. Second, we discuss the requirements and analysis for developing the infrastructure perception system in Sec. <ref>. Next, the system architecture, data recording, and digital twin are described in Sec. <ref>, followed by challenges and research targets in Sec. <ref>. Finally, we summarize the current status in Sec. <ref>.
§ STATE OF THE ART
This chapter references other intelligent junctions used for vulnerable road users' (VRUs) safety and reviews the research work carried out on data acquired by those. Several environmental observing junctions have been proposed. However, the main focus of the majority of research is carried out on high-level traffic flow understanding <cit.>. In contrast, some research targets VRU safety topics, which require high-resolution sensing technologies. One of them was introduced in 2012 for the German Ko-PER project <cit.>. A combination of laser scanners and gray-scale cameras is used to monitor the traffic of the whole junction in general. In addition, a precise 90-degree stereo camera setup using two gray-scale full HD cameras has been used to detect and predict VRUs behavior crossing the street, focusing on one corner of the junction. Furthermore, this setup is used for the DeCoInt^2 <cit.> project with the research target of detecting intentions of VRUs based on collective intelligence, focusing on cyclists. The DeCoInt^2 project covered two major research areas: perception, and motion anticipation, both under the cooperative aspect between static mounted sensors and mobile research vehicles. For motion anticipation, Reitberger et al. provided a cooperative tracking algorithm for cyclists <cit.>, Bieshaar et al. used Convolutional Neural Networks to detect starting movements of cyclists <cit.>, and Zernetsch et al. developed a probabilistic VRU trajectory forecasting method <cit.>. Kress et al. used this sensor setup as a reference to evaluate a human keypoint detection model deployed to a mobile research vehicle <cit.>. It is worth mentioning that this sensor setup and the knowledge from the Ko-PER and DeCoInt^2 projects was utilized for the development of the novel proposed sensor setup.
A comparable junction is located in Braunschweig, Germany, serving as a field instrument for detecting and assessing traffic behavior. The junction can provide trajectory data of road users, acquired by multi-modal sensor setups. Mono cameras and radar are utilized for the 3D detection of vehicles. For VRU detection, multiple binocular stereo camera setups facing the pedestrian crossings are used. <cit.>
Since 2019 Continental operates two intelligent junctions in public use in Auburn Hills, Michigan. The systems are used to improve traffic flow, reduce pollution, and increase the junction’s safety by communicating hidden dangers to approaching connected vehicles and pedestrians. Camera and radar sensors are used to create an environment model providing information about road users, traffic infrastructure, static objects to connected vehicles using infrastructure-to-everything (I2X) communication <cit.>.
§ REQUIREMENTS AND ANALYSIS
The requirements for the original Ko-PER junction introduced in 2012 are derived from intensive analysis of accident scenarios which occur at junctions <cit.>. The analysis is based on the German In-Depth Accident Study (GIDAS) database <cit.>. It contains more than 20,000 registered accidents in the area of Hannover and Dresden since the year 1999. According to GIDAS, Goldhammer et al. clustered a total of 29 types of relevant intersection accidents, focusing on pedestrians, into five scenarios, covering 74.8% of severe and lethal accidents. Accident scenarios are complex due to many different influences like the quickly changing number and variety of road users, complex intersection layout, speed ranges, and different directions from which traffic may approach. 71 % of all accidents involving pedestrians and 58 % of all accidents involving cyclists in Europe happen inside urban areas, mostly at intersections and crosswalks <cit.>. Furthermore, over 60 % of all fatalities occurring at junctions are of VRUs. We used these results as a baseline for our requirement analysis. According to GIDAS, inter-vehicle and vehicle-to-pedestrian accident rates at junctions are continuously decreasing, from 43% in 2010 to 33% in 2020. This development illustrates the impact of improved driver assistance systems. In contrast, vehicle to bicycle, vehicle to "others", and "inter-VRU" accidents increased, from 42% in 1999 to 67% in 2020. The group "others" represents road users like electric bikes and scooters, still belonging to VRUs. Based upon this development and in contrast to Ko-PER or DeCoInt^2, we are focusing on VRUs in general (i.e. pedestrians, cyclists and "others").
For accurate VRU detection and motion anticipation, high-resolution image-based sensors are essential. For determining positions of VRUs, seamless stereoscopic coverage of the VRU relevant areas at the junction is necessary. To further minimize occlusions, the sensors need to be mounted several meters above street level.
In addition, seasonal and weather conditions might be challenging for image-based sensors. Thus, we need to monitor the weather and analyze the impact of weather effects on image-based perception.
For AI-based perception and prediction methods, so-called corner cases that are critical and rarely occurring are essential. Corner cases can efficiently be generated using simulations. A precise environmental model, including textures and material properties, is required to simulate situations and scenarios at the junction as accurately as possible.
A suitable image acquisition frame rate is required to deal with a wide range of possible velocities for both motorized and non-motorized road users. We aim to maximize the frame rate while keeping both amount of data and computational load manageable.
§ SYSTEM DESCRIPTION
The junction consists of the main road with five lanes and a daily traffic volume of 30,000 vehicles. Fig. <ref> gives a schematic overview of the junction's topology. The main road has two straight-ahead lanes at the junction area and a separate left-turn lane for each direction. The more minor approaches have one lane per direction and a left-turn lane on one side. There are three traffic light-controlled crosswalks and a bicycle lane along the main road, which VRUs highly frequent, due to proximity to a university. The four corners of the junction show different occlusions by roadside structures and parking cars. Thus, the road users' FOV in many common traffic scenarios is limited.
§.§ Sensor Setup
In advance of installing the multi-camera network setup, the FOV of all sensors was simulated for different positions and alignments. The sensor setup was adjusted to achieve a best-case stereo coverage of the inner junction area, including the three pedestrian crosswalks, a bicycle lane, and a sensor coverage up to 100 meters into the junction approaches. Elevated mounting positions, up to eight meters, alongside overlapping FOVs, dissolve occlusions that may appear. All sensors are mounted to existing infrastructural light and signal poles. The optical sensor network consists of six identical color CMOS cameras with an ultra-high-resolution of 4096x2160 pixels. Each camera is equipped with a 71-degree horizontal aperture angle lens, operates at a fixed 25 Hz acquisition rate, and uses a five GigE interface to submit its acquisition data. Every camera is placed within a weather-resistance case, including a temperature-controlled heating and cooling system for all-season commitment. The transmission of sensor data is done via five and ten Gigabit Ethernet uplinks using fiber and copper cables with lengths of 80 meters. The cameras are aligned to achieve multiple 45- and 90-degrees stereo setups. In total, seven stereo systems are used. The complete stereo FOV is illustrated in Fig. <ref>. Cameras 1-3 focus on the section between pedestrian crosswalks one and three. Cameras 4-6 focus on the section between pedestrian crosswalks two and three. In addition, cameras one, five, and six covers the three connected approaches corresponding to the pedestrian crosswalks, whereas cameras two, three, and four each cover two approaches by reduced attention.
In addition to the image-based sensor setup, a meteorological station is used to provide weather information as supplementary context. The station consists of two sensors placed in proximity to the junction. One sensor is responsible for measuring environmental parameters and is located on a building's roof, next to the junction. A second sensor is used to track the current visibility at the junction. Thus it is placed at a light pole next to the junction center ten meters above road level. Both sensors are shown in Fig. <ref>.
§.§ System Architecture
To handle the amount of data that six ultra-high-resolution color cameras provide, a custom-built hardware and software stack is required to maintain real-time data recording and processing in a single system. At 25 Hz, each camera transmits 1.77 Gigabits of data per second, resulting in a total of 10.62 Gigabits per second from all cameras. Fig. <ref> illustrates the complete schematic system architecture with the sensor setup described in the previous section. The data processing system consists of a 64-Core processor with 256 GB of RAM, three GPUs, four TB of PCIe 4.0 NVMe storage, and ten high-speed Ethernet ports. The GPUs are necessary for high-speed image data encoding. On the CPU side, our software stack makes heavy use of multithreading for simultaneous data handling. In addition to simple sensor data recording, the system can serve real-time demonstrations, including 3D perception and VRU motion anticipation.
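The quoted rates follow directly from the sensor parameters; a quick check (assuming 8-bit raw pixel data, which is our assumption about the camera output format):

width, height, bit_depth, fps, n_cameras = 4096, 2160, 8, 25, 6
per_camera_gbps = width * height * bit_depth * fps / 1e9    # ~1.77 Gbit/s per camera
total_gbps = per_camera_gbps * n_cameras                    # ~10.62 Gbit/s in total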
For highly precise synchronization, the cameras are triggered by GPS timestamps. Simultaneously, dedicated UTC timestamps are sent to the data processing system associated with the sensor data by CAN-Bus. A storage server with 576 TB of capacity is connected to the data processing system to store more extensive data sets. The Robot operation system (ROS) manages the complete data handling within the processing system. It enables the establishment of a flexible node-based data processing pipeline using a publisher-subscriber architecture.
§.§ Data Recording
As described in the previous subsection, the cameras provide a data stream of more than 10 Gigabits. To ensure a continuous and seamless data recording, GPU fixed encoding hardware functionalities are used. Each GPU can processes two camera streams simultaneously. The fixed-function unit encodes the camera's raw data into the lossless compressed H.264/MPEG-4 AVC format <cit.>. Using the H.264 compression algorithm, we can reduce the amount of data by a factor of eight to ten on average, depending on the current junction traffic volume. The recording node itself can subscribe to a user-defined number of cameras. GPU resources are automatically managed. Within a recording session, the camera images are uploaded into the GPU memory and passed to the hardware-accelerated encoding unit. Afterward, the resulting H.264 binary stream is stored on disk. A synchronization file is created to keep the UTC timestamps.
§.§ Meta Data
Besides the raw data recording capabilities, the system performs several post-processing tasks to create a wide range of metadata for additional research topics. We are using Detectron2 for state-of-the-art object detection, segmentation and human pose extraction <cit.>. In addition, triangulation is applied for each stereo-system to determine 3D coordinates for all detected VRUs. By merging all seven stereo-systems, we maintain a complete 3D perception of the junction's critical areas, as illustrated in Fig. <ref>. This allows us to track objects in real-world coordinates. Furthermore, the system can estimate 3D human body poses, as introduced by Open Pose <cit.>. The H.264 encoding mentioned in subsection <ref> can be used to extract optical flow, which is a powerful input feature for the task of human motion anticipation in general, as shown by Carreira and Zisserman <cit.> and for VRU motion anticipation in particular, as shown by Zernetsch <cit.>.
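The triangulation of a detected keypoint from one stereo pair reduces to a linear (DLT) solve; a self-contained sketch is shown below, where the 3x4 projection matrices P1 and P2 are assumed to come from the intrinsic/extrinsic calibration of the two cameras.

import numpy as np

def triangulate_point(P1, P2, x1, x2):
    # x1, x2: pixel coordinates (u, v) of the same detection in the two views
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]    # homogeneous -> 3D world coordinates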
§.§ Digital Twin
A digital clone of the junction is created by a combination of photogrammetry and road-level laser scans. A drone carrying a specific high-resolution camera is used to acquire the visual scan of the junction, supported by measurement vehicles to scan the complete ground area and facades. Both methods achieve a highly accurate digital model with a better than 1 cm textural resolution and a 3 cm or better structural resolution. The model can be utilized in simulation environments, increasing the junction's research capabilities in synthetic data creation and applications.
§ RESEARCH TARGETS AND CHALLENGES
Based on the data collection capability enabled by the junction, we want to envision future research directions. Unlike ground-based vehicles, our system can sense objects of interest without occlusion. For HAD vehicles, perception systems have to be safe and reliable. Due to the high-quality data and its multiple perspectives, our sensor setup can serve as a reference system for evaluating and safeguarding vehicular HAD systems. That can be done in the form of a test site or by utilizing the data to train vehicular HAD systems by transferring labels from the reference system.
Furthermore, the digital twin enables data generation and evaluation by simulation. We want to collect data, analyze critical scenarios, define evaluation metrics, develop methods for meta-data acquisition, and provide a simulation environment for the digital twin.
The meta-data provides an interface between the real world and its digital twin, e. g. human body poses and trajectories can be used to animate pedestrians and cyclists within a simulation environment. Both synthetic and real data allow us to analyze the quality requirements of synthetic data and how well the mixture of real and synthetic data works for use cases like object detection and motion anticipation. Using synthetic data alongside real data is promising, as it allows us to teach HAD systems even those scenarios that rarely occur.
For an accurate sensor system, several challenges must still be overcome. One is a continuous calibration of the cameras to compensate for drifts in position and orientation caused by temperature and mechanical vibrations. The second is maintaining suitable perception models within the presented system regarding changes in seasons, traffic patterns, and urban mobility. For example, electric bicycles change the way human car drivers interact with cyclists within a few years.
§ CONCLUSIONS
This paper presented a sensor network operating at a complex public junction as part of the AI Data Tooling project, including time-associated data acquisition and data processing. Moreover, we highlighted the research capabilities of our system and defined future research targets. The dedicated setup of ultra-high-resolution cameras enables a highly accurate perception of all road users within the inner junction area. It will be used as a reference for vehicle sensor evaluation and simulation, data creation, and analyzing AI training strategies using real- synthetic- and augmented data.
§ ACKNOWLEDGMENT
This work results from the KI Data Tooling supported by the Federal Ministry for Economic Affairs and Energy (BMWi), grant numbers, 19A20001L and 19A20001O.
|
http://arxiv.org/abs/2307.07239v1 | 20230714092242 | Probing new physics with polarized $τ$ and $Λ_c$ in quasielastic $ν_τ\!+\!n\!\to\! τ^-\!+\!Λ_c$ scattering process | [
"Ya-Ru Kong",
"Li-Fen Lai",
"Xin-Qiang Li",
"Xin-Shuai Yan",
"Ya-Dong Yang"
] | hep-ph | [
"hep-ph"
] |
[email protected]
Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE),
Central China Normal University, Wuhan, Hubei 430079, China
[email protected]
School of Physics and Electronic Information, Shangrao Normal University, Shangrao 334001, China
[email protected]
Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE),
Central China Normal University, Wuhan, Hubei 430079, China
Center for High Energy Physics, Peking University, Beijing 100871, China
[email protected] (Corresponding author)
Institute of Particle and Nuclear Physics, Henan Normal University, Xinxiang, Henan 453007, China
[email protected]
Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE),
Central China Normal University, Wuhan, Hubei 430079, China
Institute of Particle and Nuclear Physics, Henan Normal University, Xinxiang, Henan 453007, China
[email protected]
Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE),
Central China Normal University, Wuhan, Hubei 430079, China
The absence of semitauonic decays of charmed hadrons makes the decay processes mediated by the quark-level c→ d τ^+ ν_τ transition inadequate for probing a generic new physics (NP) with all kinds of Dirac structures. To fill in this gap, we consider in this paper the quasielastic neutrino scattering process ν_τ+n→τ^-+Λ_c, and propose searching for NP through the polarizations of the τ lepton and the Λ_c baryon. In the framework of a general low-energy effective Lagrangian, we perform a comprehensive analysis of the (differential) cross sections and polarization vectors of the process both within the Standard Model and in various NP scenarios, and scrutinize possible NP signals. We also explore the influence on our findings due to the uncertainties and the different parametrizations of the Λ_c → N transition form factors, and show that they have become one of the major challenges to further constrain possible NP through the quasielastic scattering process.
Probing new physics with polarized τ and Λ_c in quasielastic ν_τ+n→τ^-+Λ_c scattering process
Dong-Hui Zheng
August 12, 2023
=============================================================================================
§ INTRODUCTION
Over the past few years, several intriguing anomalies have been observed in the processes mediated by the quark-level b→ c l ν̅_l transitions, particularly in the ratios R_D^(∗) <cit.>,
R_D^(∗)≡ℬ(B→ D^(*)τ^-ν_τ)/ℬ(B→ D^(*)ℓ^-ν_ℓ),
with ℓ=e,μ. These anomalies continuously challenge the lepton flavor universality, a central feature of the Standard Model (SM) of particle physics, and arouse a surge of phenomenological studies of new physics (NP) beyond the SM in B physics (for recent reviews, see, e.g., Refs. <cit.>).
In view of the potential violation of the lepton flavor universality in B-meson decays, it is
also natural to investigate if such phenomena also emerge in the charm sector.
Among the various processes used to probe such phenomena, the ones mediated by the quark-level c→ d τ^+ ν_τ transition have attracted particular attention <cit.>. In particular, a ratio R_τ/μ, somewhat similar to R_D^(*),
can be defined as
R_τ/μ=Γ(D^+→τ^+ν_τ)/Γ(D^+→μ^+ν_μ) ,
and serve as an important avenue to test the SM in the charm sector <cit.>. Interestingly enough, the ratio R_τ/μ is constructed from the purely leptonic D-meson decays rather than from the semileptonic ones, which is in contrast to the ratios R_D^(*). The underlying reason for this is that the largest accessible phase space for semileptonic D-meson decays is
given by m_D^+-m_π^0≃ 1.735 GeV, which is smaller than the τ-lepton mass, rendering the semitauonic D-meson decays kinematically forbidden. The same conclusion also holds for the charmed-baryon decays.
The absence of semitauonic decays of charmed hadrons therefore means that the decay processes mediated by the c→ d τ^+ ν_τ transition can probe NP with only a subset of Dirac structures. For example, the purely leptonic D-meson decays are known to be only sensitive to the axial and pseudo-scalar four-fermion operators of a general low-energy effective Lagrangian (denoted by ℒ_eff as introduced in Eq. (<ref>)), making the tauonic vector, scalar, and tensor operators seemingly inaccessible in the low-energy regime <cit.>. Although these operators can be probed through the high-p_T dilepton invariant mass tails at high-energy
colliders under additional assumptions <cit.>, other new processes and observables, particularly the
low-energy ones, are still badly needed in order to pinpoint all the possible NP Dirac structures. In some cases, these low-energy processes and observables can also provide very complementary information about NP <cit.>.
In this paper, we will consider the quasielastic (QE) neutrino scattering process ν_τ+n→τ^-+Λ_c induced by the quark-level ν_τ d →τ^- c transition, and propose searching for NP through the polarizations of the τ lepton and the Λ_c baryon. We will show that such a process together with the six polarization vectors can involve all the effective operators of ℒ_eff. Based on a combined constraint on the corresponding Wilson coefficients (WCs) of these effective operators
set by the measured branching ratio of D^+→τ^+ν_τ decay <cit.>
and the analysis of the high-p_T dilepton invariant mass tails <cit.>,
we will perform a comprehensive analysis of all the observables involved both within the SM and in various NP scenarios, and scrutinize possible NP signals.
The hadronic matrix elements of the scattering process will be parametrized by the n→Λ_c transition form factors, which are in turn related to the Λ_c→ N (nucleon) form factors by complex conjugation. However, since a scattering process generally occupies a negative kinematic range (q^2<0) while a decay process happens at the positive one (q^2>0), an extrapolation of the Λ_c→ N transition form factors from positive to negative q^2 becomes necessary. This requires that the form-factor parametrization must possess analyticity in the proper q^2 range <cit.>. In this paper, we will consider three different models with three different form-factor parametrizations for the Λ_c→ N transition form factors to compute
the cross sections and polarization vectors in various NP scenarios. Our major results will be, however, based on the lattice QCD (LQCD) calculations <cit.>, since they also provide the theoretical uncertainties, which we will propagate to all the observables considered. Nonetheless, a detailed comparison of all the observables calculated with different form-factor parametrizations will be provided as well.
The paper is organized as follows. In Sec. <ref>, we begin with a
brief introduction of our theoretical framework, including the most general low-energy effective Lagrangian as well as the kinematics, the cross sections, and the various polarization vectors of the scattering process. In such a framework, we study in subsection <ref> the total cross section and the averaged polarization vectors in various NP scenarios, and then in subsection <ref> the differential cross sections and the Q^2-dependent polarization observables. In subsection <ref>, we revisit the scattering process together with the Q^2-dependent observables in the limit of small WCs (i.e., small-g_i). The subsequent two subsections contain our exploration of the influence on our findings due to the uncertainties and the different parametrizations of the Λ_c → N transition form factors. Finally, we collect our main conclusions in Sec. <ref>, and relegate further details on the form factors and explicit expressions of the various observables to the appendices.
§ THEORETICAL FRAMEWORK
§.§ Low-energy effective Lagrangian
Without introducing the right-handed neutrinos, the most general low-energy effective
Lagrangian responsible for the ν_τ d →τ^- c transition can be written as
ℒ_eff= -4G_F/√(2)V_cd [ (1+g^L_V)𝒪^L_V+
g^R_V𝒪^R_V+g^L_S𝒪^L_S
+g^R_S𝒪^R_S+g^L_T𝒪^L_T]+H.c. ,
with
𝒪^L,R_V =(c̅γ^μP_L,R d)(τ̅γ_μP_Lν_τ) ,
𝒪^L,R_S =(c̅P_L,R d)(τ̅P_Lν_τ) ,
𝒪^L_T =(c̅σ^μν P_L d)(τ̅σ_μν P_Lν_τ) ,
where P_R,L=(1±γ_5)/2 are the right- and left-handed projectors, and σ^μν= i[γ^μ,γ^ν]/2 the antisymmetric tensor. Note that the tensor operators with mixed quark and lepton chiralities vanish due to Lorentz invariance. The WCs g_i in Eq. (<ref>) parameterize possible deviations from the SM and are complex in general. Such a framework is only applicable up to an energy scale of 𝒪(m_b), with m_b the bottom-quark mass, above which new degrees of freedom would appear.
§.§ Cross section, form factors, and kinematics
The differential cross section of the QE scattering process ν_τ(k)+n(p)→τ^-(k^')+Λ_c(p^'), with p=(m_n, 0), p^'=(E_Λ_c, p^'), k=(E, k), and k^'=(E^', k^'), is given by
dσ= 1/4 p· kd^3k^'/(2π)^31/2E^'d^3p^'/(2π)^31/2E_Λ_c|ℳ|^2
× (2π)^4δ^4(p+k-p^'-k^'),
where the amplitude ℳ can be generically written as <cit.>
ℳ=4G_F/√(2)V_cd(J_H J^L
+J^α_H J_α^L
+J^αβ_H J_αβ^L),
when all the effective operators in Eq. (<ref>) are taken into account.
The lepton currents in Eq. (<ref>) are defined as
J_(αβ)^L=u̅_τ(k^', r^') Γ_(αβ)P_Lu_ν_τ(k,r),
with Γ_(αβ)=(1, γ_α,σ_αβ),
while the hadron currents as
J^(αβ)_H =⟨Λ_c(p^',s^') |c̅O^(αβ)_H d|n(p,s)⟩ ,
with
O_H =1/2(g_S+g_Pγ_5) ,
O^α_H =1/2γ^α(g_V-g_Aγ_5) ,
O^αβ_H =g^L_Tσ^αβP_L ,
where g_V,A=(1+g^L_V± g^R_V), g_S,P=(g^R_S± g^L_S),
and r and s (r^' and s^') denote the spins of initial (final) lepton and baryon, respectively. The amplitude squared |ℳ|^2 is obtained by summing up the initial- and final-state spins; more details are elaborated in Appendix <ref>.
The hadronic matrix elements ⟨Λ_c|c̅O^(αβ)_H d|n⟩ in Eq. (<ref>) are identical to the complex conjugate of ⟨ n|(c̅O^(αβ)_H d)^†|Λ_c⟩, which are further parametrized by the Λ_c → N transition form factors <cit.>. Since a scattering process generally occupies a different kinematic range (q^2<0) from that of a decay (q^2>0), theoretical analyses of the scattering process require an extrapolation of the form factors to negative q^2. Thus, the form-factor parametrizations suitable for our purpose must be analytic in the proper q^2 range.
Interestingly, there exist already several schemes that meet our selection criterion and have been utilized to parametrize the Λ_c → N form factors by various models. For instance, a dipole parametrization scheme has been employed within the MIT bag model (MBM) <cit.> and the nonrelativistic quark model (NRQM) <cit.>, and a double-pole one in the relativistic constituent quark model (RCQM) <cit.>. Although the form-factor parametrizations in each scheme do not result in pathological behaviors in the q^2<0 range, only the form factors associated with the matrix element ⟨ N|d̅γ^μP_L c|Λ_c⟩ were calculated in these models. The primary scheme we consider was initially proposed to parametrize the B→π vector form factor <cit.>, and has been recently utilized in the LQCD calculations of the Λ_c→ N transition form factors <cit.>. In contrast to other model evaluations, the LQCD calculation not only takes care of all the form factors, but also provides an error estimation. Thus, we will adopt the latest LQCD results <cit.> throughout this work. Meanwhile, given that the model calculations of the N→Λ_c form factors can significantly affect the predictions of Λ_c weak production in neutrino QE processes <cit.>, we will also analyze the QE scattering process ν_τ+n→τ^-+Λ_c in terms of the form factors calculated within the models MBM, NRQM, and RCQM in various NP scenarios; for more details about the form factors in these different models, we refer the readers to Appendix <ref>.
The kinematics of the QE scattering process is bounded by <cit.>
α-E√(λ)/m_n+2E≤
q^2 ≤α +E√(λ)/m_n+2E ,
where
α ≡ E(m_Λ_c^2- m_n^2+m^2_τ- 2m_nE)+m_n m^2_τ ,
λ ≡ m_Λ_c^4+(m_n^2+2m_nE-m^2_τ)^2-2m_Λ_c^2(m_n^2+2m_nE+m^2_τ) .
This condition indicates that the neutrino beam energy E determines the maximal and minimal values of Q^2 (Q^2=-q^2), which, in turn, implies that any constraints on Q^2_max and Q^2_min restrict the E selection. An explicit example is that a minimal requirement for E (E≳ 8.33 GeV) of the scattering process can be obtained by using the condition Q^2_max=Q^2_min; this can also be visualized in Fig. <ref> by noting the intersection point of the red and green lines that represent the E-Q^2_max and E-Q^2_min relations, respectively. Besides the kinematic constraint on Q^2_max,
we also consider the limit from our theoretical framework. As our analyses are carried out in the framework of ℒ_eff given by Eq. (<ref>), to ensure the validity of our results, we require Q^2_max to not exceed Q^2_b=16 GeV^2≈ m_b^2. Such a requirement, depicted by the blue line in Fig. <ref>, indicates an upper bound E≲ 13.41 GeV, provided that the observables one is interested in, such as the total cross section, involve Q^2_max. Otherwise, E is not bounded from above, since one can always concentrate on the lower Q^2 range, even though a high Q^2_max is available due to a high E.
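To make the kinematic constraint above concrete, a minimal Python sketch of Eq. (<ref>) is given below; the masses are approximate PDG-like inputs and the helper name q2_limits is ours, so the snippet is meant only as an illustration of the numbers quoted in this subsection.

```python
import numpy as np

# approximate masses in GeV, inserted here only for illustration
m_n, m_tau, m_Lc = 0.9396, 1.7769, 2.2865

def q2_limits(E):
    """Kinematic limits of q^2 (GeV^2) for nu_tau + n -> tau^- + Lambda_c at beam energy E."""
    alpha = E * (m_Lc**2 - m_n**2 + m_tau**2 - 2.0 * m_n * E) + m_n * m_tau**2
    lam = (m_Lc**4 + (m_n**2 + 2.0 * m_n * E - m_tau**2)**2
           - 2.0 * m_Lc**2 * (m_n**2 + 2.0 * m_n * E + m_tau**2))
    if lam < 0.0:
        return None  # below the Lambda_c tau production threshold
    q2_lo = (alpha - E * np.sqrt(lam)) / (m_n + 2.0 * E)
    q2_hi = (alpha + E * np.sqrt(lam)) / (m_n + 2.0 * E)
    return q2_lo, q2_hi  # note Q^2_max = -q2_lo and Q^2_min = -q2_hi

# threshold energy, from sqrt(s) = m_Lc + m_tau (i.e. lambda = 0)
E_thr = ((m_Lc + m_tau)**2 - m_n**2) / (2.0 * m_n)
print(f"threshold: E ~ {E_thr:.2f} GeV")

q2_lo, q2_hi = q2_limits(10.0)
print(f"E = 10 GeV: Q^2 in [{-q2_hi:.2f}, {-q2_lo:.2f}] GeV^2")
```

Scanning E with this helper reproduces the threshold E≳ 8.3 GeV and, upon requiring Q^2_max≤ 16 GeV^2, the upper bound E≲ 13.4 GeV quoted above.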
It is interesting to note that the τ-optimized ν_τ flux at the Deep Underground Neutrino Experiment (DUNE) drops below 10^8 m^-2year^-1 at E_ν_τ≳ 14 GeV <cit.>, which is close to the upper bound of E shown in Fig. <ref>. If the proposed QE scattering process were measured at the DUNE, one could then explore all the observables considered in this work within the whole, available Q^2 range, while maintaining a relatively high ν_τ beam flux.
§.§ Polarization vectors of the final lepton and baryon
The polarization four-vector 𝒫_l^μ of the
τ lepton produced in the scattering process ν_τ+n→τ^-+Λ_c can be conveniently obtained by using the density matrix formalism as <cit.>
𝒫^μ_l=Tr[ρ_l(k^')γ^μγ_5]/Tr[ρ_l(k^')] ,
where the spin density matrix ρ_l(k^') of the τ lepton is given by
ρ_l(k^')=𝒥^(αβ,α^'β^')[Λ(k^')
Γ_(αβ)P_LΛ(k)P_RΓ̅_(α^'β^')Λ(k^')].
Now a clarification of the various symbols in Eq. (<ref>) is in order. Firstly, the hadronic tensor 𝒥^(αβ,α^'β^') is given by
𝒥^(αβ,α^'β^') =
1/2∑_ss^'J_H^(αβ)J_H^(α^'β^')†
=1/2Tr[Λ(p^')
ℳ^(αβ)Λ(p)ℳ̅^(α^'β^')],
where ℳ_(αβ) denotes the Dirac γ structure of the hadronic matrix element
⟨Λ_c|c̅O^(αβ)_H d|n⟩ in Eq. (<ref>). Clearly, ℳ_(αβ) involves not only the WCs g_i but also the form factors. The prefactor 1/2 accounts for the spin average over the neutron spin. Secondly, ℳ̅^(α^'β^')=γ^0ℳ^(α^'β^')†γ^0,
Γ̅_(α^'β^')=γ^0Γ^†_(α^'β^')γ^0,
and Λ(k)=(k̸+m_k) is the spin projection operator for
a spin 1/2 fermion with momentum k and mass m_k.
The polarization four-vector 𝒫_h^μ of the produced
Λ_c baryon can be obtained in a similar way, with the spin density matrix ρ_h(p^') given by
ρ_h(p^')=ℒ_(αβ,α^'β^')[Λ(p^')
ℳ^(αβ)Λ(p)ℳ̅^(α^'β^')Λ(p^')],
where the leptonic tensor ℒ_(αβ,α^'β^') can be written as
ℒ_(αβ,α^'β^') =1/2∑_rr^'J^L_(αβ)J^L†_(α^'β^')
=1/2Tr[Λ(k^')Γ_(αβ)P_LΛ(k)P_RΓ̅_(α^'β^')] .
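The spinor traces entering 𝒥^(αβ,α^'β^') and ℒ_(αβ,α^'β^') can be evaluated with standard trace techniques; for cross-checks it is sometimes convenient to evaluate them numerically instead. The sketch below, which is only illustrative and not part of our actual computation, constructs the Dirac matrices in the Dirac representation and evaluates the vector-current piece of the leptonic tensor for an arbitrary on-shell choice of k and k'.

```python
import numpy as np

I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
g5 = np.block([[Z2, I2], [I2, Z2]])
PL, PR = (np.eye(4) - g5) / 2.0, (np.eye(4) + g5) / 2.0
metric = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(p):
    # p_mu gamma^mu with the (+,-,-,-) metric, p given with upper indices
    return sum(metric[m, m] * p[m] * gam[m] for m in range(4))

m_tau = 1.7769
k = np.array([10.0, 0.0, 0.0, 10.0])                      # massless nu_tau along z
kp = np.array([np.sqrt(m_tau**2 + 25.0), 3.0, 0.0, 4.0])  # an on-shell tau momentum

def L_vec(a, b):
    """(1/2) Tr[(k'-slash + m_tau) gamma^a P_L k-slash P_R gamma^b]."""
    return 0.5 * np.trace((slash(kp) + m_tau * np.eye(4))
                          @ gam[a] @ PL @ slash(k) @ PR @ gam[b])

print(L_vec(0, 0))   # one component of the leptonic tensor
```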
The polarization vectors 𝒫^μ_l,h of the outgoing lepton and baryon can be decomposed as
𝒫^μ_l,h=P^l,h_L (N^l,h_L)^μ+P^l,h_P (N^l,h_P)^μ+P^l,h_T(N^l,h_T)^μ,
where the two sets of four-vectors N^l,h_L, N^l,h_T, and N^l,h_P are defined, respectively, as
(N^l_L)^μ =(|k^'|/m_τ, k^'0 k^'/(m_τ|k^'|)),
(N^l_T)^μ =(0, k×k^'/|k×k^'|),
(N^l_P)^μ =(0, k^'×(k×k^')/|k^'×(k×k^')|),
and
(N^h_L)^μ =(|p^'|/m_Λ_c, p^'0 p^'/(m_Λ_c|p^'|)),
(N^h_T)^μ =(0, p^'×k/|p^'×k|),
(N^h_P)^μ =(0, p^'×(p^'×k)/|p^'×(p^'×k)|),
indicating the longitudinal (L), transverse (T), and perpendicular (P) directions of the final τ lepton and Λ_c baryon in their reaction planes accordingly. It is then fairly straightforward to obtain the components of 𝒫^μ in Eq. (<ref>) through
P^l,h_a=-(𝒫· N^l,h_a), a=L, P, T.
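In practice, the basis vectors and projections above are assembled directly from the beam and outgoing three-momenta. The following sketch (the helper names and the dummy polarization four-vector are ours) shows the construction for the τ-lepton case, using the (+,-,-,-) metric for the Minkowski product.

```python
import numpy as np

def mdot(a, b):
    # Minkowski product with metric (+,-,-,-)
    return a[0] * b[0] - np.dot(a[1:], b[1:])

def unit(v):
    return v / np.linalg.norm(v)

def tau_basis(k, kp, m_tau):
    """Longitudinal, transverse, and perpendicular basis four-vectors for the tau."""
    k3, kp3 = k[1:], kp[1:]
    NL = np.concatenate(([np.linalg.norm(kp3) / m_tau],
                         kp[0] * kp3 / (m_tau * np.linalg.norm(kp3))))
    NT = np.concatenate(([0.0], unit(np.cross(k3, kp3))))
    NP = np.concatenate(([0.0], unit(np.cross(kp3, np.cross(k3, kp3)))))
    return NL, NT, NP

# illustrative kinematics (GeV) and a dummy polarization four-vector
m_tau = 1.7769
k = np.array([10.0, 0.0, 0.0, 10.0])
kp = np.array([np.sqrt(m_tau**2 + 25.0), 3.0, 0.0, 4.0])
P4 = np.array([0.1, 0.0, 0.2, -0.3])   # placeholder for the computed P^mu

NL, NT, NP = tau_basis(k, kp, m_tau)
P_L, P_T, P_P = (-mdot(P4, N) for N in (NL, NT, NP))
print(P_L, P_T, P_P)
```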
In order to study the dependence of these polarization vectors on the neutrino energy E, one often introduces the average polarizations ⟨ P^l,h_a⟩, which are defined as <cit.>
⟨ P^l,h_a⟩=∫_Q^2_min^Q^2_maxP^l,h_a(Q^2)dσ/dQ^2dQ^2/∫_Q^2_min^Q^2_maxdσ/dQ^2dQ^2 .
To characterize the overall degree of polarization of the outgoing particles,
one can also define the overall average polarization ⟨ P^l,h⟩ as
⟨ P^l,h⟩=√(⟨ P^l,h_L⟩^2+⟨ P^l,h_P⟩^2+⟨ P^l,h_T⟩^2) .
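Numerically, the averages in Eq. (<ref>) amount to two one-dimensional quadratures over the allowed Q^2 window, as in the following sketch; the integrand functions here are placeholders standing in for the actual dσ/dQ^2 and P_a(Q^2).

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule, written out to avoid relying on a specific numpy version
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def average_polarization(P_of_Q2, dsigma_dQ2, Q2_min, Q2_max, n=400):
    """<P_a> = int P_a (dsigma/dQ2) dQ2 / int (dsigma/dQ2) dQ2 over [Q2_min, Q2_max]."""
    Q2 = np.linspace(Q2_min, Q2_max, n)
    w = dsigma_dQ2(Q2)
    return trapezoid(P_of_Q2(Q2) * w, Q2) / trapezoid(w, Q2)

# placeholder shapes, used only to exercise the quadrature
dsig = lambda Q2: np.exp(-0.5 * Q2)           # mock differential cross section
P_P = lambda Q2: np.sin(np.pi * Q2 / 9.0)     # mock perpendicular polarization

print(average_polarization(P_P, dsig, 1.5, 9.0))
```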
§.§ Constraints on the WCs of ℒ_eff
Here we discuss briefly the most relevant and stringent constraints on the WCs g_i from the charmed-hadron weak decays and the high-p_T dilepton invariant mass tails.
Given that the semitauonic decays of charmed hadrons are kinematically forbidden, the D-meson tauonic decays become the only decay processes that can be used to constrain the WCs g_i in Eq. (<ref>). Here we
consider the D^+→τ^+ν_τ decay with its branching ratio given by <cit.>
ℬ(D^+→τ^+ν_τ) =G^2_F |V_cd|^2 f^2_D^+ m_D^+m^2_τ/8π(1-m^2_τ/m^2_D^+)^2
×|g_A+g_Pm^2_D^+/m_τ(m_c+m_d)|^2 τ_D^+ ,
where g_A=(1+g^L_V-g^R_V) and g_P=(g^R_S-g^L_S).
With the inputs listed in Table <ref>, |V_cd|=0.22438± 0.00044 from the global fit <cit.>,
and f_D^+=212.0± 0.7 MeV from an average of the LQCD simulations <cit.>, we can obtain the parameter space of the WCs g_i allowed by the measured branching fraction ℬ(D^+→τ^+ν_τ)
=(1.20±0.24_stat±0.12_syst)× 10^-3 <cit.>; similar works have also been conducted in Refs. <cit.>. At the same time, much more severe constraints on these WCs can be set through the analysis of the dilepton invariant mass tails in
pp→τν_τ processes at high p_T <cit.>.
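For orientation, Eq. (<ref>) can be evaluated directly. The sketch below uses approximate numerical inputs of our own choosing in place of Table <ref> (G_F, |V_cd|, f_D^+, the masses, and the D^+ lifetime) and yields a SM branching fraction of about 1.1× 10^-3, compatible with the quoted measurement within its uncertainty.

```python
import numpy as np

# illustrative inputs in GeV units; lifetime converted with hbar = 6.582e-25 GeV s
GF, Vcd, fD = 1.1664e-5, 0.22438, 0.2120
mD, mtau, mc, md = 1.8697, 1.7769, 1.27, 0.0047
tauD = 1.033e-12 / 6.582e-25       # D+ lifetime in GeV^-1

def BR_D_to_taunu(gA=1.0, gP=0.0):
    """Branching ratio of D+ -> tau+ nu_tau for given (possibly complex) g_A, g_P."""
    pref = GF**2 * Vcd**2 * fD**2 * mD * mtau**2 / (8.0 * np.pi)
    phase = (1.0 - mtau**2 / mD**2)**2
    amp = abs(gA + gP * mD**2 / (mtau * (mc + md)))**2
    return pref * phase * amp * tauD

print(f"SM: {BR_D_to_taunu():.2e}")
print(f"with g_P = 0.05: {BR_D_to_taunu(gP=0.05):.2e}")
```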
We combine in Fig. <ref> the aforementioned constraints at 1σ level. It can be seen that the most stringent constraints on g^L_S, g^R_S, and g^L_T are set by the high-p_T dilepton invariant mass tails, whereas the bound on g^R_V is entirely dominated by the measured branching fraction of D^+→τ^+ν_τ decay. Meanwhile, although the boundary of the real part of g^L_V is set by the high-p_T dilepton invariant mass tails, the imaginary part is bounded by the D^+→τ^+ν_τ decay, as indicated by the overlapped region in color.
It should be pointed out that all the constraints denoted by the colored regions in Fig. <ref> are obtained by setting the rest of WCs to zero. In order to fully constrain the NP operators in Eq. (<ref>), more processes and observables are clearly needed. To this end, our proposed QE scattering process together with the polarization vectors, as will be shown in the next section, is exactly what one is looking for.
§ NUMERICAL RESULTS AND DISCUSSIONS
§.§ Total cross section and average polarizations
We start by studying the dependence of the total cross section σ^', with σ^'=8π m^2_nσ/(G^2_F|V_cd|^2), and the average polarizations ⟨ P^l,h_a⟩ on the neutrino energy E. To this end, by considering the range E∈ [8.33,13] GeV and varying randomly the WCs g_i within the overlapped regions in color shown in Fig. <ref>, we plot in Fig. <ref> the total cross section σ of the scattering process ν_τ+n→τ^-+Λ_c as a function of E, both within the SM and in various NP scenarios. A few interesting features already emerge. Firstly, a higher beam energy clearly favors a larger total cross section. Secondly, the cross section can be significantly affected by the allowed parameter space of g^R_V and g^L_V shown in Fig. <ref>, especially by the former. This in turn indicates a larger opportunity for improving the limits on g^L,R_V through the proposed QE scattering process. On the other hand, for g^L_S, g^R_S, and g^L_T, the stringent constraints from the high-p_T dilepton invariant mass tails do not leave much room for possible deviations from the SM prediction. Thus, to further improve the constraints on these g_i, a demanding experimental setup for the scattering process is certainly necessary. Finally, although the allowed parameter spaces for g^L_S and g^R_S are identical to each other (see Eq. (<ref>) and Fig. <ref>), their imprints on the total cross section are slightly different, especially
in the high-E range, as can be vaguely seen in Fig. <ref>.
Such a small difference in fact results from the different interference between 𝒪^L_V and 𝒪^L,R_S; more details could be found in Appendix <ref>.
In Fig. <ref>, we show the average polarizations ⟨ P^l_L⟩, ⟨ P^l_P⟩,
⟨ P^l_T⟩, and ⟨ P^l⟩ of the τ lepton as a function of the neutrino beam energy E, and observe some interesting features. Let us start with the SM case. As depicted by the red curves in Fig. <ref>, both the absolute value of ⟨ P^l_L⟩ and ⟨ P^l⟩ increase along with the increase of E, which is not surprising, since the τ lepton produced through the scattering process ν_τ+n→τ^-+Λ_c is left-handed in the SM. On the other hand, ⟨ P^l_P⟩ reaches its peak around E=10 GeV, while ⟨ P^l_T⟩=0 irrespective of E. Note that a nonzero ⟨ P^l_P⟩ at the low-E range makes it a very interesting observable and in turn makes our proposed low-energy QE neutrino scattering process a valuable avenue to probe possible NP effects; clearly ⟨ P^l_T⟩=0 in the SM qualifies itself as a null test observable. Measuring a tiny but nonzero ⟨ P^l_T⟩ induced by NP effects could be, however, challenging, as indicated by the plots in the third column of Fig. <ref>.
We now move on to the NP scenarios. From the four figures on the top panel in Fig. <ref>, we observe that contributions to the average polarization ⟨ P^l_a⟩ from the SM and the WC g^L_V are indistinguishable, because they share the same effective operator 𝒪_V^L (see Eq. (<ref>)). For the WC g^R_V, on the other hand, large deviations of ⟨ P^l_L,P⟩ from their SM predictions are highly possible due to the sizable allowed parameter space of g^R_V; interestingly enough, ⟨ P^l_T⟩ still remains zero in this case. Similar to the case of total cross section, possible deviations of all ⟨ P^l_a⟩ from their SM predictions are relatively small for the WCs g^L_S, g^R_S, and g^L_T due to the stringent constraints on them from the current data, as shown in Fig. <ref>.
Interesting phenomena also emerge in the NP scenarios. First of all, there exist small differences between ⟨ P^l_a⟩ associated with the WCs g^L_S and g^R_S due to their different operator structures. One can see that the overall blue bands from g^L_S are slightly broader than those from g^R_S in the ⟨ P^l_a⟩-E planes. Secondly, the fuzzy blue bands in the ⟨ P^l_L⟩-E plane from g^L,R_S imply that a relatively low E would be more favorable for further constraining these two WCs, whereas a relatively high E would be more advantageous for further limiting g^L_T through ⟨ P^l_L⟩. Interestingly, the situation is totally opposite in probing
g^L_S, g^R_S, and g^L_T through ⟨ P^l_T⟩. Finally, only a relatively high E is favored for probing g^L_S, g^R_S, and g^L_T through ⟨ P^l_P⟩.
We also show in Fig. <ref> the average polarizations ⟨ P^h_L⟩, ⟨ P^h_P⟩,
⟨ P^h_T⟩, and ⟨ P^h⟩ of the Λ_c baryon as a function of E. Contrary to the τ-lepton case, the predominant polarization mode of the Λ_c baryon produced through the QE scattering
process is perpendicular in the SM. Although ⟨ P^h_L⟩ increases along with the increase of E, its overall polarization degree is only of 𝒪(10^-2). Intriguingly, ⟨ P^h_T⟩ is always zero irrespective of E, being the same as for ⟨ P^l_T⟩ in the τ-lepton case.
For the NP scenarios in this case, we observe some similar features too. Firstly, the average polarizations ⟨ P^h_a⟩ induced by g^L_V are also indistinguishable from the SM case, as shown by the first four plots on the top panel in Fig. <ref>, for the same reason as in the τ-lepton case. Secondly, a large opportunity clearly exists for improving the limit on g^R_V through the measurements of these polarization vectors of the Λ_c baryon. It is also interesting to note that, contrary to ⟨ P^l_T⟩, ⟨ P^h_T⟩ would be nonzero in the presence of the very same NP scenario. Finally, all ⟨ P^h_a⟩ induced by g^L_S, g^R_S, and g^L_T are small due to the stringent constraints on these WCs. However, given the small value of ⟨ P^h_L⟩ predicted in the SM, possible deviations induced by these NP effects, especially by g^L_S, could still reach more than 100% in the low-E range.
§.§ Differential cross section and Q^2-dependent polarizations
Taking into account the interesting behavior of ⟨ P^l_P⟩ shown in Fig. <ref> and the neutrino beam flux at the DUNE <cit.>, we will set E=10 GeV as our benchmark beam energy and explore how the differential cross section
and the polarizations P^l,h_a vary with respect to Q^2. To this end, by letting the WCs g_i vary randomly within the overlapped regions in color shown in Fig. <ref>, we plot in Fig. <ref> the resulting differential cross sections and polarizations P^l_a as a function of Q^2 in various NP scenarios, together with the SM predictions. Let us scrutinize the SM case first. As indicated by the red curves in Fig. <ref>, the differential cross section of the scattering process clearly prefers the low-Q^2 range in the SM. A similar conclusion also holds for the polarization P^l_L, even though it experiences an interesting crossover at Q^2≃ 8 GeV^2. Intriguingly, P^l_P peaks roughly at Q^2≃ 8 GeV^2, while unsurprisingly P^l_T remains zero irrespective of Q^2.
We now move on to discuss the NP scenarios shown in Fig. <ref>, from which an overall pattern similar to that found in the previous subsection is observed. Firstly, large deviations from the SM prediction for the differential cross section are only possible for g^L,R_V, while large deviations for the polarizations P^l_L,P can be expected only for g^R_V. Secondly, due to the stringent experimental constraints on g^L_S, g^R_S, and g^L_T, deviations from the SM predictions for the differential cross section and the polarizations P^l_a in these three NP scenarios become much smaller.
To have a clearer view of these deviations from the corresponding SM predictions, let us define δ[dσ^']/dQ^2=dσ^'/dQ^2|_NP -dσ^'/dQ^2|_SM and δ P^l,h_a=P^l,h_a|_NP-P^l,h_a|_SM, and
plot them explicitly in Fig. <ref>.
It can be seen that the deviations δ P^l_a remain zero for the g^L_V scenario, making the (differential) cross section the only avenue to probe g^L_V through the scattering process. For g^R_V, a relatively high Q^2 is certainly preferred to observe the potentially maximum deviations of δ P^l_L,P but at the expense of observing the maximum deviation of the differential cross section, whereas δ P^l_T=0 in the whole Q^2 range. In the case of g^L_S and g^R_S, the overall deviation patterns are similar for the three polarizations P^l_a, but opposite for the differential cross section. Nonetheless, a relatively high Q^2, e.g., Q^2≃ 7.5 GeV^2, can be of benefit for probing g^L_S and g^R_S through these observables. In the presence of g^L_T, on the other hand, the situation is a little complicated. From the four plots on the bottom panel, we observe that the low-Q^2 range clearly favors the deviations of the differential cross section and the polarization P^l_L, whereas the slightly high-Q^2 range favors the deviations of δ P^l_P,T. Overall, the maximum δ P^l_L and δ P^l_P could reach 1 and 0.45 in the g^R_V scenario, respectively. However, the maximum δ P^l_L for the g^L_S, g^R_S, and g^L_T scenarios could only amount to 0.02 at most, and the situation is even more challenging for δ P^l_P,T.
Similar analyses can be applied to the polarizations P^h_L, P^h_P, and
P^h_T of the Λ_c baryon. In Fig. <ref>, we show the variations of these observables with respect to Q^2 both within the SM and in the various NP scenarios. It is found that P^h_a exhibit somewhat similar characteristics to those of P^l_a shown in Fig. <ref>. For instance, both P^l_T and P^h_T remain zero irrespective of the kinematics Q^2. In addition, both P^l_L and P^h_L experience a crossover and peak at the low-Q^2 range. Finally, both P^l_P and P^h_P drop down to zero at Q^2_min and Q^2_max. Nevertheless, distinct differences between these two sets of observables are also observed. An obvious example is that P^l_P and P^h_P peak at different Q^2, Q^2≃ 7 GeV^2 for the former whereas Q^2≃ 4 GeV^2 for the latter. In addition, the crossover positions of P^l_L and P^h_L lie at different Q^2, Q^2≃ 8 GeV^2 for the former whereas Q^2≃ 4 GeV^2 for the latter.
With regard to δ P^h_a, the deviations from the corresponding SM predictions for the polarizations P^h_a, our results are shown in Fig. <ref>. Compared to the deviations δ P^l_a shown in Fig. <ref>, δ P^h_a are characterized by some new features. Firstly, for the g^R_V scenario, in contrast to δ P^l_L and δ P^l_P, δ P^h_L and δ P^h_P prefer a relatively low Q^2, which, intriguingly, is also favored by the deviation of the differential cross section shown in Fig. <ref>. In addition, contrary to δ P^l_T, δ P^h_T is not equal to zero in this scenario. Secondly, the overall sizes of δ P^h_a in the presence of g^L_S, g^R_S, and
g^L_T are smaller than those of δ P^l_a, especially of δ P^l_P,T. Finally, for the g^R_S and g^L_T scenarios, the minima of
δ P^h_L both arise in the medium-Q^2 range, whereas the minima of δ P^l_L arise at Q^2_min and Q^2_max, respectively.
§.§ Polarization observables in the small-g_i limit
Thus far, we have let the WCs g_i vary randomly within the overlapped regions in color shown in Fig. <ref>, which are set by the measured branching fraction of D^+→τ^+ν_τ decay <cit.> and the high-p_T dilepon invariant mass tails in pp→τν_τ processes <cit.>. However, the stringent experimental constraints on g^L_S, g^R_S, and g^L_T, together with the overall small deviations δ P^l,h_a shown in Figs. <ref> and <ref>,
strongly motivate us to focus on the small-g_i regions. In this case, we can expand the polarizations P^l,h_a in terms of g_i and keep only the terms up to 𝒪(g_i). As will be shown in the following, examining P^l,h_a in such a limit can shed light on the interesting behaviors of the deviations δ P^l,h_a shown in Figs. <ref> and <ref>.
Given that only a single nonzero g_i is activated at a time, the two traces in the polarization four-vector
𝒫^μ (see, e.g., Eq. (<ref>)) can be written, respectively, as
Tr[ρ] = D_SM+(g_i)^* D_VL,i+(g_i)D^*_VL,i+𝒪(|g_i|^2)
=D_SM+2Re[g^*_iD_VL,i]+𝒪(|g_i|^2) ,
and
Tr[ργ^μγ^5]=𝒩^μ_SM+2Re[g^*_i𝒩^μ_VL,i ]+𝒪(|g_i|^2) ,
where D_SM and 𝒩^μ_SM stand for the SM contributions to the two traces Tr[ρ] and Tr[ργ^μγ^5], respectively, while D_VL,i and 𝒩^μ_VL,i denote the contributions to these two traces from the interference between the SM and the NP operator associated with g_i; explicit expressions of the various terms in the two traces can be found in Appendices <ref> and <ref>. Clearly, the pure NP contributions are of 𝒪(|g_i|^2) and can therefore be neglected in the small-g_i regions.
The polarization four-vector can now be approximated as
𝒫^μ ≃𝒩^μ_SM+2Re[g^*_i𝒩^μ_VL,i ]/D_SM+2Re[g^*_iD_VL,i]
≃𝒩^μ_SM/D_SM+2Re[g^*_i𝒩^μ_VL,i ]/D_SM-2Re[g^*_iD_VL,i]/D_SM𝒩^μ_SM/D_SM
=𝒫^μ_SM+2Re[g^*_i𝒩^μ_VL,i ]/D_SM-2Re[g^*_iD_VL,i]/D_SM𝒫^μ_SM
=𝒫^μ_SM+𝒫^μ_Int ,
where we have ignored all the contributions from the higher-order terms of g_i, and introduced the new polarization four-vector 𝒫^μ_Int, with
𝒫^μ_Int≡2Re[g^*_i𝒩^μ_VL,i ]/D_SM-2Re[g^*_iD_VL,i]/D_SM𝒫^μ_SM ,
which is induced by the interference between the SM and the NP operator associated with g_i. Projecting 𝒫^μ_SM and 𝒫^μ_Int onto the orthogonal bases (see Eqs. (<ref>) and (<ref>)), we eventually obtain
P^l,h_L,P =(P_SM)^l,h_L,P+Re[g_i](P_Int)^l,h_L,P ,
P^l,h_T =Im[g_i](P_Int)^l,h_T ,
where (P_SM)^l,h_T=0 has been used.
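As a quick numerical illustration of Eqs. (<ref>) and (<ref>), one can verify that the linearized polarization agrees with the exact ratio of traces once |g_i| is small; the numbers below are arbitrary placeholders for D_SM, D_VL,i, and one component of 𝒩_SM and 𝒩_VL,i at a fixed phase-space point.

```python
import numpy as np

# placeholder values of the traces at one (E, Q^2) point (arbitrary units)
D_SM, D_int = 5.0, 1.2          # Tr[rho]: SM piece and interference piece
N_SM, N_int = -2.0, 0.6         # a fixed component of Tr[rho gamma^mu gamma_5]

def P_exact(g):
    """Exact ratio with the pure-NP |g|^2 terms dropped, cf. the expansion above."""
    return (N_SM + 2 * np.real(np.conj(g) * N_int)) / (D_SM + 2 * np.real(np.conj(g) * D_int))

def P_linear(g):
    """First-order expression: P_SM + 2Re[g* N_int]/D_SM - 2Re[g* D_int]/D_SM * P_SM."""
    P_SM = N_SM / D_SM
    return (P_SM + 2 * np.real(np.conj(g) * N_int) / D_SM
            - 2 * np.real(np.conj(g) * D_int) / D_SM * P_SM)

for g in (0.01, 0.05 + 0.02j):
    print(g, P_exact(g), P_linear(g))
```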
From the definition of 𝒫^μ_Int in Eq. (<ref>), one can already see that 𝒩^μ_VL,VL=𝒩^μ_SM and
D_VL,VL=D_SM for the g_V^L scenario. Since both 𝒩^μ_SM and D_SM are real, 𝒫^μ_Int vanishes, which in turn
leads to P_Int=0. In other words, it is impossible to distinguish the g_V^L scenario from the SM through the polarization vectors, which has already been observed repetitively in the previous subsections.
We then show in Fig. <ref> the variations of (P_SM)^l,h_a and (P_Int)^l,h_a with respect to Q^2 in various NP scenarios, where, for simplicity, we have labeled them by P^l,h_a uniformly. From the P^l_L-Q^2 plot (the left-top one in Fig. <ref>), one can see that (P_Int)^l_L behave in a very similar way for the g^L_S and g^R_S scenarios, which are denoted by the blue and green dashed curves, respectively. Together with another straightforward observation that the magnitude of (P_Int)^l_L at any Q^2 in the g^L_S case is always larger than in the g^R_S case, it is expected that the maximum deviation δ P^l_L for the g^L_S and g^R_S scenarios must have a similar shape but with the former broader than the latter. Such a behavior has already been observed explicitly in Fig. <ref>. From the dot-dashed red curve, one can see that, below Q^2=5 GeV^2, (P_Int)^l_L for the g^R_V scenario behaves just like that for g^L_S, indicating a similar shape of δ P^l_L within this Q^2 range. However, the shape of δ P^l_L will become narrower as Q^2 increases, even narrower than that for the g^R_S scenario at the high-Q^2 range. Such an expectation is, unfortunately, buried by the vast shadow of the δ P^l_L-Q^2 plot shown in Fig. <ref>, due to the large parameter space of g^R_V. Compared with (P_SM)^l_L denoted by the black curve, the absolute value of (P_Int)^l_L for the g^L_T scenario (see the long-dashed purple curve) is always larger. However, their difference decreases as Q^2 increases, justifying that a low Q^2 is favored to observe a maximum deviation of δ P^l_L in the g^L_T scenario, as shown in Fig. <ref>.
We now turn to discuss the various curves in the P^l_P-Q^2 plot (the middle-top one in Fig. <ref>). It can be seen that the blue and green dashed curves behave in a similar way—both peak roughly at
Q^2=7.5 GeV^2—but with different magnitudes. Although the dashed purple curve also peaks at a similar Q^2, it behaves less dramatically within the range Q^2∈ [3,7] GeV^2. Nonetheless, all these three curves drop to zero at Q^2_min and Q^2_max. Taking all these points into account, one can understand the interesting features of the deviation δ P^l_P observed in the g^L_S, g^R_S, and g^L_T scenarios, as shown in Fig. <ref>. For the g^R_V scenario, as indicated by the dot-dashed red curve, the deviation δ P^l_P shall behave similarly to that for the g^R_S scenario but with a more flattened curvature at the high-Q^2 range. This is different from the behaviors of the deviation δ P^l_L in the same NP scenarios, as can be clearly seen from Fig. <ref>.
Let us move on to the P^h_L-Q^2 plot (the left-bottom one in Fig. <ref>). A couple of interesting observations can already be made. Firstly, all the curves except the dashed blue one experience
a crossover, indicating that the deviations δ P^h_L become zero at a certain Q^2 for the g^R_V, g^R_S, and g^L_T scenarios, while in the g^L_S case δ P^h_L increases along with the increase of Q^2. Secondly, both the green and purple dashed curves cross the P^h_L=0 line at Q^2≃ 5 GeV^2, suggesting a similar behavior of δ P^h_L for the g^R_S and g^L_T scenarios. However, since (P_Int)^h_L is small near Q^2_min but relatively large near
Q^2_max, the deviation δ P^h_L must be narrower near Q^2_min than
near Q^2_max for the g^R_S scenario. This is contrary to the pattern of δ P^h_L observed for the g^L_T scenario, as can be clearly seen from Fig. <ref>. Finally, the similar behavior between the green and blue dashed curves indicates that the deviation δ P^h_L shall behave similarly for the g^R_V and g^R_S scenarios, provided they are both taken in the small-g_i limit.
With regard to the P^h_P-Q^2 plot (the middle-bottom one in Fig. <ref>), one can draw some observations similar to those from the P^l_P-Q^2 plot. For instance, the similar behavior between the green and blue dashed curves predicts a similar shape of δ P^h_P for the g^L_S and g^R_S scenarios. The small difference between the resulting values of (P_Int)^h_P, however, suggests that the deviation δ P^h_P for the former must be broader than for the latter, as shown in Fig. <ref>. Intriguingly, the blue and green dashed curves in the P^h_P-Q^2 and P^l_P-Q^2 plots indicate that both δ P^h_P and δ P^l_P in these two scenarios shall peak at Q^2≃ 7 GeV^2. Another example is that the red and purple dashed curves reveal that the maximal δ P^h_P occurs at a low Q^2, Q^2≃ 3.4 GeV^2, contrary to its counterpart δ P^l_P, for the g^R_V and g^L_T scenarios.
We conclude this subsection by giving a brief discussion of the P^l_T-Q^2 and P^h_T-Q^2 plots in Fig. <ref>. Since the SM contribution to P^l,h_T denoted by the dark line is zero, the shapes of other curves reveal not only the behaviors of the polarizations P^l,h_T but also the deviations δ P^l,h_T directly. It can be seen that the blue, green, and purple dashed curves in the P^l_T-Q^2 plot behave similarly in general with only some small differences, indicating a similar pattern of the deviation δ P^l_T for the g^L_S, g^R_S, and g^L_T scenarios. The blue, green, and purple dashed curves in the P^h_T-Q^2 plot, on the other hand, behave quite differently in both their curvatures and peak positions, justifying the distinct shapes of δ P^h_T for the g^L_S, g^R_S, and g^L_T scenarios, as shown in Fig. <ref>. Finally, it is interesting to note that the deviation δ P^h_T for the g^R_V scenario in Fig. <ref> behaves just like the dashed red curve in Fig. <ref>, even though the latter works only in the small-g_i
limit.
§.§ Observables with uncertainties due to the form factors
As mentioned in subsection <ref>, one of the reasons that we adopt the LQCD calculations of the Λ_c→ N transition form factors
is that they provide us with an error estimation. Yet our calculation has only involved the central values of these inputs so far. In this subsection, we study how our predictions of the observables are affected by the uncertainties of these form factors. As a simple illustration, we focus on the NP scenarios in the presence of the WCs g^R_V and g^R_S, and consider only the Q^2-dependent observables, i.e., the differential cross section and the polarizations P^l,h_a. To this end, we firstly scan randomly g^R_V and g^R_S within the available parameter space shown in Fig. <ref> and propagate the uncertainties of the form factors to each observable for all the allowed data points of g^R_V and g^R_S. We then plot in Figs. <ref> and <ref> the central, upper, and lower values of each observable in blue, green, and red, respectively, instead of presenting them in error bars. In this way, the combined regions of the green and red ones as well as the regions between them can be naively understood as the overall uncertainty of the observable considered.
From Figs. <ref> and <ref>, we see that none of the observables in the g^R_V scenario suffers too much from the uncertainties of the form factors in general, and the dominant factor determining the overall shape of these observables is still the vast available parameter space of g^R_V. The situation in the g^R_S scenario is, on the other hand, quite the opposite. The large blank spaces between the blue and green (red) regions represent the impacts on the observables from the uncertainties of the form factors, which clearly dwarf the effect of the WC g^R_S due to the stringent experimental constraint on it. The only exceptions are P^l_T and P^h_T in both NP scenarios, on which the impacts from the uncertainties of the form factors and the available parameter space of the WCs seem comparable. These observations can be easily applied to other NP scenarios too.
Besides the above comparisons, it may also be interesting to explore how the uncertainties of the observables propagate along the kinematics Q^2. To this end, let us focus on the observables in the g^R_S scenario as an illustration. Firstly, the green and red regions on the bottom panel of Figs. <ref> and <ref> clearly indicate that the overall uncertainties of the differential cross section and the polarizations P^l,h_L increase along with the increase of Q^2. Secondly, the uncertainties of P^l,h_P and P^l,h_T shrink at the Q^2_min and Q^2_max regions, mainly due to the characteristic behaviors of P^l,h_P and P^l,h_T, but the general pattern is still consistent with what we have just observed. Such a pattern is closely related to the behaviors of the form factors with respect to Q^2. As can be seen from Fig. <ref>, the uncertainties of all the form factors follow the same pattern as the observables do: the total uncertainties in particular increase dramatically along with the increase of Q^2. Because of the relatively milder behaviors of the statistical uncertainties, we take them instead of the total uncertainties into account in Figs. <ref> and <ref>, as well as in the rest of this work.
In short, although the LQCD calculation <cit.> of the Λ_c→ N transition form factors comes with an error estimation—one of its advantages over the model evaluations presented in Refs. <cit.>, the persistently increasing uncertainties along with the increase of Q^2 have become one of the major obstacles to further probe or constrain the NP scenarios through the QE neutrino scattering process. This calls for either better controls of the uncertainties of the form factors in future LQCD calculations or new model estimations of these form factors with a good error estimation within the relevant kinematic ranges.
§.§ Observables with different form-factor parametrizations
The parametrization scheme adopted in Ref. <cit.> is not the only way to describe the q^2-dependence of the Λ_c → N transition form factors; nor is the LQCD the only method for evaluating the form factors.
As discussed in subsection <ref> and detailed in Appendix <ref>, there exist already three different parametrization schemes, which can be extended to the q^2<0 range, and have been employed by the MBM, NRQM, and RCQM models, as well as the LQCD calculations. Moreover, these parametrization schemes are validated against the experimental measurements of the Λ_c semileptonic
decays reported by the BESIII Collaboration <cit.>.[Note that the BESIII Collaboration has improved the measurement of the absolute branching fraction of Λ^+_c→Λ e^+ν_e decay <cit.>.]
However, direct calculations of the QE weak production of the Λ_c baryon through the ν_μ scattering
off nuclei reveal that large deviations can arise by using the different parametrization schemes of the form factors <cit.>. Thus, we examine in this subsection if similar deviations also emerge for the observables considered here in various NP scenarios.
In Fig. <ref>, we evaluate the differential cross section and the polarizations P^l,h_a with the form factors calculated in LQCD (blue), NRQM (green), and RCQM (red), respectively.[We do not present the results with the form factors calculated in MBM, because both MBM and NRQM employ the dipole form for the q^2 dependence of the form factors <cit.> (see Appendix <ref> for details).] To be thorough, we also take account of the 1σ-level statistical uncertainties of the form factors in the LQCD case. As an illustration, we focus only on the NP scenario in the presence of g^R_S. From Fig. <ref>, it can be seen that there exists a large disparity between the red (green) and blue regions, indicating that the resulting deviations of dσ, P^l,h_L, and P^l,h_P due to the different parametrization schemes of the form factors dwarf those from the 1σ-level statistical uncertainties of the form factors in LQCD. For the polarization P^l_T, on the other hand, the overall blue region prevails over the others, indicating a totally opposite situation. Finally, comparing the red region with the overall blue one in the P^h_T-Q^2 plot, one can see that the deviation of P^h_T in RCQM from the LQCD prediction can be comparable to that from the 1σ-level statistical uncertainties of the form factors in LQCD.
Interestingly, one can already extract the SM results from Fig. <ref> by shrinking each colored region into a curve, corresponding to the special case with g^R_S=0. The SM predictions of (P_Int)^l_a and (P_Int)^h_a are also demonstrated explicitly in the first columns of Figs. <ref> and <ref> respectively. It can be seen that among the three cases, the LQCD predicts the largest differential cross section of the QE scattering process in the SM, while the NRQM yields the smallest. Intriguingly, such a pattern is also consistent with that observed in the QE weak production of the Λ_c baryon through the process ν_μ+^16O→μ^- +Λ_c+X <cit.>. However, the situation becomes more complicated for other observables. For instance, the crossover behavior of P^l,h_L makes the P^l,h_L=0 line a watershed: above it the RCQM (LQCD) predicts the largest P^l_L (P^h_L), while below it the LQCD (RCQM) predicts the largest P^l_L (P^h_L). In addition, the RCQM always seems to produce a larger P^l,h_P than the NRQM does.
The small width of each fuzzy colored region in Fig. <ref> results from the variation of the WC g^R_S within the allowed parameter space shown in Fig. <ref>. To have a clearer view of this effect, we work in the small-g_i limit and plot in Figs. <ref> and <ref> the variations of (P_SM)^l,h_a
and (P_Int)^l,h_a with respect to Q^2 with the form factors calculated in LQCD (blue), NRQM (green), and RCQM (red), both within the SM and in the g^R_V, g^L_S, and g^R_S scenarios. Note that the g^L_T scenario is not considered here, because the relevant tensor form factors have not been calculated in NRQM and RCQM. Once again, the 1σ-level statistical uncertainties of the form factors have been taken into account in the LQCD case (see the yellow regions shown in Figs. <ref> and <ref>).
Since the resulting P^l,h_a due to the mixing 𝒪^L_V-𝒪^L_V correspond exactly to the SM case, which has been discussed above, let us now move on to the next three mixing scenarios. For the mixing 𝒪^L_V-𝒪^R_V, it can be seen that, contrary to (P_Int)^h_T, the resulting (P_Int)^l,h_L,P from NRQM and RCQM are opposite in sign. At the same time, the absolute values of all the (P_Int)^l,h_a in these two models are compatible with the LQCD results at 1σ level. Interestingly, these observations can be applied to the mixing 𝒪^L_V-𝒪^R_S as well, except that the NRQM forecasts the largest absolute value of (P_Int)^l,h_T. For the mixing 𝒪^L_V-𝒪^L_S, on the other hand,
one can see that the RCQM always predicts the smallest absolute values of all the (P_Int)^l,h_a, while the NRQM results are in general compatible with that of the LQCD at 1σ level.
§ CONCLUSION
The absence of semitauonic decays of charmed hadrons makes the decay processes mediated by the quark-level c→ d τ^+ ν_τ transition inadequate for probing a generic NP with all kinds of Dirac structures. To fill in this gap, we have considered in this paper the QE neutrino scattering process ν_τ+n→τ^-+Λ_c, and proposed searching for NP through the polarizations of the τ lepton and the Λ_c baryon. Working in the framework of a general low-energy effective Lagrangian given by Eq. (<ref>) and using the combined constraints from the measured branching fraction of the purely leptonic D^+→τ^+ν_τ decay
and the analysis of the high-p_T dilepton invariant mass tails in pp→τν_τ processes, we have performed a comprehensive analysis of the (differential) cross sections and polarization vectors of the ν_τ+n→τ^-+Λ_c process both within the SM and in various NP scenarios.
For the SM, we have shown that the dominant polarization mode of the outgoing τ lepton is longitudinal and that of the Λ_c baryon is perpendicular, whereas ⟨ P_T⟩ of both the τ and Λ_c remain zero in such a QE scattering process. In addition, the interesting behavior of ⟨ P^l_P⟩ as a function of the neutrino beam energy E qualifies itself as a potentially valuable observable to search for possible NP effects. We have also explored the variations of the polarization vectors with respect to the kinematics Q^2, and observed that both P^l_L and P^h_L experience a crossover, and the peaks of P^l_P and P^h_P are both reached within the available kinematic range, though happening at different Q^2 points.
For the various NP scenarios, the overall observation we have made is that, due to the stringent experimental constraints on the WCs g^L_S, g_S^R, and g^L_T, there exist only small (of 𝒪(10^-2)) deviations between the SM and the g^L_S, g_S^R, and g^L_T scenarios for the polarizations P^l,h_a. By contrast, the larger available parameter space of the WC g_V^R makes all the deviations δ P^l,h_a much bigger, except for δ P^l_T which, interestingly, remains zero. As for the g^L_V scenario, since it shares the same effective operator
𝒪^L_V with the SM, all the deviations δ P^l,h_a always remain zero, making the (differential) cross section the only avenue to probe g^L_V through the QE scattering process.
We have also explored the impacts of the uncertainties of the Λ_c→ N transition form factors, and shown that they have become one of the major challenges to further probe or constrain the NP scenarios through the QE neutrino scattering process. Furthermore, we have considered three different form-factor parametrization schemes employed by NRQM,
RCQM, and LQCD respectively, and discovered large differences among their predictions in the SM, which is also consistent with the observation made in the QE weak production of the Λ_c baryon through the ν_μ scattering off nuclei <cit.>. For the NP scenarios, on the other hand, the deviations δ P^l,h_a predicted in NRQM and RCQM are still compatible with the LQCD results at 1σ level.
Finally, we would like to make a comment on the detection of the outgoing τ lepton. It is known that the τ lepton decays rapidly and its decay
products contain at least one undetected neutrino, making its identification very challenging and its polarization states hard to measure. However, its kinematic and polarization information can be inferred from the visible final-state kinematics in its subsequent decays <cit.>. In our upcoming work, we will incorporate this idea into our further analysis of the QE scattering process.
§ ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China under Grant Nos. 12135006 and 12075097, as well as by the Fundamental Research Funds for the Central Universities under Grant Nos. CCNU22LJ004 and CCNU19TD012 and
the Pingyuan Scholars Program under Grant No. 5101029470306.
§ DEFINITIONS AND PARAMETRIZATIONS OF THE Λ_C→ N TRANSITION FORM FACTORS
The Λ_c→ N transition form factors used in this work are defined in
the helicity basis <cit.>. For the vector and axial-vector currents, their hadronic matrix elements are defined, respectively, by
⟨ N(p,s)|d̅γ^μc|Λ_c(p^',s^')⟩
=u̅_N(p,s)[f_0(q^2)(m_Λ_c- m_N)q^μ/q^2
+f_+(q^2)m_Λ_c+m_N/s_+(p^'μ+p^μ-(m^2_Λ_c-m^2_N)q^μ/q^2)
+ f_⊥(q^2)(γ^μ-2m_N/s_+p^'μ-2m_Λ_c/s_+p^μ)]u_Λ_c(p^',s^') ,
and
⟨ N(p,s)|d̅γ^μγ^5c|Λ_c(p^',s^')⟩
=-u̅_N(p,s)γ^5[g_0(q^2)(m_Λ_c+ m_N)q^μ/q^2
+ g_+(q^2)m_Λ_c-m_N/s_-(p^'μ+p^μ-(m^2_Λ_c- m^2_N)q^μ/q^2)
+g_⊥(q^2)(γ^μ+2m_N/s_-p^'μ-2m_Λ_c/s_-p^μ)]u_Λ_c(p^',s^') ,
where q=p^'-p and s_±=(m_Λ_c± m_N)^2-q^2. From Eqs. (<ref>) and (<ref>), we can obtain the hadronic matrix elements of the scalar and pseudo-scalar currents through the equation of motion, which are given, respectively, by
⟨ N(p,s)|d̅c|Λ_c(p^',s^')⟩
=(m_Λ_c- m_N)/m_c- m_df_0(q^2)u̅_N(p,s)u_Λ_c(p^',s^') ,
⟨ N(p,s)|d̅γ^5 c|Λ_c(p^',s^')⟩
=(m_Λ_c+m_N)/m_c+m_dg_0(q^2)u̅_N(p,s)γ^5 u_Λ_c(p^',s^') ,
where m_d(c) denotes the d(c)-quark running mass. Finally, the hadronic matrix element of the tensor current is given by
⟨ N(p,s)|d̅iσ_μν c|Λ_c(p^',s^')⟩
=u̅_N(p,s) [2h_+ p^'_μp_ν - p^'_νp_μ/s_+
+ h_⊥(m_Λ_c + m_N/q^2
× (q_μγ_ν- q_νγ_μ)- 2(1/q^2
+1/s_+)(p^'_μp_ν- p^'_νp_μ))
+h̃_+(iσ_μν-2/s_-[m_Λ_c(p_μγ_ν-p_νγ_μ)
-m_N(p^'_μγ_ν-p^'_νγ_μ)+p^'_μp_ν-p^'_νp_μ])
+h̃_⊥m_Λ_c-m_N/q^2s_-((m_Λ_c^2-m_N^2-q^2)(γ_μp^'_ν-γ_νp^'_μ)
-(m_Λ_c^2-m_N^2+q^2)(γ_μp_ν-γ_νp_μ)
+2(m_Λ_c-m_N)(p^'_μp_ν-p^'_νp_μ))]u_Λ_c(p^',s^') ,
where σ_μν=i[γ_μ,γ_ν]/2.
The parametrization of these Λ_c→ N transition form factors calculated in LQCD takes the form <cit.>
f(q^2)=1/1-q^2 /(m_pole^f)^2∑_n=0^n_max a_n^f[z(q^2)]^n,
with the expansion variable defined by
z(q^2)=√(t_+-q^2)-√(t_+-t_0)/√(t_+-q^2)+√(t_+-t_0),
where t_+=(m_D+m_π)^2 is set equal to the threshold of Dπ two-particle states, t_0=(m_Λ_c-m_N)^2 determines which value of q^2 gets mapped to z=0, and the lowest poles are already factored out before the z expansion, with their quantum numbers and masses listed in Table IV of Ref. <cit.> for the different form factors. The central values and the statistical uncertainties of a^f_0,1,2 in Eq. (<ref>) for different form factors f(q^2) have been evaluated in Ref. <cit.> by the nominal fit (n_max=2), while their systematic uncertainties can be obtained by a combined analysis of both the nominal and higher-order (n_max=3) fits; we refer the readers to Ref. <cit.> for further details.
In Fig. <ref>, we depict the central values as well as the statistical and total uncertainties of these form factors with respect to the kinematics Q^2. It can be seen that the yellow region in each plot grows dramatically along with the increase of Q^2, indicating a larger total uncertainty in the larger Q^2 range. Although the statistical uncertainties also increase along with the increase of Q^2, their behaviors are much milder. Therefore, we only take the statistical uncertainties into account throughout this work.
Often, the hadronic matrix elements of the vector and axial-vector currents are expressed in terms of another set of form factors f^V,A_i with i=1,2,3, which are related to the ones introduced in
Eqs. (<ref>) and (<ref>) by
f_0 =q^2/m_Λ_c(m_Λ_c-m_N)f^V_3+f^V_1 ,
f_+ =f^V_1+q^2/m_Λ_c(m_Λ_c+m_N)f^V_2 ,
f_⊥ =f^V_1+f^V_2(m_N+m_Λ_c)/m_Λ_c ,
g_0 =-q^2/m_Λ_c(m_Λ_c-m_N)f^A_3+f^A_1 ,
g_+ =f^A_1-q^2/m_Λ_c(m_Λ_c-m_N)f^A_2 ,
g_⊥ =f^A_1+f^A_2(m_N-m_Λ_c)/m_Λ_c .
To parametrize the q^2 dependence of this set of form factors, the RCQM model adopts the following double-pole form <cit.>:
f(q^2)=f(0)/(1-aŝ+bŝ^2) ,
with ŝ=q^2/m_Λ_c^2, where the values of the parameters f(0), a, and b are listed in Table <ref>. On the other hand, the MBM and NRQM models employ both the monopole and dipole parametrizations for these form factors <cit.>. For simplicity, we only consider the later, which has the following form:
f(q^2)=A/(1-q^2/M_R^2)^2 ,
where the values of the parameters A and M_R are reported in Table <ref>. We refer the readers to Ref. <cit.> for more details about the form-factor parametrizations in different models.
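For completeness, the two model parametrizations above can be evaluated in the same way; the parameter values in the snippet are placeholders only (the actual numbers are those reported in Tables <ref> and <ref>), so it merely illustrates how the two functional forms extrapolate to q^2<0.

```python
def ff_double_pole(q2, f0, a, b, m_Lc=2.2865):
    """RCQM form: f(q^2) = f(0) / (1 - a*shat + b*shat^2), with shat = q^2/m_Lc^2."""
    shat = q2 / m_Lc**2
    return f0 / (1.0 - a * shat + b * shat**2)

def ff_dipole(q2, A, M_R):
    """MBM/NRQM dipole form: f(q^2) = A / (1 - q^2/M_R^2)^2."""
    return A / (1.0 - q2 / M_R**2)**2

# placeholder parameters, NOT the tabulated model values
for q2 in (-8.0, -4.0, 0.0):
    print(q2, ff_double_pole(q2, 0.5, 1.0, 0.3), ff_dipole(q2, 0.5, 2.0))
```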
In Fig. <ref>, we show the Q^2 dependence of these six form factors associated with the matrix elements of the vector and axial-vector currents in the suitable kinematic range (Q^2>0) for the low-E QE scattering process. It can be seen that the LQCD predicts the largest values for all these six form factors. Especially for f_0, f_+, and g_+, the central values provided by the three models lie outside the 1σ error bars of the LQCD calculations. Interestingly enough, the NRQM model produces the lowest values for f_0,+,⊥, while the MBM model provides the lowest values for g_0,+,⊥.
Finally, it should be mentioned that another set of form factors has also been employed to parametrize the transition matrix elements of the vector and axial-vector currents. They can be related to f^V,A_i in a trivial way, and have been investigated in the (light-cone) QCD sum rule approach (see, e.g., Refs. <cit.>) and the light-front constituent quark model (see, e.g., Refs. <cit.>). However, since the form factors f^V,A_3 were not calculated, the results presented in these references will not be considered in this work.
§ AMPLITUDE SQUARED OF THE QE SCATTERING PROCESS
For the convenience of future discussions, we provide here the explicit expression of the amplitude squared |ℳ|^2 of the QE scattering process ν_τ(k)+n(p)→τ^-(k')+Λ_c(p') mediated by the general effective Lagrangian ℒ_eff (see Eq. (<ref>)). With all the operators of ℒ_eff taken into account, the amplitude square |ℳ|^2 is given explicitly by
|ℳ|^2= |1+g_V^L|^2𝒜_V_L-V_L
+|g_V^R|^2𝒜_V_R-V_R
+(|g_S^L|^2+|g_S^R|^2)𝒜_S_L-S_L
+|g_T^L|^2𝒜_T_L-T_L
+2Re[g_S^Lg_S^R*]𝒜_S_L-S_R
+2Re[g_V^R(1+g_V^L*)]𝒜_V_R-V_L
+2Re[g_S^L(1+g_V^L*)+g_S^Rg_V^R*]𝒜_S_L-V_L
+2Re[g_S^R(1+g_V^L*)+g_S^Lg_V^R*]𝒜_S_R-V_L
+2Re[g_T^L(1+g_V^L*)]𝒜_T_L-V_L
+2Re[g_T^Lg_V^R*]𝒜_T_L-V_R
+2Re[g_T^Lg_S^L*]𝒜_T_L-S_L
+2Re[g_T^Lg_S^R*]𝒜_T_L-S_R ,
where the various subscripts attached to the different 𝒜 on the right-hand side represent the possible interference between the two operators (see Ref. <cit.> for more details). Note that, because of the chiral structures of the lepton and quark currents involved, 𝒜 with different subscripts can be identical to each other, e.g.,
𝒜_S_L-V_L = 𝒜_S_R-V_R and thus only one of them is kept in Eq. (<ref>). The amplitudes associated with other interference terms that are not shown in Eq. (<ref>) are all zero. For convenience, we provide here the explicit expressions of the 𝒜 on the right-handed side of Eq. (<ref>) as
𝒜_V_L-V_L= m_τ^2 (m_τ^2-q^2)/2q^4[f_0^2 (m_Λ_c-m_n)^2 s_++g_0^2 (m_Λ_c+m_n)^2s_-]-m_τ^2(m_Λ_c^2-m_n^2) /q^4
×(f_0 f_++g_0 g_+)[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]
+[f_+^2 (m_Λ_c+m_n)^2/2q^4 s_+.
.+ g_+^2 (m_Λ_c-m_n)^2/2q^4 s_-]{4 m_n^2 q^4 (4 E^2-m_τ^2+q^2)+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)
×[8 E m_n q^2+m_τ^2 (m_Λ_c^2-m_n^2-q^2)]}+(f_⊥^2/s_++ g_⊥^2/s_-){8 E^2 m_n^2 q^2+(m_τ^2-q^2)
×[2 m_Λ_c^2 q^2-4 E m_n (m_n^2-m_Λ_c^2+q^2)-(m_Λ_c^2-m_n^2)^2+2 m_n^2 m_τ^2-q^4]}
-2f_⊥ g_⊥[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]
,
𝒜_V_R-V_R= m_τ^2 (m_τ^2-q^2)/2q^4[f_0^2 (m_Λ_c-m_n)^2 s_++g_0^2 (m_Λ_c+m_n)^2s_-]-m_τ^2(m_Λ_c^2-m_n^2) /q^4
×(f_0 f_++g_0 g_+)[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]
+[f_+^2 (m_Λ_c+m_n)^2/2q^4 s_+.
.+ g_+^2 (m_Λ_c-m_n)^2/2q^4 s_-]{4 m_n^2 q^4 (4 E^2-m_τ^2+q^2)+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)
×[8 E m_n q^2+m_τ^2 (m_Λ_c^2-m_n^2-q^2)]}+(f_⊥^2/s_++ g_⊥^2/s_-){8 E^2 m_n^2 q^2+(m_τ^2-q^2)
×[2 m_Λ_c^2 q^2-4 E m_n (m_n^2-m_Λ_c^2+q^2)-(m_Λ_c^2-m_n^2)^2+2 m_n^2 m_τ^2-q^4]}
+2f_⊥ g_⊥[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]
,
𝒜_S_L-S_L= (m_τ^2-q^2)/2m_c^2[f_0^2 (m_Λ_c-m_n)^2 s_++g_0^2 (m_Λ_c+m_n)^2 s_-] ,
𝒜_T_L-T_L= -8(h_+^2s_++h̃_+^2s_-){4 m_τ^4 m_n^2 + m_Λ_c^4 (q^2-m_τ^2)+2 m_Λ_c^2 (m_τ^2-q^2) (4 E m_n+m_n^2
+q^2)+q^2 (4 E m_n+m_n^2 + q^2)^2- m_τ^2 [m_n^4+6 m_n^2 q^2+q^4+8 E m_n (m_n^2+ q^2)]}
+16[h_⊥^2(m_Λ_c+m_n)^2s_+q^4+h̃_⊥^2(m_Λ_c-m_n)^2s_-q^4]
{2 m_n (2 E+m_n) q^4 (2 E m_n+q^2)
- m_τ^2 q^2 (m_n^2 + q^2) (4 E m_n + m_n^2 + q^2) + m_Λ_c^4 m_τ^2 (m_τ^2 - q^2) +m_τ^4 (m_n^4 + q^4)
- 2 m_Λ_c^2 (m_τ^2 - q^2) [m_τ^2 (m_n^2 + q^2)-2 E m_n q^2]}-32 m_τ^2 (m_Λ_c^2 - m_n^2)/q^4
×[m_Λ_c^2 (m_τ^2 - q^2) - m_τ^2 (m_n^2 + q^2) +
q^2 (4 E m_n + m_n^2 + q^2)]h_⊥ h̃_⊥ ,
𝒜_V_R-V_L= m_τ^2 (m_τ^2-q^2)/2q^4[f_0^2 (m_Λ_c-m_n)^2 s_+-g_0^2 (m_Λ_c+m_n)^2s_-]-m_τ^2(m_Λ_c^2-m_n^2) /q^4
×(f_0 f_+-g_0 g_+)[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]
+[f_+^2 (m_Λ_c+m_n)^2/2q^4 s_+.
.- g_+^2 (m_Λ_c-m_n)^2/2q^4 s_-]{4 m_n^2 q^4 (4 E^2-m_τ^2+q^2)+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)
×[8 E m_n q^2+m_τ^2 (m_Λ_c^2-m_n^2-q^2)]}+(f_⊥^2/s_+- g_⊥^2/s_-){8 E^2 m_n^2 q^2+(m_τ^2-q^2) .
.×[2 m_Λ_c^2 q^2-4 E m_n (m_n^2-m_Λ_c^2+q^2)-(m_Λ_c^2-m_n^2)^2+2 m_n^2 m_τ^2-q^4]} ,
𝒜_S_L-V_L= m_τ(q^2-m_τ^2)/2m_c q^2[f_0^2 (m_Λ_c-m_n)^2 s_++g_0^2 (m_Λ_c+m_n)^2s_-]+m_τ(m_Λ_c^2-m_n^2) /2m_c q^2
×(f_0 f_++g_0 g_+)[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)] ,
𝒜_S_R-V_L= m_τ(q^2-m_τ^2)/2m_c q^2[f_0^2 (m_Λ_c-m_n)^2 s_+-g_0^2(m_Λ_c+m_n)^2s_-]
+m_τ(m_Λ_c^2-m_n^2) /2m_c q^2
×(f_0 f_+-g_0 g_+)[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)] ,
𝒜_T_L-V_L= -2m_τ/q^2{[m_Λ_c^2 (m_τ^2 - q^2) - m_τ^2 (m_n^2 + q^2) + q^2 (4 E m_n + m_n^2 + q^2)][(m_Λ_c- m_n)
×(f_0 h_+ +2f_⊥ h̃_⊥)+(m_Λ_c + m_n) (g_0 h̃_+ + 2g_⊥ h_⊥)]-(m_τ^2 - q^2)[(m_Λ_c+ m_n)
× s_-(f_+ h_+ + 2f_⊥ h_⊥) +(m_Λ_c- m_n)s_+(g_+ h̃_+ + 2g_⊥ h̃_⊥)]} ,
𝒜_T_L-V_R= -2m_τ/q^2{[m_Λ_c^2 (m_τ^2 - q^2) - m_τ^2 (m_n^2 + q^2) + q^2 (4 E m_n + m_n^2 + q^2)][(m_Λ_c- m_n)
× (f_0 h_+ +2f_⊥ h̃_⊥)-(m_Λ_c + m_n) (g_0 h̃_+ + 2g_⊥ h_⊥)]-(m_τ^2 - q^2)[(m_Λ_c+ m_n)
× s_-(f_+ h_+ + 2f_⊥ h_⊥)-(m_Λ_c- m_n)s_+(g_+ h̃_+ + 2g_⊥ h̃_⊥)]} .
§ DETAILS OF THE POLARIZATION VECTORS OF Τ AND Λ_C
We now present the explicit expressions of P_L^l,h, P_P^l,h, and P_T^l,h of the outgoing τ and Λ_c. These components of the polarization vectors are defined in Eq. (<ref>) and read
P_a^l,h =-(𝒫· N_a)^l,h
=Tr[ρ_l,hγ_5 N_a]/Tr[ρ_l,h]
=𝒜_a^(l,h)/2m_(τ,Λ_c)|ℳ|^2 .
Note that the trace over the spin density matrices ρ_l,h has been replaced in the last step by
Tr[ρ_l,h]=2m_(τ,Λ_c)|ℳ|^2 ,
which can be inferred from Eqs. (<ref>) and (<ref>), and the amplitude squared |ℳ|^2 has been given in Eq. (<ref>). In addition, the traces in the numerators have been denoted by 𝒜_a^(l,h), which are given, respectively, by
𝒜_a^l= |1+g_V^L|^2𝒜^l_V_L-V_L
+|g_V^R|^2𝒜^l_V_R-V_R
+(|g_S^L|^2+|g_S^R|^2)𝒜^l_S_L-S_L
+|g_T^L|^2𝒜^l_T_L-T_L+2Re[g_V^R(1+g_V^L*)𝒜^l_V_R-V_L]
+2Re[g_T^L(1+g_V^L*)𝒜^l_T_L-V_L]
+2Re[(g_S^L(1+g_V^L*)+g_S^Rg_V^R*)𝒜^l_S_L-V_L]
+2Re[g_S^Lg_S^R*𝒜^l_S_L-S_R]
+2Re[(g_S^R(1+g_V^L*)+g_S^Lg_V^R*)𝒜^l_S_R-V_L]
+2Re[g_T^Lg_V^R*𝒜^l_T_L-V_R]
+2Re[g_T^Lg_S^L*𝒜^l_T_L-S_L]
+2Re[g_T^Lg_S^R*𝒜^l_T_L-S_R] ,
𝒜_a^h= |1+g_V^L|^2𝒜^h_V_L-V_L
+|g_V^R|^2𝒜^h_V_R-V_R
+(|g_S^L|^2-|g_S^R|^2)𝒜^h_S_L-S_L
+|g_T^L|^2𝒜^h_T_L-T_L+2Re[g_V^R(1+g_V^L*)𝒜^h_V_R-V_L]
+2Re[g_T^L(1+g_V^L*)𝒜^h_T_L-V_L]
+2Re[g_S^L(1+g_V^L*)𝒜^h_S_L-V_L]
+2Re[g_S^R(1+g_V^L*)𝒜^h_S_R-V_L]
+2Re[g_S^Rg_V^R*𝒜^h_S_R-V_R]+2Re[g_S^Lg_V^R*𝒜^h_S_L-V_R]
+2Re[g_T^Lg_V^R*𝒜^h_T_L-V_R]
+2Re[g_T^Lg_S^L*𝒜^h_T_L-S_L]
+2Re[g_T^Lg_S^R*𝒜^h_T_L-S_R]+2Re[g_S^Lg_S^R*𝒜^h_S_L-S_R] .
The explicit expressions of all the 𝒜^l,h on the right-hand side of Eqs. (<ref>) and (<ref>) are presented as follows:
𝒜^l_V_L-V_L= 2m_τ^4 (N_a· k)/q^4[f_0^2 (m_Λ_c-m_n)^2 s_++g_0^2 (m_Λ_c+m_n)^2s_-]-2 m_τ^2 (m_Λ_c^2-m_n^2)/q^4
× (f_0 f_++g_0 g_+){(N_a· k) [4 E m_n q^2+(m_τ^2-q^2) (2 m_Λ_c^2-2 m_n^2-q^2)]
.
.-q^2(m_τ^2-q^2) (N_a· p+N_a· p^')}+2m_τ^2[ f_+^2 (m_Λ_c+m_n)^2/q^4 s_++g_+^2 (m_Λ_c-m_n)^2/q^4 s_-]
×{(N_a· k) [(m_Λ_c^2-m_n^2)(m_τ^2 (m_Λ_c^2-m_n^2-q^2)-q^2 (2 m_Λ_c^2-4m_n E -2m_n^2+q^2)) .
.+q^4(4 m_Λ_c^2-q^2)]-q^2 (N_a· p+N_a· p^')[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]}
-8m_τ^2 f_⊥ g_⊥[(N_a· p) (m_τ^2-q^2-2 E m_n)+2 E m_n (N_a· p^')]
+4 m_τ^2( f_⊥^2/s_++ g_⊥^2/s_-){(N_a· p) [2 E m_n (m_Λ_c^2-m_p^2+q^2)+(m_τ^2-q^2) (m_Λ_c^2+m_p^2-q^2)]
+2 m_n (N_a· p^') [E (m_n^2-m_Λ_c^2+q^2)+m_n (q^2-m_τ^2)]} ,
𝒜^l_V_R-V_R= 2m_τ^4 (N_a· k)/q^4[f_0^2 (m_Λ_c-m_n)^2 s_++g_0^2 (m_Λ_c+m_n)^2s_-]-2 m_τ^2 (m_Λ_c^2-m_n^2)/q^4
× (f_0 f_++g_0 g_+){(N_a· k) [4 E m_n q^2+(m_τ^2-q^2) (2 m_Λ_c^2-2 m_n^2-q^2)]
.
.-q^2(m_τ^2-q^2) (N_a· p+N_a· p^')}+2m_τ^2[ f_+^2 (m_Λ_c+m_n)^2/q^4 s_++g_+^2 (m_Λ_c-m_n)^2/q^4 s_-]
×{(N_a· k) [(m_Λ_c^2-m_n^2)(m_τ^2 (m_Λ_c^2-m_n^2-q^2)-q^2 (2 m_Λ_c^2-4m_n E-2m_n^2+q^2)) .
.+q^4(4 m_Λ_c^2-q^2)]-q^2 (N_a· p+N_a· p^')[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]}
+8m_τ^2 f_⊥ g_⊥[(N_a· p) (m_τ^2-q^2-2 E m_n)+2 E m_n (N_a· p^')]
+4 m_τ^2( f_⊥^2/s_++ g_⊥^2/s_-){(N_a· p) [2 E m_n (m_Λ_c^2-m_n^2+q^2)+(m_τ^2-q^2) (m_Λ_c^2+m_n^2-q^2)]
+2 m_n (N_a· p^') [E (m_n^2-m_Λ_c^2+q^2)+m_n (q^2-m_τ^2)]} ,
𝒜^l_S_L-S_L= 𝒜_S_L-S_L 4m_τ^2( k· N_a)/(m_τ^2-q^2) ,
𝒜^l_T_L-T_L= 32 m_τ^2(h_+^2/s_++h̃_+^2/s_-){2 (N_a· p)
[2 E m_n (m_Λ_c^2-m_n^2+q^2)+(m_τ^2-q^2) (m_Λ_c^2+m_n^2-q^2)]
+4 m_n (N_a· p^') [E (m_n^2-m_Λ_c^2+q^2)+m_n (q^2-m_τ^2)]
+(N_a· k) s_+ s_- }
-32m_τ^2{(N_a· k) (m_τ^2-q^2) s_-s_+ -[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]
×[(N_a· p)(m_Λ_c^2-m_n^2+q^2)+(N_a· p^') (m_n^2-m_Λ_c^2+q^2)]}[(m_Λ_c-m_n)^2h̃_⊥^2/q^4 s_-.
.+(m_Λ_c+m_n)^2h_⊥^2/q^4 s_+]-128m_τ^2 (m_Λ_c^2-m_n^2)h_⊥h̃_⊥/q^4{ (N_a· p) [2 E m_n q^2+(m_Λ_c^2-m_p^2) (m_τ^2-q^2)]
-(N_a· p^') [2 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]} ,
𝒜^l_V_R-V_L= 2m_τ^4 (N_a· k)/q^4[f_0^2 (m_Λ_c-m_n)^2 s_+-g_0^2 (m_Λ_c+m_n)^2s_-]-2 m_τ^2 (m_Λ_c^2-m_n^2)/q^4
× (f_0 f_+-g_0 g_+){(N_a· k) [4 E m_n q^2+(m_τ^2-q^2) (2 m_Λ_c^2-2 m_n^2-q^2)]
.
.-q^2 (m_τ^2-q^2) (N_a· p+N_a· p^')}+2m_τ^2[ f_+^2 (m_Λ_c+m_n)^2/q^4 s_+-g_+^2 (m_Λ_c-m_n)^2/q^4 s_-]
×{(N_a· k) [(m_Λ_c^2-m_n^2)(m_τ^2 (m_Λ_c^2-m_n^2-q^2)-q^2 (2 m_Λ_c^2-4m_n E-2m_n^2+q^2)) .
.+q^4(4 m_Λ_c^2-q^2)]-q^2 (N_a· p+N_a· p^')[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]}
+4 m_τ^2( f_⊥^2/s_+- g_⊥^2/s_-){(N_a· p) [2 E m_n (m_Λ_c^2-m_n^2+q^2)+(m_τ^2-q^2)
.
. × (m_Λ_c^2+m_n^2-q^2)]+2 m_n (N_a· p^') [E (m_n^2-m_Λ_c^2+q^2)+m_n (q^2-m_τ^2)]} ,
𝒜^l_S_L-V_L= -2 m_τ^3 (N_a· k)/m_c q^2{f_0^2 (m_Λ_c-m_n)^2 s_++g_0^2 (m_Λ_c+m_n)^2 s_-}-2 m_τ(m_Λ_c^2-m_n^2) /m_c q^2
×(f_0 f_++g_0 g_+){q^2 (m_τ^2-q^2) (N_a· p^')+(N_a· p-N_a· p^')
×[2 E m_n q^2+(m_Λ_c^2-m_n^2) (m_τ^2-q^2)]-2 i q^2 ε _{k},{k^'},{N_a},{p}} ,
𝒜^l_S_R-V_L= -2 m_τ^3 (N_a· k)/m_c q^2{f_0^2 (m_Λ_c-m_n)^2 s_+-g_0^2 (m_Λ_c+m_n)^2 s_-}-2 m_τ(m_Λ_c^2-m_n^2) /m_c q^2
×(f_0 f_+-g_0 g_+){q^2 (m_τ^2-q^2) (N_a· p^')+(N_a· p-N_a· p^')
×[2 E m_n q^2+(m_Λ_c^2-m_p^2) (m_τ^2-q^2)]-2 i q^2 ε _{k},{k^'},{N_a},{p}} ,
𝒜^l_T_L-V_L= -8 m_τ^3[(m_Λ_c-m_n)f_0h_+/q^2+(m_Λ_c+m_n)g_0h̃_+/q^2]
[ (N_a· p) (2 E m_n-m_τ^2+q^2)
.
.-2 (E m_n (N_a· p^')+i ε _{k}{k^'}{N_a}{p})]+[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]
×[2 i ε_{k}{k^'}{N_a}{p}+(N_a· p) (m_Λ_c^2-m_n^2+m_τ^2-2 E m_n)+(N_a· p^')(2 E m_n-m_Λ_c^2+m_n^2+q^2)]
×[8m_τ(m_Λ_c+m_n)f_⊥ h_⊥/s_+q^2+8m_τ(m_Λ_c-m_n)g_⊥h̃_⊥/s_-q^2]+8m_τ(q^2-m_τ^2)
×[(m_Λ_c-m_n)f_⊥h̃_⊥/q^2+(m_Λ_c+m_n)g_⊥ h_⊥/q^2][2 i ε _{k}{k^'}{N_a}{p}+(N_a· p) (m_Λ_c^2-m_n^2+m_τ^2-2 E m_n)
.
. +(N_a· p^') (2 E m_n-m_Λ_c^2+m_n^2+q^2)]+8m_τ{[4 m_n^2 q^2 (m_τ^2-2 E^2)
.
.+2 E m_n (m_τ^2-3 q^2) (m_n^2-m_Λ_c^2+q^2)-q^2 s_- s_+](N_a· p+N_a· p^')-(m_τ^2+q^2)
×[(m_τ^2-q^2) (m_n^2-m_Λ_c^2+q^2)-4 E m_n q^2] (N_a· p)-2 i [4 E m_n q^2+(m_τ^2-q^2)
.
.×(m_Λ_c^2-m_n^2-q^2)] ε _{k}{k^'}{N_a}{p}}[ (m_Λ_c+m_n)f_+h_+/s_+q^2+ (m_Λ_c-m_n)g_+h̃_+/s_-q^2]
,
𝒜^l_T_L-V_R =-8 m_τ^3[(m_Λ_c-m_n)f_0h_+/q^2-(m_Λ_c+m_n)g_0h̃_+/q^2][ (N_a· p) (2 E m_n-m_τ^2+q^2)
.
.-2 (E m_n (N_a· p^')+i ε _{k}{k^'}{N_a}{p})]+[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]
×[2 i ε_{k}{k^'}{N_a}{p}+(N_a· p) (m_Λ_c^2-m_n^2+m_τ^2-2 E m_n)+(N_a· p^')(2 E m_n-m_Λ_c^2
+m_n^2+q^2)][8m_τ(m_Λ_c+m_n)f_⊥ h_⊥/s_+q^2-8m_τ(m_Λ_c-m_n)g_⊥h̃_⊥/s_-q^2]+8m_τ(q^2-m_τ^2)
×[(m_Λ_c-m_n)f_⊥h̃_⊥/q^2-(m_Λ_c+m_n)g_⊥ h_⊥/q^2][2 i ε _{k}{k^'}{N_a}{p}+(N_a· p) (m_Λ_c^2-m_n^2
.
. +m_τ^2-2 E m_n)+(N_a· p^') (2 E m_n-m_Λ_c^2+m_n^2+q^2)]+8m_τ{[4 m_n^2 q^2 (m_τ^2-2 E^2)
.
.+2 E m_n (m_τ^2-3 q^2) (m_n^2-m_Λ_c^2+q^2)-q^2 s_- s_+](N_a· p+N_a· p^')-(m_τ^2+q^2)
×[(m_τ^2-q^2) (m_n^2-m_Λ_c^2+q^2)-4 E m_n q^2] (N_a· p)-2 i [4 E m_n q^2+(m_τ^2-q^2)
.
.×(m_Λ_c^2-m_n^2-q^2)] ε _{k}{k^'}{N_a}{p}}[ (m_Λ_c+m_n)f_+h_+/s_+q^2- (m_Λ_c-m_n)g_+h̃_+/s_-q^2]
,
𝒜^l_T_L-S_L= 8 m_τ^2/m_c[2 i ε _{k},{N_a},{p},{p^'}+(N_a· p) (2 E m_n-m_τ^2+q^2)-2 E m_n (N_a· p^')]
×[f_0 h_+ (m_Λ_c-m_n)+g_0 h̃_+ (m_Λ_c+m_n)] ,
𝒜^l_T_L-S_R= 8 m_τ^2/m_c[2 i ε _{k},{N_a},{p},{p^'}+(N_a· p) (2 E m_n-m_τ^2+q^2)-2 E m_n (N_a· p^')]
×[f_0 h_+ (m_Λ_c-m_n)-g_0 h̃_+ (m_Λ_c+m_n)] ,
𝒜^l_S_L-S_R= 𝒜_S_L-S_R 2m_τ (k· N_a)/(m_τ^2-q^2) ,
𝒜^h_V_L-V_L= -2m_Λ_c{(N_a· k) [4 E m_n (m_n^2-m_Λ_c^2)+(q^2-m_τ^2) (m_Λ_c^2+3 m_n^2-q^2)+2 s_+ s_-]
.
.+(N_a· p+N_a· p^') [4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]}
×[(m_Λ_c+m_n)f_+ f_⊥/s_++(m_Λ_c-m_n)g_+ g_⊥/s_-]+2f_0 g_0m_τ^2(m_Λ_c^2-m_n^2)/q^4(m_τ^2-q^2)
×[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2
+m_n^2-q^2)]+4 m_Λ_c[4 E m_n q^2+(m_τ^2-q^2)
× (m_Λ_c^2-m_n^2-q^2)]{[s_+s_--2 E m_n (m_Λ_c^2-m_n^2+q^2)+(q^2-m_τ^2) (m_Λ_c^2+m_n^2-q^2)] (N_a· p)
.
.-[s_+s_-+2 E m_n (m_n^2-m_Λ_c^2+q^2)+2 m_n^2 (q^2-m_τ^2)](N_a· p^')}
×[(m_Λ_c+m_n)f_+ g_⊥+(m_Λ_c-m_n)f_⊥ g_+]/q^2s_- s_++[(m_Λ_c-m_n)f_0 g_⊥/q^2 s_-+(m_Λ_c+m_n)f_⊥ g_0/q^2 s_+]
×4 m_Λ_c m_τ^2{(N_a· p )[2 E m_n (m_Λ_c^2-m_n^2+q^2)+(m_τ^2-q^2) (m_Λ_c^2+m_n^2-q^2)]
.
.+2 m_n (N_a· p^')[E (m_n^2-m_Λ_c^2+q^2)+m_n (q^2-m_τ^2)]
+ (N_a· k) s_+s_-}+2(f_⊥^2/s_++g_⊥^2/s_-)
×[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)][2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]
+4f_⊥ g_⊥/s_- s_+{8 E^2 m_n^2 q^2+(m_τ^2-q^2) [2 m_Λ_c^2 (m_n^2+q^2)-4 E m_n (m_p^2-m_Λ_c^2+q^2)
..
..-m_n^4+2 m_n^2 m_τ^2-q^4]}[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]
-2m_τ^2{[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)] [2 m_Λ_c^2( N_a· p)-(N_a· p^' )
..
..× (m_Λ_c^2+m_n^2-q^2)]}[f_0 g_+ (m_Λ_c-m_n)^2/q^4 s_-+f_+g_0 (m_Λ_c+m_n)^2/q^4 s_+]
-2(m_Λ_c^2-m_n^2)f_+ g_+/q^4 s_- s_+
×{16 E^2 m_n^2 q^4+(m_τ^2-q^2) [m_τ^2 (m_n^2-m_Λ_c^2+q^2)^2
-4 m_n q^2 (2 E(m_n^2-m_Λ_c^2+q^2)+m_n q^2)]}
×[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)] ,
𝒜^h_V_R-V_R= -2m_Λ_c{(N_a· k) [4 E m_n (m_n^2-m_Λ_c^2)+(q^2-m_τ^2) (m_Λ_c^2+3 m_n^2-q^2)+2 s_+ s_-]
.
.+(N_a· p+N_a· p^') [4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]}
×[(m_Λ_c+m_n)f_+ f_⊥/s_++(m_Λ_c-m_n)g_+ g_⊥/s_-]+2f_0 g_0m_τ^2(m_Λ_c^2-m_n^2)/q^4(m_τ^2-q^2)
×[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2
+m_n^2-q^2)]+4 m_Λ_c[4 E m_n q^2+(m_τ^2-q^2)
× (m_Λ_c^2-m_n^2-q^2)]{[s_+s_--2 E m_n (m_Λ_c^2-m_n^2+q^2)+(q^2-m_τ^2) (m_Λ_c^2+m_n^2-q^2)] (N_a· p)
.
.-[s_+s_-+2 E m_n (m_n^2-m_Λ_c^2+q^2)+2 m_n^2 (q^2-m_τ^2)](N_a· p^')}
×[(m_Λ_c+m_n)f_+ g_⊥+(m_Λ_c-m_n)f_⊥ g_+]/q^2s_- s_++[(m_Λ_c-m_n)f_0 g_⊥/q^2 s_-+(m_Λ_c+m_n)f_⊥ g_0/q^2 s_+]
×4 m_Λ_c m_τ^2{(N_a· p) [2 E m_n (m_Λ_c^2-m_n^2+q^2)+(m_τ^2-q^2) (m_Λ_c^2+m_n^2-q^2)]
.
.+2 m_n (N_a· p^')[E (m_n^2-m_Λ_c^2+q^2)+m_n (q^2-m_τ^2)]+ (N_a· k) s_+s_-}+2(f_⊥^2/s_++g_⊥^2/s_-)
×[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)][2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_p^2-q^2)]
-4f_⊥ g_⊥/s_- s_+{8 E^2 m_n^2 q^2+(m_τ^2-q^2) [2 m_Λ_c^2 (m_n^2+q^2)-4 E m_n (m_n^2-m_Λ_c^2+q^2)
..
..-m_n^4+2 m_n^2 m_τ^2-q^4]}[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]
-2m_τ^2{[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)] [2 m_Λ_c^2 (N_a· p)-(N_a· p^')
..
..× (m_Λ_c^2+m_n^2-q^2)]}[f_0 g_+ (m_Λ_c-m_n)^2/q^4 s_-+f_+g_0 (m_Λ_c+m_n)^2/q^4 s_+]
+2(m_Λ_c^2-m_n^2)f_+ g_+/q^4 s_- s_+
×{16 E^2 m_n^2 q^4+(m_τ^2-q^2) [m_τ^2 (m_n^2-m_Λ_c^2+q^2)^2
-4 m_n q^2 (2 E (m_n^2-m_Λ_c^2+q^2)
..
..+m_n q^2)]}[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)] ,
𝒜^h_S_L-S_L= 2 f_0 g_0 (m_Λ_c^2-m_n^2) (m_τ^2-q^2)/m_c^2[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)] ,
𝒜^h_T_L-T_L= 32m_τ^2[h̃_⊥^2 (m_Λ_c-m_n)^2/s_-q^4+h_⊥^2 (m_Λ_c+m_n)^2/s_+q^4][(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)+4 E m_n q^2]
×[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]-64m_Λ_c[(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2).
.+4 E m_n q^2]{[s_-s_+-2 E m_n (m_Λ_c^2-m_n^2+q^2)+(q^2-m_τ^2)(m_Λ_c^2+m_n^2-q^2)](N_a· p)
.
.-[s_-s_++2 m_n E_ν_τ(m_n^2-m_Λ_c^2+q^2)+2m_n^2(q^2-m_τ^2)](N_a· p^')}
×h_+ h̃_⊥ (m_Λ_c-m_n)+h̃_+ h_⊥ (m_Λ_c+m_n)/s_-s_+q^2+[h_+ h_⊥ (m_Λ_c+m_n)/s_+q^2+h̃_+ h̃_⊥ (m_Λ_c-m_n)/s_-q^2]
×64m_Λ_cm_τ^2{[s_-s_+-2 E m_n (m_Λ_c^2-m_n^2+q^2)+(q^2-m_τ^2) (m_Λ_c^2+m_n^2-q^2)] (N_a· p)
.
. -[s_-s_++2 m_n E (m_n^2-m_Λ_c^2+q^2)+2m_n^2 (q^2-m_τ^2)](N_a· p^')}
-64 h_⊥h̃_⊥(m_Λ_c^2-m_n^2)/s_-s_+q^4{(m_τ^2-q^2)[m_τ^2(s_-s_++2 m_n^2 q^2)-4 m_n q^2 E (m_n^2
..
..-m_Λ_c^2+q^2)-2 m_n^2 q^4]+8 E^2 m_n^2 q^4}[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]
-32 h_+ h̃_+/s_-s_+{(m_τ^2-q^2)[-4m_n^2 (q^2-m_τ^2)-8m_n E (m_n^2- m_Λ_c^2+q^2)-s_-s_+] +16 E^2 m_n^2 q^2}
×[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)] ,
𝒜^h_V_R-V_L= 8 i m_Λ_c[f_+ g_⊥ (m_Λ_c+m_n)-f_⊥ g_+ (m_Λ_c-m_n)] ε_{k}{k^'}{N_a}{p}+2 [4 E m_n q^2
.
. +(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)][2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)](f_⊥^2/s_+-g_⊥^2/s_-)
-2 m_Λ_c{[2s_+s_-+4 E m_n (m_n^2-m_Λ_c^2)
+(q^2-m_τ^2) (m_Λ_c^2+3 m_n^2-q^2)](N_a· k)
.
.+ [4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)]}(N_a· p+N_a· p^')
×[f_+ f_⊥ (m_Λ_c+m_n)/s_+-g_+ g_⊥ (m_Λ_c-m_n)/s_-] ,
𝒜^h_S_L-V_L= -4 i m_Λ_c m_τε _{k}{N_a}{p}{p^'}/m_c[(m_Λ_c+m_n)g_0 g_⊥+(m_Λ_c-m_n)f_0 f_⊥]
-2 m_Λ_c m_τ/m_c{(m_Λ_c+m_p)f_⊥ g_0/s_++(m_Λ_c-m_n)f_0 g_⊥/s_-}{(N_a· p) [2 E m_n (m_Λ_c^2-m_n^2+q^2)
..
.+(m_τ^2-q^2) (m_Λ_c^2+m_n^2-q^2)]+
2 m_n (N_a· p^') [E (m_n^2-m_Λ_c^2+q^2)+m_n (q^2-m_τ^2)]
.+(N_a· k )s_+s_-}+m_τ[4 E m_n q^2+(m_τ^2-q^2)
(m_Λ_c^2-m_n^2-q^2)][2 m_Λ_c^2( N_a· p)
.
. -(N_a· p^' )(m_Λ_c^2+m_n^2-q^2)]
[(m_Λ_c+m_n)^2f_+ g_0/m_c q^2s_++(m_Λ_c-m_n)^2f_0 g_+/m_c q^2s_-]
+2f_0 g_0m_τ(m_Λ_c^2-m_n^2)/m_c q^2(q^2-m_τ^2)
[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]
,
𝒜^h_S_R-V_L =4 i m_Λ_c m_τε _{k}{N_a}{p}{p^'}/m_c[(m_Λ_c+m_n)g_0 g_⊥-(m_Λ_c-m_n)f_0 f_⊥]
-2 m_Λ_c m_τ/m_c{(m_Λ_c-m_n)f_0 g_⊥/s_--(m_Λ_c+m_n)f_⊥ g_0/s_+}{(N_a· p) [2 E m_n (m_Λ_c^2-m_p^2+q^2)
..
.+(m_τ^2-q^2) (m_Λ_c^2+m_n^2-q^2)]+2 m_n (N_a· p^')
[E (m_n^2-m_Λ_c^2+q^2)+m_n (q^2-m_τ^2)]
.+(N_a· k )s_+s_-}+m_τ[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)][2 m_Λ_c^2 (N_a· p)
.
.-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]
[(m_Λ_c-m_n)^2f_0 g_+/m_c q^2s_--(m_Λ_c+m_n)^2f_+ g_0/m_c q^2s_+]
,
𝒜^h_S_L-V_R= 4 i m_Λ_c m_τε _{k}{N_a}{p}{p^'}/m_c[(m_Λ_c+m_n)g_0 g_⊥-(m_Λ_c-m_n)f_0 f_⊥]
+2 m_Λ_c m_τ/m_c{(m_Λ_c-m_n)f_0 g_⊥/s_--(m_Λ_c+m_n)f_⊥ g_0/s_+}{(N_a· p) [2 E m_n (m_Λ_c^2-m_n^2+q^2)
..
.+(m_τ^2-q^2) (m_Λ_c^2+m_n^2-q^2)]+2 m_n (N_a· p^')
[E (m_n^2-m_Λ_c^2+q^2)+m_n (q^2-m_τ^2)]
.+(N_a· k) s_+s_-}-m_τ[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2
-q^2)][2 m_Λ_c^2 (N_a· p)
.
.-(N_a· p^' )(m_Λ_c^2+m_n^2-q^2)]
[(m_Λ_c-m_n)^2f_0 g_+/m_c q^2s_--(m_Λ_c+m_n)^2f_+ g_0/m_c q^2s_+]
,
𝒜^h_S_R-V_R =4 i m_Λ_c m_τε _{k}{N_a}{p}{p^'}/m_c[(m_Λ_c+m_n)g_0 g_⊥+(m_Λ_c-m_n)f_0 f_⊥]
+2 m_Λ_c m_τ/m_c{(m_Λ_c-m_n)f_0 g_⊥/s_-
+(m_Λ_c+m_n)f_⊥ g_0/s_+}{(N_a· p) [2 E m_n (m_Λ_c^2-m_n^2+q^2)
..
.+(m_τ^2-q^2) (m_Λ_c^2+m_n^2-q^2)]+2 m_n (N_a· p^')
[E (m_n^2-m_Λ_c^2+q^2)+m_n (q^2-m_τ^2)]
.+(N_a· k) s_+s_-}-m_τ[4 Em_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)][2 m_Λ_c^2 (N_a· p)
.
.-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]
[(m_Λ_c-m_n)^2f_0 g_+/m_c q^2s_-+(m_Λ_c+m_n)^2f_+ g_0/m_c q^2s_+]
-2f_0 g_0m_τ(m_Λ_c^2-m_n^2)/m_c q^2(q^2-m_τ^2) [2 m_Λ_c^2
(N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)] ,
𝒜^h_T_L-V_L= 16 i m_Λ_c m_τ[(m_Λ_c^2-m_n^2) (f_0 h_⊥+g_0 h̃_⊥+f_+ h̃_⊥ +g_+ h_⊥)/q^2+f_⊥h̃_++g_⊥ h_+]
ε _{k}{k^'}{N_a}{p}
+8m_Λ_cm_τ[g_+ h̃_⊥ (m_Λ_c-m_n)^2/s_-q^2+f_+ h_⊥ (m_Λ_c+m_n)^2/s_+q^2]{[s_-s_+-2 E m_n (m_Λ_c^2-m_n^2+q^2)
..
.+(q^2-m_τ^2) (m_Λ_c^2+m_n^2-q^2)]
(N_a· p)-[s_-s_++2 E m_n (m_n^2-m_Λ_c^2+q^2)
.
..+2 m_n^2 (q^2-m_τ^2)](N_a· p^')}-(g_⊥h̃_+/s_-+f_⊥ h_+/s_+)
×8m_Λ_cm_τ{s_-s_+(N_a· k)+(N_a· p) [2 E m_n (m_Λ_c^2-m_n^2+q^2)+(m_τ^2-q^2) (m_Λ_c^2
.
.+m_n^2-q^2)]+2 m_n (N_a· p^') [E (m_n^2-m_Λ_c^2+q^2)+m_n (q^2-m_τ^2)]}
+4m_τ[(m_Λ_c-m_n) (2 g_⊥h̃_⊥-f_0 h̃_+)/s_-q^2+(m_Λ_c+m_p) (2 f_⊥ h_⊥-g_0 h_+)/s_+q^2][4 E m_n q^2
.
.+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)][2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]
-4 m_τ[(m_Λ_c+m_n) (2 g_⊥ h_⊥-f_+ h̃_+)/q^2+(m_Λ_c-m_n) (2 f_⊥h̃_⊥-g_+ h_+)/q^2] (m_τ^2-q^2)
×[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]+
4m_Λ_cm_τ{(m_τ^2+q^2)s_-s_+(N_a· k)
+[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)] [(N_a· p) (m_Λ_c^2-m_n^2+q^2)
.
.+(N_a· p^') (m_n^2-m_Λ_c^2+q^2)]}[f_0 h̃_⊥ (m_Λ_c-m_n)^2/s_-q^4+g_0 h_⊥ (m_Λ_c+m_n)^2/s_+q^4]
,
𝒜^h_T_L-V_R= 16 i m_Λ_c m_τ[(m_Λ_c^2-m_n^2) (f_0 h_⊥-g_0 h̃_⊥+f_+ h̃_⊥ -g_+ h_⊥)/q^2+f_⊥h̃_+-g_⊥ h_+]ε _{k}{k^'}{N_a}{p}
+8m_Λ_cm_τ[f_+ h_⊥ (m_Λ_c+m_n)^2/s_+q^2-g_+ h̃_⊥ (m_Λ_c-m_n)^2/s_-q^2]{[s_-s_+-2 E m_n (m_Λ_c^2-m_n^2+q^2)
..
.+(q^2-m_τ^2) (m_Λ_c^2+m_n^2-q^2)](N_a· p)-[s_-s_++2 E m_n (m_n^2-m_Λ_c^2+q^2)
.
..+2 m_n^2 (q^2-m_τ^2)](N_a· p^')}-(f_⊥ h_+/s_+-g_⊥h̃_+/s_-)
×8m_Λ_cm_τ{s_-s_+(N_a· k)+(N_a· p) [2 E m_n (m_Λ_c^2-m_n^2+q^2)+(m_τ^2-q^2) (m_Λ_c^2
..
..+m_n^2-q^2)]+2 m_n (N_a· p^') [E (m_n^2-m_Λ_c^2+q^2)+m_n (q^2-m_τ^2)]}
+4m_τ[(m_Λ_c+m_n) (2 f_⊥ h_⊥+g_0 h_+)/s_+q^2-
(m_Λ_c-m_n) (2 g_⊥h̃_⊥+f_0 h̃_+)/s_-q^2][4 E m_n q^2
.
.+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)][2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]
-4 m_τ[(m_Λ_c-m_n) (2 f_⊥h̃_⊥+g_+ h_+)/q^2-(m_Λ_c+m_n) (2 g_⊥ h_⊥+f_+ h̃_+)/q^2] (m_τ^2-q^2)
×[2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]+
4m_Λ_cm_τ{(m_τ^2+q^2)s_-s_+(N_a· k)
+[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)] [(N_a· p) (m_Λ_c^2-m_n^2+q^2)
.
.+(N_a· p^') (m_p^2-m_Λ_c^2+q^2)]}[f_0 h̃_⊥ (m_Λ_c-m_n)^2/s_-q^4-g_0 h_⊥ (m_Λ_c+m_n)^2/s_+q^4]
,
𝒜^h_T_L-S_L= 4 [4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)] [2 m_Λ_c^2 (N_a· p)-(N_a· p^') (m_Λ_c^2+m_n^2-q^2)]
×[(m_Λ_c-m_n)f_0 h̃_+/m_c s_-+(m_Λ_c+m_n)g_0 h_+/m_c s_+]
-16 i m_Λ_c (m_Λ_c^2-m_n^2)ε _{k}{k^'}{N_a}{p}/m_c
×(f_0 h_⊥+g_0 h̃_⊥)
-4{[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)] [(N_a· p) (m_Λ_c^2-m_n^2+q^2)
..
.+(N_a· p^') (m_n^2-m_Λ_c^2+q^2)]+(N_a· k) (m_τ^2+q^2) [m_Λ_c^4-2 m_Λ_c^2 (m_n^2+q^2)
.
..+(m_n^2-q^2)^2]}[m_Λ_c (m_Λ_c-m_n)^2f_0 h̃_⊥/m_c q^2 s_-+m_Λ_c (m_Λ_c+m_n)^2g_0 h_⊥/m_c q^2 s_+] ,
𝒜^h_T_L-S_R= 4 [4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_n^2-q^2)] [2 m_Λ_c^2 (N_a· p)-(N_a· p^' )(m_Λ_c^2+m_n^2-q^2)]
×[(m_Λ_c-m_n)f_0 h̃_+/m_c s_--(m_Λ_c+m_n)g_0 h_+/m_c s_+]
-16 i m_Λ_c (m_Λ_c^2-m_n^2)ε _{k}{k^'}{N_a}{p}/m_c
×(f_0 h_⊥-g_0 h̃_⊥)
-4{[4 E m_n q^2+(m_τ^2-q^2) (m_Λ_c^2-m_p^2-q^2)] [(N_a· p) (m_Λ_c^2-m_p^2+q^2)
..
.+(N_a· p^') (m_n^2-m_Λ_c^2+q^2)]+(N_a· k) (m_τ^2+q^2) [m_Λ_c^4-2 m_Λ_c^2 (m_n^2+q^2)
.
..+(m_n^2-q^2)^2]}[m_Λ_c (m_Λ_c-m_n)^2f_0 h̃_⊥/m_c q^2 s_--m_Λ_c (m_Λ_c+m_p)^2g_0 h_⊥/m_c q^2 s_+] ,
𝒜^h_S_L-S_R =0 ,
where ε_{k}{k^'}{N_a}{p}≡ε _μναβk^μk^'νN_a^αp^β, with ε being a totally antisymmetric tensor. From the equations above, it is clear that 𝒜^l,h with the same subscripts are always real.
|
http://arxiv.org/abs/2307.04056v2 | 20230708231953 | Manifold Filter-Combine Networks | ["Joyce Chew", "Edward De Brouwer", "Smita Krishnaswamy", "Deanna Needell", "Michael Perlmutter"] | stat.ML | ["stat.ML", "cs.LG", "cs.NA", "eess.SP", "math.NA"] |
Manifold Filter-Combine Networks
================================
We introduce a class of manifold neural networks (MNNs) that we call Manifold Filter-Combine Networks (MFCNs), that aims to further our understanding of MNNs, analogous to how the aggregate-combine framework helps with the understanding of graph neural networks (GNNs). This class includes a wide variety of subclasses that can be thought of as the manifold analog of various popular GNNs. We then consider a method, based on building a data-driven graph, for implementing such networks when one does not have global knowledge of the manifold, but merely has access to finitely many sample points. We provide sufficient conditions for the network to provably converge to its continuum limit as the number of sample points tends to infinity. Unlike previous work (which focused on specific graph constructions), our rate of convergence does not directly depend on the number of filters used. Moreover, it exhibits linear dependence on the depth of the network rather than the exponential dependence obtained previously. Additionally, we provide several examples of interesting subclasses of MFCNs and of the rates of convergence that are obtained under specific graph constructions.
§ INTRODUCTION
Geometric deep learning <cit.> is an emerging field that aims to extend the success of deep learning from data such as images, with a regular grid-like structure, to more irregular domains such as graphs and manifolds. As part of the rise of geometric deep learning, graph neural networks (GNNs) have rapidly emerged as an extremely active area of research in data science <cit.> and are also used in industrial applications such as Google Maps<cit.> and Amazon's product recommender system<cit.>. However, there has been much less work on the development of Manifold Neural Networks (MNNs) and much of the existing literature focuses on two-dimensional surfaces embedded in three-dimensional space <cit.>.
In this paper, we consider the more general setting of a compact, connected, d-dimensional Riemannian manifold ℳ embedded in D-dimensional space.
One of the principal challenges in extending deep learning to graphs and manifolds is developing a proper notion of convolution, which is non-trivial because there is no natural notion of translation. In the graph setting, a popular family of solutions, known as spectral methods, define convolution via the eigendecomposition of the graph Laplacian (or another suitable matrix). A limitation of this method is that explicitly computing eigendecompositions is expensive for large graphs. To overcome this obstacle, spectral graph neural networks such as ChebNet <cit.> and CayleyNet <cit.> define convolution in terms of polynomials of the graph Laplacian 𝐋=𝐃-𝐀. This leads to filters of the form h(𝐋)𝐱 where h is a polynomial and 𝐱 is a signal defined on the vertices of the graph.
With this notion of convolution, one may consider networks with layerwise update rules of the form:
𝐱^(ℓ+1)=σ(h^(ℓ)(𝐋)𝐱^(ℓ)),
where σ is a pointwise, nonlinear activation function.
If one is given multiple initial graph signals 𝐱_1,…, 𝐱_C organized into a data matrix 𝐗=(𝐱_1,…,𝐱_C) and uses multiple filters in each layer, then the layerwise update rule can be extended to
𝐱^(ℓ+1)_k=σ(∑_j=1^C h^(ℓ)_j,k(𝐋)𝐱^(ℓ)_j).
If one assumes that each filter h^ℓ_j,k belongs to a parameterized family of functions such as Chebyshev polynomials, one could then attempt to learn the optimal parameters from training data.
Inspired by this approach, Wang, Ruiz, and Ribeiro <cit.> have introduced manifold neural networks with layerwise update rules similar to (<ref>).
In particular, they assume that they are given C functions f_1,…,f_C:ℳ:→ℝ and utilize a layerwise update rule of
f^(ℓ+1)_k=σ(∑_j=1^C h^(ℓ)_j,k(ℒ)f^(ℓ)_j),
where ℒ=-div∘∇ is the Laplace-Beltrami operator, the natural analog of the graph Laplacian in the manifold setting. They then provide an analysis of the stability of such networks to absolute and relative perturbations of the Laplace-Beltrami operator.
However, many popular graph neural networks take an approach different than (<ref>). Rather than using multiple learnable filters for each input channel and then summing across channels, they instead filter each graph signal with a pre-designed operator (or operators) and then learn relationships between the filtered input signals. For example, the Graph Convolutional Network (GCN)[Here, we use the term GCN to refer to the specific network introduced in <cit.>. We will use the term GNN to refer to a general graph neural network] <cit.> performs a predesigned aggregation
𝐗→𝐀𝐗
where 𝐀=(𝐃+𝐈)^-1/2(𝐀+𝐈)(𝐃+𝐈)^-1/2 and utilizes a right-multiplication by a trainable weight matrix Θ to learn relationships between the channels. This leads to the layerwise update rule
𝐗^(ℓ+1)=σ(𝐀𝐗^(ℓ)Θ^(ℓ)),
where σ is as in (<ref>).[The matrix 𝐀 can be obtained by applying the polynomial h(λ)=1-λ/2 to a normalized version of the graph Laplacian and then some adjustments which help with the training of the network. Therefore, we can essentially think of the operation 𝐱→𝐀𝐱 as a spectral convolution.] This raises an intriguing question:
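For concreteness, a minimal dense-matrix sketch of this GCN update is given below; the adjacency matrix A, feature matrix X, and weight matrix Theta are generic placeholders, ReLU stands in for σ, and a practical implementation would of course use sparse operations inside a deep-learning framework.

import numpy as np

def gcn_layer(A, X, Theta):
    # One GCN layer: sigma(A_hat @ X @ Theta), with the normalization used in the text,
    # A_hat = (D + I)^{-1/2} (A + I) (D + I)^{-1/2}, and ReLU as the nonlinearity sigma.
    n = A.shape[0]
    d = A.sum(axis=1) + 1.0                  # diagonal of D + I
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ (A + np.eye(n)) @ D_inv_sqrt
    return np.maximum(A_hat @ X @ Theta, 0.0)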
How should manifold neural networks be designed? Should they follow the lead of (<ref>) and (<ref>) and utilize multiple learnable filters for each input channel with a predesigned summation over channels or should they utilize predesigned filtering operations and incorporate learning via cross-feature operations analogous to (<ref>)?
It is likely that the answer to this question will vary depending on the dataset and the task of interest. Networks with multiple learnable filters for each channel are more general and will have greater expressive power. On the other hand, networks that, for example, use a common (either learnable or designed) filterbank shared across all channels are a more constrained family of networks. This constraint imposes a certain structure on the network and reduces the number of trainable parameters, which may provide a useful inductive bias in certain settings and may be particularly useful in low-data environments.
Another critical challenge in the development of manifold neural networks is that in many applications
one does not have global knowledge of the manifold. Instead, one is given a collection of points {x_j}_j=1^n in some high-dimensional Euclidean space ℝ^D and makes the modeling assumption that the points x_j lie on some d-dimensional manifold for d≪ D. This assumption, known as the manifold hypothesis, is frequently used in the analysis of biomedical data arising from, e.g., single-cell imaging <cit.>. This leads us to the following question:
How can one implement a manifold neural network when one does not have global knowledge of the manifold but only has access to finitely many sample points?
In order to help answer this question, several works such as <cit.> have used an approach based on Laplacian eigenmaps <cit.> (see also <cit.>) where one builds a data-driven graph 𝐆_n such that the eigenvectors and eigenvalues of the graph Laplacian approximate the eigenfunctions and eigenvalues of the Laplace-Beltrami Operator. They show that if the graph is constructed properly, then a graph neural network of the form (<ref>) will converge to a continuum limit of the form (<ref>) as the number of sample points, n, tends to infinity. However, these results are limited in the sense that (i) they assume specific graph constructions and (ii) their rates of convergence depend exponentially on the depth of the network.
In this work, we introduce a new framework for understanding MNNs that we call
Manifold Filter-Combine Networks. The manifold filter-combine paradigm is meant to parallel the aggregate-combine framework commonly considered in the GNN literature (see, e.g., <cit.>) and naturally leads one to consider many interesting classes of MNNs which may be thought of as the manifold counterparts of various popular GNNs. We then provide sufficient conditions for such networks to converge to a continuum limit as the number of sample points, n, tends to infinity. More specifically, the contributions of this work are:
* We introduce Manifold Filter-Combine Networks as a novel framework for understanding MNNs. This framework readily leads one to many interesting classes of MNNs such as the manifold equivalent of Kipf and Welling's GCN <cit.>, learnable variations of the manifold scattering transform <cit.>, and many others.
* In Theorem <ref>, we provide sufficient conditions for the individual filters used in an MNN to provably converge to a continuum limit as n→∞ if the filtering is done via a spectral approach. Here the rate of convergence depends on the rates at which the eigenvectors/eigenvalues of the graph Laplacian approximate the eigenfunctions/eigenvalues of the Laplace-Beltrami operator as well as the rate at which discrete inner products approximate continuum inner products.
* In Theorem <ref>, we prove that if the individual filters converge as n→∞, then so does the entire MNN. The rate of convergence will depend on (i) the rate of convergence of the individual filters; (ii) the weights used in the network; (iii) the depth of the network. Importantly, we note that our dependence on the depth of the network is linear, rather than the exponential dependence obtained in previous work. Additionally, our rate does not directly depend on the number of filters used per layer. We also note that Theorem <ref> does not assume that the filters have any particular form. Therefore, if one were to prove results analogous to Theorem <ref> for non-spectral filters, then Theorem <ref> would immediately imply the convergence of networks constructed from those filters.
* We then provide several corollaries to Theorem <ref>, which give concrete examples of our results in special cases of interest in Corollaries <ref>, <ref>, <ref>, and <ref>. These results may be summarized as follows:
* If the filters are implemented spectrally, then the discretization error of the entire MFCN tends to zero at a rate depending on how fast the eigenvalues/eigenvectors of the Laplacian corresponding to the data-driven graph 𝐆_n converge to the eigenvalues/eigenfunctions of the continuum Laplacian and how fast discrete inner products converge to continuum inner products.
* If 𝐆_𝐧 is constructed via a Gaussian kernel and the filters are implemented spectrally, then (up to log factors) the discretization error is 𝒪(n^-2/(d+6)).
* If 𝐆_𝐧 is constructed via a k-NN graph or an ϵ-graph and the filters are implemented spectrally, then (up to log factors) the discretization error is 𝒪(n^-1/(d+4)).
§.§ Notation
We let ℳ be a compact, connected, d-dimensional Riemannian manifold with normalized Riemannian volume form μ such that μ(ℳ)=1. We let 𝐋^2(ℳ) denote the set of functions that are square integrable with respect to μ and 𝒞(ℳ) denote the set of continuous functions on ℳ. We let ℒ=-div∘∇ denote the Laplace-Beltrami operator and let {ϕ_i}_i=1^∞ denote an orthonormal basis of eigenfunctions ℒϕ_i=λ_iϕ_i, with
0=λ_1<λ_2≤…. We will use these eigenfunctions to define Fourier coefficients denoted by f(i).
In much of our analysis, we will assume that ℳ is unknown and that we only have access to a function f∈𝒞(ℳ) evaluated at sample points {x_j}_j=1^n⊆ℝ^D. In this setting, we will let P_n:𝒞(ℳ)→ℝ^n be the normalized evaluation operator
(P_nf)(i)=1/√(n)f(x_i),
and let 𝐆_n denote a graph whose vertices are the sample points x_j. We will let 𝐋_n denote the graph Laplacian associated to 𝐆_n and let ϕ_i^n be an orthonormal basis of eigenvectors, 𝐋_nϕ_i^n=λ^n_iϕ_i^n, 0=λ^n_1≤λ^n_2≤…≤λ^n_n. Analogous to the continuous setting, we will use the ϕ_i^n to define discrete Fourier coefficients 𝐱(i).
In this paper, we consider a family of neural networks to process functions defined on ℳ. Towards this end, we will let F=(f_1,…,f_C) denote a row-vector valued function and let F^(ℓ) denote the hidden representation in the ℓ-th layer of our network, with F^(0)=F. When we approximate our network on 𝐆_n, we will instead assume that we are given an n× C data matrix 𝐗=(𝐱_1,…,𝐱_C).
§.§ Organization
The rest of this paper is organized as follows. In Section <ref>, we will provide an overview of spectral convolution on manifolds, explain how to implement such networks on point clouds, and state a theorem providing sufficient criteria for the discrete point-cloud implementation to converge to the continuum limit as the number of sample points tends to infinity. In Section <ref>, we introduce manifold-filter combine networks, discuss several examples of networks contained in our framework, and state a theorem showing that a discrete point cloud implementation converges to the continuum limit as well as several corollaries focusing on specific graph constructions. In Appendices <ref> and <ref>, we will prove the theorems stated in Sections <ref> and <ref>. We will conduct numerical experiments in Section <ref>, before providing a brief conclusion in Section <ref>.
§ SPECTRAL CONVOLUTION ON MANIFOLDS
As alluded to in the introduction, the extension of convolutional methods to the manifold setting is non-trivial because there is no natural notion of translation. Many possible solutions to this problem have been proposed including methods based on parallel transport <cit.>, local patches <cit.>, or Fréchet means <cit.>.
In this section, we will focus on spectral methods that rely on a generalized Fourier transform defined in terms of the eigendecomposition of the Laplace-Beltrami operator.
Let ℳ be a compact d-dimensional Riemannian manifold without boundary, and let ℒ be the Laplace-Beltrami operator on ℳ. It is well-known that ℒ has an orthonormal basis of eigenfunctions {ϕ_i}_i=1^∞ with ℒϕ_i=λ_iϕ_i, λ_i≥ 0. This implies that for f∈𝐋^2(ℳ), we may write
f=∑_i=1^∞f(i) ϕ_i,
where, for 1≤ i <∞, f(i) is the generalized Fourier coefficient defined by ⟨ f,ϕ_i⟩_𝐋^2(ℳ).
Motivated by the convolution theorem in real analysis, we will define manifold convolution as multiplication in the Fourier domain. In particular, given a bounded measurable function w:[0,∞)→ℝ, we define a spectral convolution operator w(ℒ):𝐋^2(ℳ)→𝐋^2(ℳ) by
w(ℒ)f=∑_i=1^∞ w(λ_i) f(i) ϕ_i.
By Plancherel's theorem, we may observe that
‖w(ℒ)f‖_𝐋^2(ℳ)=(∑_i=1^∞ |w(λ_i)|^2|f(i)|^2)^1/2≤‖w‖_𝐋^∞([0,∞))‖f‖_𝐋^2(ℳ).
Additionally, we note that since these spectral convolution operators are defined in terms of a function w:[0,∞)→ℝ, one may verify that w(ℒ) does not depend on the choice of the orthonormal basis {ϕ_i}_i=1^∞. (See for example Remark 1 of <cit.>.)
In our analysis of such filters, similar to <cit.> and <cit.>, we will assume that w is Lipschitz, and let A_Lip denote the smallest constant such that for all a,b∈[0,∞) we have
|w(a)-w(b)| ≤ A_Lip(w)|a-b|.
We will also assume that either f or w(ℒ) is bandlimited as defined below.
Let κ>0, let
f∈𝐋^2(ℳ), and let w(ℒ) be a spectral filter. We say that f is κ-bandlimited if f(i)=0 for all i>κ. Similarly, w(ℒ) is said to be κ-bandlimited if w(λ_i)=0 for all i>κ.
§.§ Implementation of Spectral Filters on Point Clouds
In many applications of interest, one does not know the manifold ℳ.
Instead, one is given access to finitely many sample points x_1,…,x_n∈ℝ^D and makes the modeling assumption that these sample points lie upon (or near) an unknown d-dimensional Riemannian manifold for some d≪ D. In this setup, it is non-trivial to actually implement a neural network since one does not have global knowledge of the manifold. Here, we will use an approach based on manifold learning <cit.> where we construct a data-driven graph 𝐆_n, whose vertices are the sample points x_1,…,x_n, and use the eigenvectors and eigenvalues of the graph Laplacian 𝐋_n to approximate the eigenfunctions and eigenvalues of the Laplace-Beltrami operator. As we will discuss below, there are numerous methods for constructing 𝐆_n including k-nn graphs, ϵ-graphs, and graphs derived from Gaussian kernels.
More specifically, we let {ϕ_i^n}_i=1^n be an orthonormal basis of eigenvectors,
𝐋_n ϕ_i^n = λ_i^n ϕ_i^n, 0=λ_1^n≤λ_2^n≤…λ_n^n, and analogous to (<ref>) we will write
𝐱=∑_i=1^n 𝐱(i) ϕ_i^n, 𝐱(i)=⟨𝐱,ϕ^n_i⟩_2
for 𝐱∈ℝ^n.
We then define a discrete approximation of w(ℒ) defined by
w(𝐋_n)𝐱=∑_i=1^∞ w(λ^n_i) 𝐱(i) ϕ^n_i.
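When either the signal or the filter is (approximately) κ-bandlimited, only the κ smallest eigenpairs of 𝐋_n are needed to evaluate this sum. A minimal sketch, assuming 𝐋_n is stored as a symmetric (sparse) matrix and that the function and parameter names are purely illustrative, might read:

import numpy as np
from scipy.sparse.linalg import eigsh

def spectral_filter(L_n, x, w, kappa):
    # Approximate w(L_n) x using only the kappa smallest eigenpairs of the
    # symmetric positive semi-definite graph Laplacian L_n.
    lam, phi = eigsh(L_n, k=kappa, which="SM")   # shift-invert (sigma=0) is often faster in practice
    coeffs = phi.T @ x                           # discrete Fourier coefficients <x, phi_i>_2
    return phi @ (w(lam) * coeffs)               # sum_i w(lambda_i) <x, phi_i> phi_i

# Example filter: a heat kernel w_t(lambda) = exp(-t * lambda) with t = 0.5.
heat = lambda lam: np.exp(-0.5 * lam)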
Our hope is that if 𝐆_n is constructed properly,
then
‖w(𝐋_n)P_nf-P_nw(ℒ)f‖_2 will converge to zero as n tends to infinity, where P_n:𝒞(ℳ)→ℝ^n is the normalized evaluation operator defined as in (<ref>). Notably, in order to bound ‖w(𝐋_n)P_nf-P_nw(ℒ)f‖_2 we must account for three sources of discretization error:
* The graph eigenvalue λ_i^n does not exactly equal the manifold eigenvalue λ_i. Intuitively, this should yield an error on the order of α_i,nA_Lip(w), where α_i,n=|λ_i-λ_i^n|.
The graph eigenvector ϕ_i^n does not exactly equal P_nϕ_i, the discretization of the true continuum eigenfunction. One may anticipate this yielding errors of the order β_i,n, where β_i,n=‖ϕ_i^n-P_nϕ_i‖_2.
* The discrete Fourier coefficient 𝐱(i) is not exactly equal to f(i). Since Fourier coefficients are defined in terms of inner products, one expects this error to be controlled by a term γ_n which describes how much discrete inner products ⟨ P_n f,P_n g⟩_2 differ from continuum inner products ⟨ f,g⟩_𝐋^2(ℳ).
Combining these sources of error, and letting α_n=max_iα_i,n,β_n=max_iβ_i,n, one anticipates that if either f or w(ℒ) is κ bandlimited, then the total error will be 𝒪(κ(α_nA_Lip(w)+β_n+γ_n)). This intuition is formalized in the following theorem. For a proof, please see Appendix <ref>.
Let w:[0,∞)→ℝ, ‖w‖_𝐋^∞([0,∞))≤ 1,
let f∈𝐋^2(ℳ) be a continuous function, and assume that either f or w(ℒ) is κ-bandlimited.
Assume that there exist sequences of real numbers {α_n}_n=1^∞, {β_n}_n=1^∞, {γ_n}_n=1^∞, with lim_n→∞α_n=lim_n→∞β_n=lim_n→∞γ_n=0, such that for all 1≤ i ≤κ and for n sufficiently large, we have
|λ_i-λ^n_i|≤α_n, ‖P_nϕ_i-ϕ_i^n‖_2≤β_n,
|⟨ P_nf, P_ng ⟩_2 - ⟨ f,g⟩_𝐋^2(ℳ)| ≤γ_n^2‖fg‖_𝐋^∞(ℳ),
Then for n large enough such that (<ref>) holds and α_n,β_n,γ_nκ^1/2≤ 1, we have
‖w(𝐋_n)P_nf-P_nw(ℒ)f‖_2≤
C_ℳκ((A_Lip(w)α_n+β_n)‖f‖_𝐋^2(ℳ)+γ_n‖f‖_𝐋^∞(ℳ)).
Furthermore, for all n large enough such that (<ref>) holds and α_n,β_n,γ_nκ^1/2≤ 1 and all 𝐱∈ℝ^n, we have
‖w(𝐋_n)𝐱-P_nw(ℒ)f‖_2≤‖𝐱-P_nf‖_2 +
C_ℳκ((A_Lip(w)α_n+β_n)‖f‖_𝐋^2(ℳ)+γ_n‖f‖_𝐋^∞(ℳ)),
where, in both (<ref>) and (<ref>), C_ℳ is a constant depending on the geometry of ℳ.
In particular, if
𝐱=P_nf, (<ref>) implies that
lim_n→∞‖w(𝐋_n)𝐱-P_nw(ℒ)f‖_2=0.
Inspecting the proof of Theorem <ref>, one may note that A_Lip(w) may actually be replaced by the Lipschitz constant on the smallest interval containing all λ_i and all λ_i^n, 1≤ i ≤κ, where λ_i≠λ_i^n. This means that, if f is bandlimited, our result may be applied to any continuously differentiable function w. Moreover, for most common graph constructions, we have λ_1=λ_1^n=0 and 0<λ_2,λ_2^n. This implies that our theorem can be applied to any w which is continuously differentiable on (0,∞) even if, for example, lim_t→ 0^+w'(t)=+∞ (which is the case for certain wavelets, such as those considered in <cit.>). Additionally, we note that with minor modifications, results similar to Theorem <ref> may be obtained for functions or filters which are approximately bandlimited in the sense that either sup_k>κ|w(λ_k)| or ∑_k>κ|f(k)|^2 are sufficiently small. In these cases, we will have
lim sup_n→∞w(𝐋_n)𝐱-P_nw(ℒ)f_2 ≤sup_k>κ|w(λ_k)|f_𝐋^2(ℳ)
or lim sup_n→∞w(𝐋_n)𝐱-P_nw(ℒ)f_2 ≤w_∞(∑_k>κ|f(k)|^2)^1/2. In particular, results similar to Theorem <ref> may be obtained for filters w_t(λ) e^-tλ, which correspond to the heat kernel.
In the following section, we will consider neural networks constructed from spectral filters and use Theorem <ref> to show that discrete approximations of such networks converge to their continuum limit as n→∞. However, first, we will consider several examples of graph constructions where estimates for α_n and β_n are known. In all of the examples below, we will assume that the data points x_i are generated i.i.d. uniformly at random (with respect to the normalized Riemannian volume form μ). In this setting, Lemma 5 of <cit.> implies that with probability at least 1 - 𝒪(1/n^9) we have
γ_n = (18log(n)/n)^1/4.
We note that in <cit.> the inequality (<ref>) was derived via Hoeffding's inequality which is why the definition of γ_n involves the ℓ^∞ norm of fg. However, if one were to use a different method, such as Bernstein's inequality to derive bounds for |⟨ P_nf, P_ng ⟩_2 - ⟨ f,g⟩_𝐋^2(ℳ)| in terms of other norms, then all of our proof techniques could likely be pushed through to obtain results similar to Theorem <ref>.
[Gaussian Kernels]
One simple way to construct a graph is with a Gaussian kernel.
Specifically, given a bandwidth parameter ϵ, we define a weighted adjacency matrix 𝐖_ϵ whose entries are given by
[𝐖_n,ϵ]_i,j = 1/(nϵ^1 + d/2) e^-‖𝐱_i - 𝐱_j‖_2^2 / ϵ
and let 𝐃_n,ϵ be the corresponding diagonal degree matrix. Then the associated graph Laplacian 𝐋_n,ϵ is
𝐋_n, ϵ = 𝐃_n, ϵ-𝐖_n, ϵ.
In this case, if ϵ∼ n^-2/(d+6), and the data points x_i are generated i.i.d. uniformly at random, then Theorem 5.4 of <cit.> implies that, under mild assumptions, we may choose
α_n = C_ℳ n^-2/(d+6), β_n = C_ℳ n^-2/(d+6)√(log(n)),
with probability at least 1 - 𝒪(1/n^9)[For details on how to deduce (<ref>) from Theorem 5.4 of <cit.> we refer the reader to Remark 1 of <cit.> and the proof of Theorem 10 of <cit.>.].
Estimates such as these were used to analyze the convergence of the manifold scattering transform on Gaussian-kernel graphs in <cit.> and more general MNNs in <cit.> and <cit.>.
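A direct, dense implementation of this construction from an (n, D) point cloud might look as follows; the bandwidth follows the scaling ϵ∼ n^-2/(d+6) discussed above, with c an unspecified tuning constant, and the function name is ours.

import numpy as np
from scipy.spatial.distance import cdist

def gaussian_kernel_laplacian(X, d, c=1.0):
    # Dense Gaussian-kernel graph Laplacian L = D - W from an (n, D) point cloud X,
    # with bandwidth eps = c * n^{-2/(d+6)} as suggested by the convergence result above.
    n = X.shape[0]
    eps = c * n ** (-2.0 / (d + 6))
    W = np.exp(-cdist(X, X, "sqeuclidean") / eps) / (n * eps ** (1.0 + d / 2.0))
    D = np.diag(W.sum(axis=1))
    return D - W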
While constructing a graph from a kernel is simple, it has the drawback of producing dense graphs which pose computational issues for large values of n. Therefore, we also consider two methods for constructing sparse graphs that have previously been analyzed in works such as <cit.>
and <cit.>.
[ϵ-graphs]
Let ϵ>0, let η:[0,∞)→ [0,∞) be a nonincreasing function supported on the interval [0,1] such that η(1/2)>0 and the restriction of η to [0,1] is Lipschitz continuous.
A weighted ϵ-graph is constructed by placing an edge between all x_i,x_j such that |x_i-x_j|≤ϵ. Then, if x_i and x_j are connected by an edge, the corresponding entry in a weighted adjacency matrix is given by
[𝐖_n,ϵ]_i,j=η(|x_i-x_j|/ϵ).
The ϵ-graph Laplacian is then given by
𝐋=c_η/(nϵ^d+2)(𝐃_n,ϵ-𝐖_n,ϵ),
where c_η is the constant
c_η = ∫_ℝ^d |y_1|^2 η(|y|)dy,
and y_1 is the first coordinate of a vector y ∈ℝ^d, and 𝐃_n,ϵ is the weighted degree matrix corresponding to 𝐖_n,ϵ.
Theorems 2.4 and 2.7 of <cit.> show, for example, that if ϵ is chosen as ϵ∼ ( log(n)/n )^1/(d+4), then, under mild assumptions, we may choose
α_n = C_ℳ ( log(n)/n )^1/(d+4), β_n = C_ℳ ( log(n)/n )^1/(d+4)
with probability at least 1 - 𝒪(n^-9). Estimates similar to (<ref>) were used to analyze the convergence of MNNs on ϵ-graphs in <cit.> and <cit.>.
The graph Laplacians of ϵ-graphs are sparse by construction, and their sparsity is indirectly controlled by the length scale parameter ϵ. To directly control the sparsity of the graph Laplacian in an adaptive manner without specifying a length scale, one may also consider k-NN graphs.
[k-NN graphs]
For a positive integer k, symmetric k-Nearest Neighbor (k-NN) graphs are constructed by placing an edge between x_i and x_j if x_j is one of the k closest points to x_i (with respect to the Euclidean distance) or[One might also consider mutual k-NN graphs where we require x_i to be one of the k closest points to x_j and x_j to be one of the k-closest points to x_i. However, such graphs are not analyzed in the theorem we cite from <cit.>.] if x_i is one of the k closest points to x_j. Then, the edges can be given weights in a manner similar to <Ref>.
Formally, let ϵ_k(x_i) denote the distance from x_i to its k-th closest neighbor (with respect to Euclidean distance) and let r_k(x_i,x_j) = max{ϵ_k(x_i),ϵ_k(x_j)}.
Then, if x_i and x_j are connected by an edge in the k-NN graph, the corresponding entry in a weighted adjacency matrix is given by
[𝐖_n,k]_i,j = η ( |x_i - x_j|/r_k(x_i,x_j) )
where η satisfies the same assumptions as in <Ref>. Note that if η(t) = χ_[0,1](t), then we obtain the standard unweighted k-NN graph. The k-NN graph Laplacian is then given by
𝐋_n,k=c_η/n(nc_d/k)^(1+2/d)(𝐃_n,k-𝐖_n,k),
where c_η is defined as in <Ref>, c_d is the volume of the d-dimensional Euclidean unit ball, 𝐖_n,k is the adjacency matrix defined in (<ref>) for the k-NN graph, and 𝐃_n,k is the corresponding degree matrix. If η(t) = χ_[0,1](t), then c_η = c_d/(d+2).
Theorems 2.5 and 2.9 of <cit.> show that, for example,
if k is chosen as k ∼log(n)^d/(d+4) n^4/(d+4), then, under mild assumptions, we may choose
α_n = C_ℳ ( log(n)/n )^1/(d+4), β_n = C_ℳ ( log(n)/n )^1/(d+4)
with probability at least 1 - 𝒪(n^-9). Corollary <ref> stated in Section <ref> applies these estimates to establish the convergence of MFCNs for k-NN graphs. To the best of our knowledge, this is the first result to establish a quantitative rate of convergence for MNNs in this setting.
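A sketch of the symmetric k-NN construction with η = χ_[0,1] (so that c_η = c_d/(d+2)) is given below; the helper is purely illustrative, and it uses scikit-learn's kneighbors_graph for the neighbor search.

import numpy as np
from scipy.special import gamma
from sklearn.neighbors import kneighbors_graph

def knn_laplacian(X, k, d):
    # Symmetric k-NN graph Laplacian with eta the indicator of [0,1], so c_eta = c_d / (d + 2).
    n = X.shape[0]
    c_d = np.pi ** (d / 2.0) / gamma(d / 2.0 + 1.0)       # volume of the d-dimensional unit ball
    c_eta = c_d / (d + 2.0)
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity").toarray()
    A = np.maximum(A, A.T)                                 # edge if either point is a k-NN of the other
    D = np.diag(A.sum(axis=1))
    return (c_eta / n) * (n * c_d / k) ** (1.0 + 2.0 / d) * (D - A)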
Comparing the examples above, we see that the rates of convergence are faster for dense graphs. Therefore, they may be preferable when n is only moderately large, but one still desires a good approximation of the continuum. However, for very large n, dense graphs become expensive to store in memory. Therefore, one might instead prefer to utilize either ϵ- or k-NN graphs. We also note that the theorems discussed above do not explicitly guarantee that P_nϕ_i≈ϕ_i^n. Instead, they show that P_nϕ_i≈±ϕ_i^n. However, as discussed earlier our spectral filters do not depend on the choice of orthonormal basis. Therefore, we may ignore this issue when applying Theorem <ref>.
§ MANIFOLD FILTER-COMBINE NETWORKS
In this section, we introduce a novel framework for thinking about manifold neural networks. We will refer to the networks we consider as Manifold Filter-Combine Networks paralleling the aggregate-combine framework commonly used in the graph setting (see, e.g., <cit.>). Here, we will use the term filter, rather than aggregate because our filters may be arbitrary linear operators on 𝐋^2(ℳ) (which in most examples will be defined in terms of some notion of convolution) and are not required to be localized averaging operations. Much of our analysis (except for Theorem <ref>) focuses on the case that the filtering step is implemented in the spectral domain. In this case, the class of all MFCN coincides with the class of MNNs considered in previous work such as <cit.>. However, even in the spectral case, we find that the filter-combine paradigm is a useful framework for thinking about MNNs since it naturally leads one to many interesting subclasses of networks and also allows us to obtain convergence rates that do not directly depend on the width of the network.
We will assume that our input data is a row-vector[We define the output of F to be ℝ^1× C in order to highlight the parallels with the data matrices commonly considered in the GNN literature where rows correspond to vertices and columns correspond to features.] valued function F∈𝐋^2(ℳ,ℝ^1× C), F=(f_1,…,f_C), where each f_i∈𝐋^2(ℳ).
Each hidden layer of the network will consist of the following five steps:
(i) filtering each input channel f_k by a family of linear operators W_j, 1≤ j≤ J, (ii) For each fixed j, we combine the filtered feature functions f̃_j,k=(W_jf_k) into new feature functions g_j,k where each g_j,k is a linear combination of the f̃_j,k, (iii) For each fixed k, we perform a cross-channel convolution that maps { g_j,k}_j=1^J to {g̃_j,k}_j=1^J' where each g̃_j,k is a linear combination of the g_j,k, (iv) apply some non-linear, nonexpansive pointwise activation function σ to each of the g̃_j,k, to obtain h_j,k=σ∘g̃_j,k, (v) reshape the collection of functions {h_i,j}_1≤ i ≤C̃,1≤ j≤ J' into {f'_i}_i=1^C', where C'=C̃J'.
In many applications, it may be sufficient to use a common filter bank {W_j}_1≤ j≤ J for all input channels. However, in other settings, it may be useful to give the network additional flexibility to learn different filters along different input signals. Therefore, for the sake of generality, we actually define the filtering step by f̃_j,k=(W_j,kf_k), where for each fixed k, {W_j,k}_1≤ j ≤ J is a collection of linear operators (i.e., filters) to be applied to the input channel f_k.
Explicitly, we define our layerwise update rule in the following manner. Let F^(0)=F, C_0=C and given F^(ℓ)=(f_1^(ℓ),…,f_C_ℓ^(ℓ)), we define F^(ℓ+1)=(f_1^(ℓ+1),…,f_C_ℓ+1^(ℓ+1)) via:
Filtering: f̃^(ℓ)_j,k=W^(ℓ)_j,kf^(ℓ)_k, 1≤ j ≤ J_ℓ, 1≤ k≤ C_ℓ
Combine:
g_j,k^(ℓ)=∑_i=1^C_ℓf̃^(ℓ)_j,iθ^(ℓ,j)_i,k, 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ
Cross-Channel Convolution: g̃_j,k= ∑_i=1^J_ℓα^(ℓ,k)_j,ig_i,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ
Activation:
h_j,k^(ℓ)=σ^(ℓ)∘g̃_j,k^(ℓ), 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ
Reshaping:
f^(ℓ+1)_(j-1)C_ℓ+k = h^(ℓ)_j,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ,
where C_ℓ+1=J'_ℓ C_ℓ', and the reshaping operator allows for multiple layers to be stacked upon each other.
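Anticipating the point-cloud discretization of Section <ref>, where each f_k becomes a vector in ℝ^n and each filter becomes an n× n matrix, one layer of the scheme above can be sketched in a few lines of numpy. The function name, the 0-based indexing, and the particular reshaping order below are our own illustrative conventions; the filter, combine, and cross-channel matrices are arbitrary placeholders.

import numpy as np

def mfcn_layer(X, W, Theta, Alpha, sigma=np.tanh):
    # One filter-combine layer acting on an (n, C) data matrix X.
    #   W[j][k]  : (n, n) filter matrix applied to channel k, for filter index j = 0..J-1
    #   Theta[j] : (C, Cp) combine matrix used with filter index j
    #   Alpha[k] : (Jp, J) cross-channel convolution used for combined channel k
    n, C = X.shape
    J, Cp, Jp = len(W), Theta[0].shape[1], Alpha[0].shape[0]
    # Filtering: x_tilde[j, k] = W[j][k] x_k
    x_tilde = np.array([[W[j][k] @ X[:, k] for k in range(C)] for j in range(J)])
    # Combine: y[j, k] = sum_i Theta[j][i, k] * x_tilde[j, i]
    y = np.array([[sum(Theta[j][i, k] * x_tilde[j, i] for i in range(C))
                   for k in range(Cp)] for j in range(J)])
    # Cross-channel convolution: y_tilde[jp, k] = sum_i Alpha[k][jp, i] * y[i, k]
    y_tilde = np.array([[sum(Alpha[k][jp, i] * y[i, k] for i in range(J))
                         for k in range(Cp)] for jp in range(Jp)])
    # Activation and reshaping into the (n, Jp * Cp) input of the next layer.
    Z = sigma(y_tilde)                   # shape (Jp, Cp, n)
    return Z.reshape(Jp * Cp, n).T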
Importantly, we note one may effectively omit the combine step by setting the matrix Θ^(ℓ,j)(θ_i,k^(ℓ,j))_1≤ i,k≤ C_ℓ equal to the identity matrix for each ℓ and j.
Similarly, one may omit the cross-channel convolutions by setting the matrices (α_j,i^(ℓ,k))_1≤ i,j≤ J_ℓ to the identity.
Additionally, we note that since we allow for the possibility of using different filters along each channel, it is, in general, possible to write the same network as an MFCN in more than one way.
For instance, if one fixes the cross channel convolutions equal to the identity, uses a shared filter bank {W^(ℓ)_j}_1≤ j ≤ J (independent of k) and chooses the combine step to be independent of j (i.e. θ_i,k^(ℓ,j)=θ_i,k^(ℓ)) then we have
f^(ℓ+1)_(j-1)C_ℓ+k = σ^(ℓ)(∑_i=1^C_ℓW^(ℓ)_jθ^(ℓ)_i,kf_i),
which may also be obtained by using filters of the form
W^(ℓ)_(j-1)C_ℓ+k,i=W_jθ^(ℓ)_i,k and using a combine step with θ̃_i,k^(ℓ,j)=1.
Therefore, the set of networks that may be obtained by setting θ_i,k^(ℓ,j)=1 is just as large as the set of all MFCN. A similar conclusion holds for the cross-channel convolutions. Therefore, in the case where all filters are implemented in the spectral domain, the class of MFCNs is actually the same as the class of MNNs considered in previous work such as <cit.> (see Example <ref> below).
However, as alluded to earlier, we find that thinking of the filtering, combination, and cross-channel convolutions steps separately is a useful framework for a couple of reasons. First, it facilitates our mathematical analysis of the convergence rate obtained in Corollary <ref> and in particular allows us to produce rates that depend only linearly on the depth of the network and do not directly depend on the network's width. Second, it highlights a variety of natural subclasses of networks that may be useful for various data sets or tasks of interest. For instance, each piece of the architecture can either be designed in advance or learned from data. Moreover, one may choose to use a common filter bank W_j, 1≤ j≤ J for all input functions and in all layers or one may choose to use different filters in each layer and/or for each signal. Below we will consider several examples of such classes, but first, we remark that our analysis does not depend on the order in which the steps are performed. Therefore, the theoretical guarantees obtained in Theorem <ref> and Corollary <ref> also apply, for example, to networks in which the cross-channel convolutions occur after the activation.
Additionally, we note that one may make different choices in each layer. For example, one may use a hand-crafted filter bank in the first several layers and then a learnable filter bank in the later layers. Similarly, the activation functions may vary from one layer to the next. However, we will often depress the dependence of the activation function on the layer and simply write σ in place of σ^(ℓ).
[Different Filters Along Each Channel]
If we set the cross-channel convolution equal to the identity, set C_ℓ'=1 and set θ_i,k^(ℓ,j)=1 then we obtain the layerwise update rule
f^(ℓ+1)_j=σ(∑_k=1^CW^(ℓ)_j,kf_k).
If each of the W_j,k^(ℓ)=w^(ℓ)_j,k(ℒ) is a spectral filter (as defined in Section <ref>), we then obtain the layerwise update rule
f^(ℓ+1)_j=σ(∑_k=1^Cw^(ℓ)_j,k(ℒ)f_k).
which was introduced in <cit.> and has been subsequently studied in <cit.>. Notably, in this example the reshaping operator is the identity (since C'_ℓ=1)) and the filters W_j,k^(ℓ) depend on both the layer ℓ and the input channel k.
As mentioned above (see the discussion surrounding (<ref>)), this class of networks is the most general and actually includes all MFCNs. However, considering, e.g., the filter and combine steps separately helps facilitate our analysis. For instance, our rate of convergence obtained in Theorem <ref> depends on max_j,k(|∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|), but unlike the results obtained in previous work does not directly depend on the width of the network.
In particular, if we set θ_i,k^(ℓ,j)=1/C_ℓ, then we have max_j,k(|∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|)=1.
[Shared Filter Banks Along Each Channel]
In order to reduce the number of trainable parameters, it may be useful to utilize a (learned) filter bank which is shared across all input channels and a combination matrix which is shared across all filters. In this case, one obtains a layerwise update rule of the form (<ref>). Such networks may loosely be thought of as a low-rank subset of the more general networks discussed in Example <ref>. (In this setting, since the filter banks are learned, there is still no need for cross-channel convolutions.)
Due to the irregularity of the data geometry, many popular GNNs such as the GCN of Kipf and Welling <cit.> use predesigned aggregations and incorporate learning through the combine steps. The next example discusses the analog of such networks on manifolds.
[MCNs]
Set the cross-channel convolutions equal to the identity and let J=J'=1. Let A be a fixed operator which should be thought of as either a low-pass filter or a localized averaging operator, and set W^(ℓ)_i,1=A for all i. Let the matrix Θ^(ℓ) = (θ^(ℓ,1)_i,k)_1≤ i≤ C_ℓ,1≤ k ≤ C'_ℓ be a learnable weight matrix. Then our layerwise update rule becomes f_k^(ℓ+1)=σ(∑_i=1^C_ℓAf_iθ_i,k^(ℓ,1)), which may be written compactly as
F^(ℓ+1)=σ(AF^(ℓ)Θ^(ℓ)).
Therefore, we obtain a network similar to the GCN of Kipf and Welling which we refer to as the manifold convolutional network (MCN). Notably, A can be designed in a variety of ways, but one possible choice is to define it in the spectral domain where w is a non-increasing function such as an idealized low-pass filter w(λ)=1_λ≤ a or setting w(λ)=e^-tλ which corresponds to convolution against the heat kernel.
Additionally, one could consider the filter bank consisting of powers of A, i.e. W^(ℓ)_j=A^j, 1≤ j ≤ J, use a different combine matrix in each channel, and employ a simple cross-channel convolution by setting α_j,i^(ℓ,k)=1. In this case, one obtains a layerwise update rule of the form F^(ℓ+1)=σ(∑_j=1^JA^jF^(ℓ)Θ^(ℓ,j)), which can be thought of as the manifold analog of the higher-order GCNs considered in work such as <cit.>.
Similar to the above example, one could also consider the manifold analogs of other popular spectral GNNs such as ChebNet<cit.> or CayleyNet<cit.>. Our framework also includes the manifold scattering transforms.
[Hand-Crafted Scattering Networks]
Let {W_j}_j=1^J be a predesigned collection of filters, which are thought of as wavelets and do not depend on the layer or the input channel. Set the combine and cross-channel convolutions
equal to the identity. One then obtains an entirely predesigned, multilayered network known as the manifold scattering transform. Such networks were considered in <cit.> in order to analyze the stability of and invariance properties of deep learning architectures defined on manifolds, building off of analogous work for Euclidean data <cit.> and graphs <cit.>.
[Learnable Scattering Networks]
For both Euclidean data and graphs, there have been a variety of papers that have introduced learning into the scattering framework.
In the Euclidean setting, <cit.> created a network that acts as a hybrid of the scattering transform and a CNN using predesigned, wavelet filter in some layers and learnable filters in others. Subsequent work by <cit.> introduced learning in a different way, incorporating cross-channel convolutions into an otherwise predesigned network. One may construct an analogous MFCN that corresponds to utilizing a predesigned filter bank {W_j}_j=1^J which is shared across all channels, setting the combine step equal to the identity, and letting α_j,i^(ℓ,k) be learnable. (Traditionally, scattering networks have used |·| as the activation function, but one could readily use other choices instead.)
In the graph setting, <cit.> incorporated learning into the scattering framework by utilizing predesigned wavelet filters, but learnable combine matrices (along with a few other features to boost performance). In a different approach, <cit.> sought to relax the graph scattering transform by replacing dyadic scales 2^j with an increasing sequence of scales t_j which are learned from data via a selector matrix. To obtain an analogous MFCN, we set W_j=e^-jℒ for 0≤ j ≤ J, which diffuses the input signal over the manifold at different time-scales, corresponding to the diffusion module utilized in <cit.>. We then set the combination step equal to the identity and learn relationships between the diffusion scales via cross-channel convolutions (where the cross-channel convolutions utilized in <cit.> have a certain structure that encourages the network to behave in a wavelet-like manner). Additionally, as has previously been noted in <cit.>, these two forms of learnable geometric scattering are compatible and one could readily utilize learnable combine steps while also using cross-channel convolutions to learn relationships between diffusion scales.
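A minimal sketch of this diffusion module, in which the heat-kernel filters W_j=e^-jℒ for j=0,…,J are applied in the spectral domain and the resulting scales are then mixed by a learnable matrix, might read as follows; it assumes that eigenpairs (lam, phi) of the graph Laplacian have already been computed, e.g. as in the spectral-filter sketch above, and the function names are ours.

import numpy as np

def diffusion_filter_bank(lam, phi, x, J):
    # Apply the heat-kernel filters exp(-j * L) for j = 0, ..., J to a signal x,
    # using precomputed eigenpairs (lam, phi) of the graph Laplacian.
    coeffs = phi.T @ x
    return np.stack([phi @ (np.exp(-j * lam) * coeffs) for j in range(J + 1)])   # shape (J+1, n)

def mix_scales(diffusions, alpha):
    # Learnable cross-channel convolution across diffusion scales; alpha has shape (J_out, J+1).
    return alpha @ diffusions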
Lastly, we also note that our framework includes simple multilayer perceptrons.
[Multilayer Perceptron]
If one sets J_ℓ=1 and sets both W_1,k^(ℓ) and the cross-channel convolution to be the identity operator then one obtains a simple dense layer that does not utilize the geometry of the manifold. In some sense, this is contrary to our goal of developing networks that utilize the manifold structure of the data. However, including some simple dense layers might nevertheless be useful for, for example, reducing the number of channels in the network.
§.§ Implementation from point clouds
As alluded to earlier, in many applications one does not have global knowledge of the manifold ℳ and merely has access to n data points {x_j}_j=1^n and evaluations of F at those data points. This leads us to recall the normalized evaluation operator (P_nf)(j)=1/√(n)f(x_j) and to approximate F by an n× C data matrix 𝐗=(𝐱_1,…,𝐱_C), where 𝐱_k=P_nf_k. One may then implement an approximation of the network via the following discrete update rules.
Filtering: 𝐱̃^(ℓ)_j,k=𝐖^(ℓ)_j,k𝐱^(ℓ)_k, 1≤ j ≤ J_ℓ, 1≤ k≤ C_ℓ
Combine: 𝐲_j,k^(ℓ)=∑_i=1^C_ℓ𝐱̃^(ℓ)_j,iθ^(ℓ,j)_i,k, 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ
Cross-Channel Convolution: 𝐲̃^(ℓ)_j,k= ∑_i=1^J_ℓα^(ℓ,k)_j,i𝐲_i,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ
Activation: 𝐳_j,k^(ℓ)=σ∘𝐲̃_j,k^(ℓ), 1≤ j≤ J_ℓ', 1≤ k ≤ C'_ℓ
Reshaping: 𝐱^(ℓ+1)_(j-1)C'_ℓ+k = 𝐳^(ℓ)_j,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ
where 𝐖_j,k^(ℓ) is a matrix which acts as a discrete approximation of W_j,k^(ℓ).
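To make the five update rules concrete, here is a minimal NumPy sketch of a single discrete layer (ours, not part of the paper); the array shapes, the choice of ReLU as σ, and the simplification that the filters are shared across input channels are our own assumptions.

import numpy as np

def mfcn_layer(X, W_list, Theta, Alpha):
    """A minimal single-layer sketch of the discrete update rules above.

    X      : (n, C) data matrix, one column per input channel.
    W_list : list of J filter matrices, each (n, n) (here shared across channels).
    Theta  : (J, C, C_out) combine weights theta^{(l,j)}_{i,k}.
    Alpha  : (C_out, J_out, J) cross-channel weights alpha^{(l,k)}_{j,i}.
    Returns an (n, J_out * C_out) matrix holding the reshaped output channels.
    """
    n, C = X.shape
    J, _, C_out = Theta.shape
    J_out = Alpha.shape[1]

    X_filt = np.stack([W @ X for W in W_list])    # Filtering:      (J, n, C)
    Y = np.einsum('jni,jik->jnk', X_filt, Theta)  # Combine:        (J, n, C_out)
    Y_cc = np.einsum('kji,ink->jnk', Alpha, Y)    # Cross-channel:  (J_out, n, C_out)
    Z = np.maximum(Y_cc, 0.0)                     # Activation (ReLU as sigma)
    return Z.transpose(1, 0, 2).reshape(n, J_out * C_out)  # Reshaping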
The following theorem shows that the discrete implementation will converge to its continuum counterpart in the sense that P_n F^(ℓ)≈𝐗^(ℓ) if the matrices 𝐖_j,k^(ℓ) are designed so that 𝐖_j,k^(ℓ)P_n f_k^(ℓ)≈ P_n W_j,kf_k^(ℓ). For a proof, please see Appendix <ref>.
Let f ∈𝒞(ℳ), and suppose that for all ℓ, there exists ϵ_ℓ,n>0 such that we have
P_nW_j,k^(ℓ)f_k^(ℓ)-𝐖^(ℓ)_j,k𝐱^(ℓ)_k_2 ≤𝐱^(ℓ)_k-P_nf_k^ℓ_2+ ϵ_ℓ,n
for all 1≤ k ≤ C_ℓ.
Let A_1^(ℓ)=max_j,k(∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|), A_2^(ℓ)=max_j,k(∑_i=1^J_ℓ |α_j,i^(ℓ,k)|) and assume that σ is non-expansive, i.e. |σ(x)-σ(y)|≤ |x-y|.
Then,
𝐱_k^ℓ-P_nf_k^ℓ_2≤∑_i=0^ℓ-1∏_j=i^ℓ-1 A_1^(j) A_2^(j)ϵ_i,n.
Notably, Theorem <ref> does not assume the filters are constructed in the spectral domain nor does it assume they have any particular form. It is a general result that shows that if individual filters converge, then so does the multilayer network. Moreover, if the weights α_j,i^(ℓ,k) and θ_i,k^(ℓ,j) are normalized so that A_1^(j)=A_2^(j)=1, then the rate of the convergence is linear in the depth of the network. This is in contrast to previous results in <cit.> whose rate of convergence featured an explicit exponential dependence on the depth of the network. (A similar exponential dependence was also encountered in <cit.> where the limiting object is a graphon rather than a manifold.)
Combining Theorem <ref> with Theorem <ref> immediately leads to the following corollary which gives a quantitative rate of convergence for Manifold Filter-Combine Networks constructed utilizing spectral filters when either the filter or the input signals are bandlimited. Notably, if one proves theorems analogous to Theorem <ref> for other classes of filters (constructed either by spectral or not spectral methods)
such as the α-FDT filters considered in <cit.> or the closely related γ-FDT filters considered in <cit.>, then one may immediately obtain similar corollaries.[Such results were obtained for α-FDT filters with specific graph constructions in <cit.>.]
Assume that each W_j,k^(ℓ) is a spectral filter of the form W_j,k^(ℓ)=w_j,k^(ℓ)(ℒ) with w_j,k^(ℓ)_𝐋^∞([0,∞))≤ 1, and the matrices 𝐖_j,k are given by 𝐖_j,k^(ℓ)=w_j,k^(ℓ)(𝐋_n). As in Theorem <ref>,
let A_1^(ℓ)=max_j,k(∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|), A_2^(ℓ)=max_j,k(∑_i=1^J_ℓ |α_j,i^(ℓ,k)|) and assume that σ is non-expansive, i.e. |σ(x)-σ(y)|≤ |x-y|.
Let A^(ℓ)_maxLip=max_j,k,A_Lip(w^(ℓ)_j,k).
Assume that there exist sequences of real numbers {α_n}_n=1^∞, {β_n}_n=1^∞, {γ_n}_n=1^∞, with lim_n→∞α_n=lim_n→∞β_n=lim_n→∞γ_n=0, such that
|λ_i-λ^n_i|≤α_n, P_nϕ_i-ϕ_i^n_2≤β_n,
|⟨ f, g ⟩_2 - ⟨ f,g⟩_𝐋^2(ℳ)| ≤γ_n^2 f_𝐋^∞(ℳ) g_𝐋^∞(ℳ).
Assume n is large enough such that (<ref>) holds and α_n,β_n,γ_nκ^1/2≤ 1. Then, the error in each channel of the ℓ-th layer satisfies
𝐱_k^ℓ-P_nf_k^ℓ_2≤∑_i=0^ℓ-1∏_j=i^ℓ-1 A_1^(j) A_2^(j)
C_ℳκmax_k'((A^(i)_maxLipα_n+β_n)f^(i)_k'_𝐋^2(ℳ)+γ_nf^(i)_k'_𝐋^∞(ℳ)).
In particular, if we assume that we have A_1^(j), A_2^(j), A^(i)_maxLip≤ 1, for all i and j we have
𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ((α_n+β_n)max_k',if^(i)_k'_𝐋^2(ℳ)+γ_nmax_k',if^(i)_k'_𝐋^∞(ℳ)).
In <Ref>, we provided several examples of α_n, β_n, and γ_n for three graph constructions. Using <Ref>, we immediately obtain the following three corollaries giving rates of convergence for each of these constructions.
Assume the same conditions on W_j,k^(ℓ), 𝐖_j,k, A_1^(ℓ), A_2^(ℓ), A^(ℓ)_maxLip, and σ as in <Ref>, and assume A_1^(j), A_2^(j), A^(i)_maxLip≤ 1. Assume an MFCN is implemented with a data-driven graph 𝐆_n constructed as in <Ref> with a Gaussian kernel. Then with probability 1 - 𝒪(1/n^9), for large enough n, the error in each channel of the ℓ-th layer of the MFCN satisfies
𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ(√(log(n))/n^2/(d+6)max_k',if^(i)_k'_𝐋^2(ℳ)+ (18log(n)/n)^1/4max_k',if^(i)_k'_𝐋^∞(ℳ)).
Assume the same conditions on W_j,k^(ℓ), 𝐖_j,k, A_1^(ℓ), A_2^(ℓ), A^(ℓ)_maxLip, and σ as in <Ref>, and assume A_1^(j), A_2^(j), A^(i)_maxLip≤ 1. Assume an MFCN is implemented with a data-driven ϵ-graph 𝐆_n constructed as in <Ref>. Then with probability 1 - 𝒪(1/n^9), for large enough n, the error in each channel of the ℓ-th layer of the MFCN satisfies
𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ( ( log(n)/n )^1/d+4max_k',if^(i)_k'_𝐋^2(ℳ)+ (18log(n)/n)^1/4max_k',if^(i)_k'_𝐋^∞(ℳ)).
Assume the same conditions on W_j,k^(ℓ), 𝐖_j,k, A_1^(ℓ), A_2^(ℓ), A^(ℓ)_maxLip, and σ as in <Ref>, and assume A_1^(j), A_2^(j), A^(i)_maxLip≤ 1. Assume an MFCN is implemented with a data-driven k-NN graph 𝐆_n constructed as in <Ref>. Then with probability 1 - 𝒪(1/n^9), for large enough n, the error in each channel of the ℓ-th layer of the MFCN satisfies
𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ( ( log(n)/n )^1/d+4max_k',if^(i)_k'_𝐋^2(ℳ)+ (18log(n)/n)^1/4max_k',if^(i)_k'_𝐋^∞(ℳ)).
§ NUMERICAL EXPERIMENTS
In this section, we compare the performance of three different examples of manifold filter-combine networks on the ModelNet dataset<cit.>. In particular, we focus on the MNN with different learnable filters in each channel (DLF), the MCN, and the manifold scattering transform (Scattering) discussed in Examples <ref>, <ref>, and <ref>. The code for reproducing our experiments is available at <https://github.com/KrishnaswamyLab/mfcn>.
§.§ Data
We used the ModelNet10 dataset which consists of three-dimensional point clouds sampled from various objects belonging to the classes bathtub, bed, chair, desk, dresser, monitor, nightstand, sofa, table, and toilet. Examples of point clouds in the dataset are given in Figure <ref>. For each point cloud, we preprocess the data by scaling the point coordinates (z-scaling), then randomly sample 100 points from the whole point cloud. We then create a graph via the constructions discussed in Examples <ref>, <ref>, and <ref>, i.e., Gaussian kernels (dense), ϵ-graphs, and unweighted k-NN graphs. We use the x, y, and z coordinates of the nodes as input signals. The ModelNet10 dataset comes with a predefined training set (3901 samples) and test set (799 samples). In our experiments, we randomly select 20% of the training set to use for validation. We then consider two regimes. In the full data regime, we use the entire remaining 80% for training. In the subset data regime, we randomly select 1000 samples from that 80% to use for training. We repeat this procedure five times and report our accuracies in the format mean ± std.
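For illustration, a possible preprocessing sketch is given below (ours; the exact normalizations, kernel bandwidths and sampling used in the released code may differ, and the kernel normalizations required by the convergence guarantees of Examples <ref>, <ref> and <ref> are omitted).

import numpy as np

def build_graph(points, mode="dense", eps=0.5, k=10, sigma=0.2):
    """Build an (unnormalized) graph Laplacian from a point cloud, roughly following
    the three constructions referred to above: dense Gaussian kernel, epsilon-graph, k-NN."""
    pts = (points - points.mean(axis=0)) / points.std(axis=0)   # z-scaling
    idx = np.random.choice(len(pts), size=100, replace=False)   # subsample 100 points (assumes >= 100)
    pts = pts[idx]
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    if mode == "dense":
        W = np.exp(-D**2 / sigma**2)
    elif mode == "eps":
        W = (D <= eps).astype(float)
    else:  # unweighted k-NN graph, symmetrized
        nn = np.argsort(D, axis=1)[:, 1:k + 1]
        W = np.zeros_like(D)
        for i, js in enumerate(nn):
            W[i, js] = 1.0
        W = np.maximum(W, W.T)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W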
§.§ Models
In our experiments, we consider three manifold neural network architectures as described below. For each model, we used two layers of manifold networks, followed by a multi-layer perceptron classifier consisting of a single hidden layer. For further details of our hyperparameter settings and training procedures please see Table <ref> in Appendix <ref>.
Scattering
We follow the experimental procedure utilized in <cit.> and compute zeroth-, first-, and second-order scattering moments. More specifically,
for 0≤ j≤ J and 1≤ q≤ Q, we define first-order, q-th scattering moments by
Sf[j,q]≔∫_ℳ|W_jf(x)|^qdx=W_jf_𝐋^q(ℳ)^q,
where W_j are spectral wavelet filters corresponding to the functions w_j(λ)=e^-2^j-1λ-e^-2^jλ for 1≤ j≤ J and w_0(λ)=1-e^-λ.
We define second-order moments, for 0≤ j<j'≤ J, by
Sf[j,j',q]≔∫_ℳ|W_j'|W_jf(x)||^qdx=W_j'|W_jf|_𝐋^q(ℳ)^q.
Zeroth-order moments are defined simply by
Sf[q]≔∫_ℳ|f(x)|^qdx=f_𝐋^q(ℳ)^q.
In our experiments, we set J=8, Q=4 and use the first 20 eigenvalues and eigenvectors of the graph Laplacian to implement the spectral wavelet filters.
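A minimal sketch of this computation for a single input channel is given below (ours; the 1/n normalization of the discretized integrals is omitted and the eigendecomposition-based implementation is only one possible choice).

import numpy as np

def scattering_moments(L, f, J=8, Q=4, n_eig=20):
    """Zeroth-, first- and second-order scattering moments of a signal f,
    using spectral wavelets built from the first n_eig Laplacian eigenpairs."""
    lam, phi = np.linalg.eigh(L)
    lam, phi = lam[:n_eig], phi[:, :n_eig]

    def wavelet(j):  # w_0 = 1 - e^{-lam}; w_j = e^{-2^{j-1} lam} - e^{-2^j lam}
        w = 1 - np.exp(-lam) if j == 0 else np.exp(-2**(j - 1) * lam) - np.exp(-2**j * lam)
        return phi @ np.diag(w) @ phi.T

    Ws = [wavelet(j) for j in range(J + 1)]
    moments = [np.sum(np.abs(f) ** q) for q in range(1, Q + 1)]        # zeroth order
    for j in range(J + 1):
        u = np.abs(Ws[j] @ f)
        moments += [np.sum(u ** q) for q in range(1, Q + 1)]           # first order
        for jp in range(j + 1, J + 1):
            v = np.abs(Ws[jp] @ u)
            moments += [np.sum(v ** q) for q in range(1, Q + 1)]       # second order
    return np.array(moments)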
DLF We used two layers of DLF, where each layer consists of J_ℓ spectral filters (J_1=16, J_2=32). After applying the J_ℓ filters per input dimensions, we combined the channels by summation (i.e., θ^(ℓ,j)_i,k = 1). Similarly, as for scattering, we used the first 20 eigenvalues and eigenvectors of the Laplacian matrix to compute our filters. We used a ReLU activation and the identity map for the cross-channel convolution. We used average pooling at the last layer to obtain the feature vector to be processed by the classifier.
We considered two parameterizations of the filters w(λ), one denoted DLF-MLP, where we parametrize each w(λ) as a 2-layer MLP, and the other denoted DLF-POLY, in which we parameterize each w(λ) as a degree-four polynomial of e^-λ (which is the parameterization utilized in, e.g., <cit.>).
MCN We used two layers of graph convolutional networks with J_l (J_1=16, J_2=32) hidden dimension applied to the input graph with ReLU activations. As in <cit.>, our low-pass filter was implemented by 𝐀̂=(𝐃+𝐈)^-1/2(𝐀+𝐈)(𝐃+𝐈)^-1/2 which is equivalent to applying the spectral filter w(λ)=1-λ/2 to the normalized graph Laplacian and then utilizing a renormalization trick in order to facilitate the learning process. We used a ReLU activation and the identity map for the cross-channel convolution. We used average pooling at the last layer to obtain the feature vector to be processed by the classifier.
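For concreteness, the renormalized propagation matrix and one such layer can be sketched as follows (ours; the learnable weight matrices, the ModelNet-specific dimensions and the classifier head are omitted).

import numpy as np

def mcn_propagation(A):
    """Renormalized low-pass operator A_hat = (D+I)^{-1/2} (A+I) (D+I)^{-1/2}."""
    d = A.sum(axis=1) + 1.0
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ (A + np.eye(len(A))) @ D_inv_sqrt

def mcn_layer(A_hat, X, Theta):
    """One MCN/GCN-style layer: ReLU(A_hat X Theta)."""
    return np.maximum(A_hat @ X @ Theta, 0.0)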
§.§ Results
We compared the performance of the different models and graph construction based on the classification accuracy on the left-out test set. In Table <ref>, we report the mean and standard deviation of the test accuracy across the five different splits (5-folds) for both the full and subset data regimes.
All of the models consistently perform much better than random chance (which is roughly 10% accuracy since there are ten classes) but are all far from 100% accuracy. In particular, in the full data regime, accuracy levels range from 54% to 75% and from 44% to 70% in the subset data regime.
Overall the two versions of DLF are the best performing methods, particularly on the Dense graphs and the Epsilon Graphs. We note that DLF-MLP outperforms DLF-POLY in four out of six cases, but has the drawback of requiring more parameters. On the k-NN graphs, MCN performs nearly as well as DLF, but is the least accurate method on the dense graph construction. Scattering is overall the lowest performing method. However, its performance is the least affected by the number of samples. For instance, on the dense graph construction, it loses four percentage points of accuracy compared to MCN and DLF which lose ten and nine points. This suggests that the wavelet filters are useful geometric descriptors, but that overly hand-crafted networks lack the flexibility to learn from data.
§ CONCLUSION
We have introduced a new framework for analyzing and implementing manifold neural networks that we call manifold filter-combine networks. This framework naturally allows us to think about many interesting classes of MNNs such as the manifold analogs of GCNs and several relaxed variations of the manifold scattering transform. Additionally, we have provided methods for implementing such networks when one does not have global knowledge of the manifold, but merely has access to n sample points, that converge provably to their continuum limit as n→∞. In order to establish this result, we also prove a theorem establishing sufficient convergence conditions for the individual filters used in the network. This result is not specific to any particular graph construction. Instead, it shows that if the eigenvectors and eigenvalues of the graph Laplacian converge (and additionally that discrete inner products converge to continuum inner products) then spectral filters constructed from the graph Laplacian will converge as well. This allows our results to be applied to a wide variety of graph constructions including those discussed in Examples <ref>, <ref>, and <ref>.
The flexibility of our setup is deliberate. The development of manifold neural networks is in its infancy, even compared to graph neural networks, and there are many questions about which networks will perform best in practice. Should networks use learnable filter banks similar to a CNN or predesigned averaging operations similar to a common aggregate-combine network?
Are cross-channel convolutions a viable way to introduce learning in settings where there are no nontrivial relations between input channels?
In this work, we do not claim to provide an answer to the question “what are the best ways to design a manifold neural network?” which ultimately will need to be answered through thorough experimentation. The purpose of this paper is instead to facilitate this experimentation by providing a useful framework for thinking about MNNs. We also note several other important areas of future work. (i) In Examples <ref>, <ref>, and <ref>, we consider settings where the data points {x_i} lie exactly on the manifold and are sampled i.i.d. uniformly at random. Relaxing these assumptions would greatly increase the applicability of our theory to noisy real-world data. (ii) Most of the data sets used in the MNN literature focus on two-dimensional surfaces. Developing challenging and relevant benchmarks for learning on higher-dimensional manifolds would help facilitate the experimental exploration of various MNN architectures.
§ ACKNOWLEDGEMENT
The authors thank Luana Ruiz for helpful discussion that greatly improved the quality of our exposition.
§ THE PROOF OF THEOREM <REF>
We first note that if either w or f is κ bandlimited, we have
w(𝐋_n)P_nf-P_nw(ℒ)f_2
= ∑_i=1^κ w(λ_i^n)⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n - ∑_i=1^κ w(λ_i)⟨ f,ϕ_i⟩_ℳP_nϕ_i_2
≤ ∑_i=1^κ (w(λ_i^n)-w(λ_i))⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n_2+∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n- ⟨ f,ϕ_i⟩_ℳP_nϕ_i)_2.
To bound the first term from (<ref>), we note that by the triangle inequality, the Cauchy-Schwarz inequality, and the assumption that n is large enough so that α_n≤ 1, we have
∑_i=1^κ (w(λ_i^n) - w(λ_i))⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n_2
≤ max_1≤ i ≤κ |w(λ_i^n)- w(λ_i)| ∑_i=1^κP_n f_2 ϕ_i^n^2_2
≤ A_Lip(w)α_n ∑_i=1^κP_n f_2 ϕ_i^n^2_2
≤ A_Lip(w)κα_n P_n f_2
≤ A_Lip(w)κ(α_n f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)),
where we use the fact that ϕ_i^n_2^2=1 and that
P_nf_2≤(f_𝐋^2(ℳ)^2 + γ_n^2f_𝐋^∞(ℳ)^2)^1/2≤f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ).
Now, turning our attention to the second term from (<ref>), we have
∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n- ⟨ f,ϕ_i⟩_𝐋^2(ℳ)P_nϕ_i)_2
≤ ∑_i=1^κ w(λ_i)⟨ P_nf,ϕ_i^n⟩_2(ϕ_i^n-P_nϕ_i)_2
+∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_𝐋^2(ℳ))P_nϕ_i_2.
By the assumption (<ref>), we have ϕ_i^n-P_nϕ_i_2≤β_n.
Therefore, since w non-amplifying, we see
∑_i=1^κ w(λ_i)⟨ P_nf,ϕ_i^n⟩_2(ϕ_i^n-P_nϕ_i)_2
≤κmax_1≤ i≤κ |⟨ P_nf,ϕ_i^n⟩_2|ϕ_i^n-P_nϕ_i_2
≤κβ_nP_nf_2
≤κβ_n (f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ) ),
where the final inequality follows from (<ref>).
Meanwhile, the second term from (<ref>) can be bounded by
∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_ℳ)P_nϕ_i_2
≤ ∑_i=1^κ |w(λ_i)| |⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_ℳ|P_nϕ_i_2
≤ ∑_i=1^κ |⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_ℳ|P_nϕ_i_2
≤ ∑_i=1^κ |⟨ P_nf,ϕ_i^n⟩_2-⟨ P_nf,P_nϕ_i⟩_2|P_nϕ_i_2+∑_i = 1^κ |⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_ℳ|P_nϕ_i_2.
By the Cauchy-Schwarz inequality, (<ref>), (<ref>), and the assumption that n is large enough so that β_n≤ 1, we have
|⟨ P_nf,ϕ_i^n⟩_2-⟨ P_nf,P_nϕ_i⟩_2| ≤ P_nf_2 ϕ_i^n-P_nϕ_i_2≤β_n(f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))≤(β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)).
And also by (<ref>) we have
|⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_2| ≤γ_n^2f_𝐋^∞(ℳ)ϕ_i_𝐋^∞(ℳ), and P_nϕ_i_2≤ 1+γ_nϕ_i_𝐋^∞(ℳ).
It is known (see, e.g., Appendix L of <cit.> and the references there) that ϕ_i_𝐋^∞(ℳ)≤ C_ℳ i^(d-1)/2d≤ C_ℳi^1/2. Therefore, for all i≤κ the assumption that n is large enough that γ_nκ^1/2≤ 1 implies
|⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_2| ≤ C_ℳγ^2_nκ^1/2f_𝐋^∞(ℳ)≤ C_ℳγ_n, and P_nϕ_i_2≤ 1+γ_n κ^1/2≤ 2.
Therefore, if n is large enough such that γ_nκ^1/2<1, then the second term from (<ref>) can be bounded by
∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2 -⟨ f,ϕ_i⟩_ℳ)P_nϕ_i_2
≤ ∑_i=1^κ |⟨ P_nf,ϕ_i^n⟩_2-⟨ P_nf,P_nϕ_i⟩_2|P_nϕ_i_2 +∑_i=1^κ|⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_2|P_nϕ_i_2
≤ ∑_i=1^κ(β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))P_nϕ_i_2 +∑_i=1^κ C_ℳγ_nf_𝐋^∞(ℳ)P_nϕ_i_2
≤ C_ℳ(κ(β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)) + γ_nκf_𝐋^∞(ℳ))
≤ C_ℳκ( β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)).
Therefore, combining Equations (<ref>) through (<ref>) yields
w(𝐋_n)P_nf-P_nw(ℒ)f_2
≤ ∑_i=1^κ (w(λ_i^n) - w(λ_i))⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n_2+∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n-⟨ f,ϕ_i⟩_ℳP_nϕ_i)_2
≤ A_Lip(w)κ (α_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))+ C_ℳ(κβ_n(f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)) + γ_nκf_𝐋^∞(ℳ))
≤
C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))
thus completing the proof of (<ref>).
To prove (<ref>), we observe that since w_𝐋^∞([0,∞))≤ 1, we have
w(𝐋_n)𝐱-w(𝐋_n)P_nf_2 ≤𝐱-P_nf_2
by the same reasoning as (<ref>).
Therefore, by the triangle inequality, we have
w(𝐋_n)𝐱-P_nw(ℒ)f_2
≤ w(𝐋_n)𝐱-w(𝐋_n)P_nf_2 +
w(𝐋_n)P_nf-P_nw(ℒ)f_2
≤ 𝐱-P_nf_2 +
C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))
as desired.
§ THE PROOF OF THEOREM <REF>
In order to prove Theorem <ref>, we need the following lemma which bounds the error in each step.
The errors induced by the non-filtering steps of our network may be bounded by
𝐲_j,k^(ℓ)-P_ng_j,k^(ℓ)_2
≤max_1≤ i≤ C_ℓ𝐱̃^(ℓ)_j,k-P_nf̃^(ℓ)_j,k_2∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|,
𝐲̃_j,k^(ℓ)-P_ng̃_j,k^(ℓ)_2
≤max_1≤ i≤ J_ℓ𝐲^(ℓ)_j,k-P_n g^(ℓ)_j,k_2∑_i=1^J_ℓ |α_j,i^(ℓ,k)|.
𝐳^(ℓ)_j,k-P_nh^(ℓ)_j,k_2 ≤𝐲̃^(ℓ)_j,k-P_ng̃^(ℓ)_j,k_2
To verify (<ref>), we observe that
𝐲_j,k^(ℓ)-P_ng_j,k^(ℓ)_2
=∑_i=1^C_ℓ𝐱̃^(ℓ)_j,kθ_i,k^(ℓ,j)-P_nf̃^(ℓ)_j,kθ_i,k^(ℓ,j)_2
≤∑_i=1^C_ℓ|θ_i,k^(ℓ,j)|𝐱̃^(ℓ)_j,k-P_nf̃^(ℓ)_j,k_2
≤max_1≤ i≤ C_ℓ𝐱̃^(ℓ)_j,k-P_nf̃^(ℓ)_j,k_2∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|.
The proof of (<ref>) is identical to the proof of (<ref>). For (<ref>), we see that
since σ is non-expansive we have
𝐳^(ℓ)_j,k-P_nh^(ℓ)_j,k^2_2
=∑_i=1^n|
(𝐳^(ℓ)_j,k)(i)-(P_nh^(ℓ)_j,k)(i)|^2
=∑_i=1^n|
(𝐳^(ℓ)_j,k)(i)-h^(ℓ)_j,k(x_i)|^2
=∑_i=1^n|
σ((𝐲̃^(ℓ)_j,k)(i))-σ(g̃^(ℓ)_j,k(x_i))|^2
≤∑_i=1^n|
(𝐲̃^(ℓ)_j,k)(i)-g̃^(ℓ)_j,k(x_i)|^2
=𝐲̃^(ℓ)_j,k-P_ng̃^(ℓ)_j,k^2_2.
It follows from the definition of the reshaping operator that
max_k𝐱_k^(ℓ+1)-P_nf_k^(ℓ+1)_2
= max_j,k𝐳^(ℓ)_j,k-P_nh^(ℓ)_j,k_2.
Therefore, by Lemma <ref> we have
max_k𝐱_k^(ℓ+1)-P_nf_k^(ℓ+1)_2
= max_j,k𝐳^(ℓ)_j,k-P_nh^(ℓ)_j,k_2
≤ max_j,kP_ng̃^(ℓ)_j,k-𝐲̃^(ℓ)_j,k_2
≤ A^(ℓ)_2max_j,kP_n g^(ℓ)_j,k-𝐲^(ℓ)_j,k_2
≤ A^(ℓ)_2A^(ℓ)_1max_j,kP_n f̃^(ℓ)_j,k-𝐱̃^(ℓ)_j,k_2
≤ A^(ℓ)_2A^(ℓ)_1(max_k𝐱_k^(ℓ)-P_nf_k^(ℓ)_2
+ϵ_ℓ,n)
Since 𝐱_k^(0)-P_nf^(0)_k_2=0 for all k,
we may use induction to conclude that
𝐱^(ℓ)_k-P_nf^(ℓ)_k_2≤∑_i=0^ℓ-1∏_j=i^ℓ-1 A_1^(j) A_2^(j)ϵ_i,n.
§ TRAINING AND IMPLEMENTATION DETAILS
We trained all three models by minimizing the cross-entropy loss between predicted probabilities for each of the 10 categories and the ground truth category of each point cloud. We used the Adam optimizer for 200 epochs with a batch size of 32. The learning rate was selected according to validation performance and was chosen among 0.01 and 0.001. For each model, we used two layers of manifold networks, followed by a multi-layer perceptron classifier consisting of a single hidden layer. The hyper-parameters specific to each model and graph construction scheme are given in Table <ref>.
|
http://arxiv.org/abs/2307.04802v1 | 20230710180016 | Out-of-equilibrium dynamics of quantum many-body systems with long-range interactions | [
"Nicolò Defenu",
"Alessio Lerose",
"Silvia Pappalardi"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas",
"cond-mat.stat-mech",
"cond-mat.str-el",
"quant-ph"
] |
|
http://arxiv.org/abs/2307.05811v2 | 20230711213321 | Twin-width of graphs on surfaces | [
"Daniel Kráľ",
"Kristýna Pekárková",
"Kenny Štorgel"
] | math.CO | [
"math.CO",
"cs.DM"
] |
Twin-width of graphs on surfaces
The first two authors have been supported by the MUNI Award in Science and Humanities (MUNI/I/1677/2018) of the Grant Agency of Masaryk University. The third author is supported by the Slovenian Research Agency (research program P1-0383, research projects J1-3002, J1-4008, and a Young Researchers Grant).
Daniel Kráľ, Faculty of Informatics, Masaryk University, Botanická 68A, 602 00 Brno, Czech Republic. E-mails: [email protected] and [email protected]
Kristýna Pekárková (same affiliation as the first author), Kenny Štorgel, Faculty of Information Studies in Novo mesto, Ljubljanska cesta 31a, 8000 Novo mesto, Slovenia. E-mail: [email protected].
Twin-width is a width parameter introduced by Bonnet, Kim, Thomassé and Watrigant [FOCS'20, JACM'22],
which has many structural and algorithmic applications.
We prove that the twin-width of every graph embeddable in a surface of Euler genus g
is 18√(47g)+O(1), which is asymptotically best possible as
it asymptotically differs from the lower bound by a constant multiplicative factor.
Our proof also yields a quadratic time algorithm to find a corresponding contraction sequence.
To prove the upper bound on the twin-width of graphs embeddable in surfaces,
we provide a stronger version of the Product Structure Theorem for graphs of Euler genus g,
which asserts that every such graph is a subgraph of the strong product of a path and a graph admitting a tree-decomposition
in which all bags have size at most eight, except for a single bag of size at most max{8,32g-27}.
§ INTRODUCTION
Twin-width is a graph parameter,
which has recently been introduced by Bonnet, Kim, Thomassé and Watrigant <cit.>.
It has quickly become one of the most intensively studied graph width parameters
due to its many connections to algorithmic and structural questions in both computer science and mathematics.
In particular,
classes of graphs with bounded twin-width (we refer to Section <ref> for the definition of the parameter)
include at the same time well-structured classes of sparse graphs and well-structured classes of dense graphs.
Particular examples are classes of graphs with bounded tree-width,
with bounded rank-width (or equivalently with bounded clique-width), and
classes excluding a fixed graph as a minor.
As the first order model checking is fixed parameter tractable for classes of graphs with bounded twin-width <cit.>,
the notion led to a unified view of various earlier results on fixed parameter tractability of first order model checking of graph properties <cit.>, and
more generally first order model checking properties of other combinatorial structures such as
matrices, permutations and posets <cit.>.
The foundation of the theory concerning twin-width has been laid by Bonnet, Kim, Thomassé and their collaborators
in a series of papers <cit.>,
also see <cit.>.
The amount of literature on twin-width is rapidly growing and
includes exploring algorithmic aspects of twin-width <cit.>,
combinatorial properties <cit.>, and
connections to logic and model theory <cit.>.
While it is known that many important graph classes have bounded twin-width,
good bounds are known only in a small number of specific cases.
One of the examples is the class of graphs of bounded tree-width
where an asymptotically optimal bound, exponential in tree-width, was proven by Jacob and Pilipczuk <cit.>.
Another example is the class of planar graphs.
The first explicit bound of 583 by Bonnet, Kwon and Wood <cit.>
was gradually improved in a series of papers <cit.>
culminating with a bound of 8 obtained by Hliněný and Jedelský <cit.>.
We remark that Lamaison and the first author <cit.> presented a construction of a planar graph with twin-width 7,
bringing the lower and upper bound within the difference of one.
In this paper,
we extend this list by providing an asymptotically optimal upper bound on the twin-width of graphs embeddable in surfaces of higher genera.
Specifically, we prove the following two results (the latter is used to prove the former):
* We show that the twin-width of a graph embeddable in a surface of Euler genus g is at most 18√(47g)+O(1),
which is asymptotically best possible;
our proof also yields a quadratic time algorithm to find a corresponding sequence of vertex contractions.
* We provide a strengthening of the Product Structure Theorem for graphs embeddable in a surface of Euler genus g
by showing that such graphs are subgraphs of a strong product of a path and a graph that almost has a bounded tree-width.
We next present the two results in more detail while also presenting the related existing results, and
discuss algorithmic aspects of our results.
§.§ Twin-width of graphs embeddable in surfaces
Graphs that can be embedded in surfaces of higher genera, such as the projective plane, the torus and the Klein bottle,
form important minor-closed classes of graphs with many applications and connections <cit.>.
While the general theory concerning minor-closed classes of graphs
yields that graphs embeddable in a fixed surface have bounded twin-width,
we are not aware of any bound subexponential in the genus of the surface.
In particular,
the following two upper bounds on the twin-width of graphs embeddable in surfaces
can be derived from existing results on the twin-width.
The general bound on twin-width of d-contractible graphs (graphs embeddable in a surface of Euler genus g
are O(g)-contractible <cit.> in the sense used in <cit.>)
implies that
the twin-width of graphs embeddable in a surface of Euler genus g is at most double exponential in g.
A better bound can be obtained using the Product Structure Theorem for graphs embeddable in surfaces,
which we present as Theorem <ref> below.
The Product Structure Theorem for graphs embeddable in surfaces from <cit.>
together with results on the twin-width of graphs with bounded tree-width <cit.> and
on the twin-width of the strong product of graphs <cit.>
yield that
the twin-width of a graph embeddable in a surface of Euler genus g is at most 2^18g+O(1).
Our main result asserts that the twin-width of every graph that can be embedded in a surface of Euler genus g
is at most 18√(47g)+O(1).
This bound is asymptotically optimal as any graph with √(6g)-O(1) vertices can be embedded in a surface of Euler genus g and
the n-vertex Erdős-Rényi random graph G_n,1/2 has twin-width at least n/2-O(√(nlog n)) <cit.>,
i.e., there exists a graph with twin-width √(3g/2)-o(√(g)) embeddable in a surface of Euler genus g (a short
proof is given for completeness in Section <ref>).
In particular, our upper bound asymptotically differs from the lower bound by a multiplicative factor 6√(282)≈ 100.76.
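For the reader's convenience, the factor can be recovered directly from the two bounds stated above (a side computation of ours, not part of the original argument): 18√(47g)/√(3g/2) = 18√(47)·√(2/3) = 6√(6)·√(47) = 6√(282) ≈ 100.76.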
We remark that
we are aware of several parts of our argument
whose refinement would lead to a decrease of the multiplicative constant in the upper bound (with the resulting
multiplicative constant being around 20).
However, we have decided not to do so due to the technical nature of such refinements and
the absence of additional structural insights gained by the refinements.
§.§ Product Structure Theorem
To prove our main result,
we prove a modification of the Product Structure Theorem that applies to graphs embeddable in surfaces.
The Product Structure Theorem is a recent significant structural result introduced by
Dujmović, Joret, Micek, Morin, Ueckerdt and Wood <cit.>,
which brought new substantial insights into the structure of planar graphs and
led to breakthroughs on several long standing open problems concerning planar graphs, see, e.g. <cit.>.
We also refer to the survey by Dvořák et al. <cit.> on the topic.
The statement of the Product Structure Theorem originally proven by Dujmović et al. <cit.>
reads as follows (we remark that the statement in <cit.> does not include the condition
on planarity of the graph of bounded tree-width, however, an easy inspection of the proof yields this).
Every planar graph is a subgraph of the strong product of a path and a planar graph with tree-width at most 8.
Ueckerdt et al. <cit.> improved the result as follows (in fact, we state a corollary of their main result
to avoid defining the notion of simple tree-width that is not needed in our further presentation).
Every planar graph is a subgraph of the strong product of a path and a planar graph with tree-width at most 6.
Dujmović et al. <cit.> also proved two extensions of the Product Structure Theorem
to graphs embeddable in surfaces.
Every graph embeddable in a surface of Euler genus g>0
is a subgraph of the strong product of a path, the complete graph K_2g and
a planar graph with tree-width at most 9.
Every graph embeddable in a surface of Euler genus g>0
is a subgraph of the strong product of a path, the complete graph K_max{2g,3} and
a planar graph with tree-width at most 4.
A stronger version was proven by Distel at el. <cit.>.
Every graph embeddable in a surface of Euler genus g>0
is a subgraph of the strong product of a path, the complete graph K_max{2g,3} and
a planar graph with tree-width at most 3.
We remark that it is not possible to replace K_2g in the statement of Theorems <ref>, <ref> and <ref>
with a complete graph of sublinear order as long as the bound on the tree-width stays constant
since the layered tree-width of graphs embeddable in a surface of Euler genus g is linear in g <cit.> (the definition
of layered tree-width is given in Section <ref>).
To prove our upper bound on the twin-width of graphs embeddable in surfaces,
we strengthen the statement of the Product Structure Theorem for graphs embeddable in surfaces as follows.
Theorems <ref>, <ref> and <ref> imply that
every graph embeddable in a surface of Euler genus g>0
is a subgraph of the strong product of a path and a graph with tree-width at most 20g-1, max{10g-1,14} and max{8g-1,11}, respectively.
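These values presumably follow from the elementary observation that replacing every bag B of a width-t tree-decomposition of H by B× V(K_m) yields a tree-decomposition of H⊠ K_m of width m(t+1)-1 (a side remark of ours, not taken from <cit.>), so that 2g·(9+1)-1=20g-1, max{2g,3}·(4+1)-1=max{10g-1,14} and max{2g,3}·(3+1)-1=max{8g-1,11}.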
The next theorem, which we prove in Section <ref>, asserts that
it is possible to assume that the tree-width of the graph in the product is almost at most 7
in the sense that all bags except possibly for a single bag have size at most 8.
Every graph embeddable in a surface of Euler genus g>0
is a subgraph of the strong product of a path and a graph H that
has a rooted tree-decomposition such that
* the root bag has size at most max{8,32g-27}, and
* every bag except the root bag has size at most 8.
We remark that,
similarly as in Theorem <ref> it is not possible to replace K_2g with a complete graph of smaller order,
it is necessary to permit at least one of the bags to have a size linear in g in Theorem <ref>
since the layered tree-width of graphs embeddable in a surface of Euler genus g is linear in g <cit.>.
Hence, the statement of Theorem <ref> is the best possible asymptotically.
It is interesting to note that the proof of Theorem <ref> given in <cit.>
implies that every graph embeddable in a surface of Euler genus g>0
is a subgraph of the strong product of a path and a graph that
can be obtained from a planar graph with tree-width at most 3
by replacing one vertex with K_2g and the remaining vertices with K_3 (and replacing each edge of the planar graph
with a complete bipartite graph).
However, the vertex replaced with K_2g can be contained in many bags of the tree-decomposition of the planar graph and
so the proof given in <cit.> does not yield Theorem <ref>.
A similar statement also holds for the proofs of Theorems <ref> and <ref> given in <cit.>.
The main new component in the proof of Theorem <ref> (compared to the proofs given in <cit.>)
is Lemma <ref> given in Section <ref>,
which is crucial so that we are able to restrict the sizes of all but one bag in a tree-decomposition to a constant size.
We also note the following corollary of Theorem <ref> for projective planar graphs.
Every graph embeddable in the projective plane
is a subgraph of the strong product of a path and a graph with tree-width at most 7.
§.§ Algorithmic aspects
We decided to present our results in a purely structural way,
i.e., focus on establishing the bounds without giving an algorithm in parallel.
However, since all the proofs that we present are algorithmic,
we also obtain a quadratic time algorithm (when the genus g>0 is fixed) that
given a graph G embeddable in a surface of genus g,
constructs a sequence of contractions witnessing that the twin-width of G is at most 18√(47g)+O(1),
i.e., the red degree of trigraphs obtained during contractions does not exceed the bound given in Theorem <ref>.
We remark that we measure the time complexity of the algorithm in terms of the number of vertices in G;
note that the number of edges of an n-vertex graph embeddable in a surface of genus g is at most 3n+3g-6.
We now discuss the algorithm in more detail following the steps in the proof of Theorem <ref>.
Since it is possible to find an embedding of a graph in a fixed surface in linear time <cit.>,
we can assume that the input graph G is given together with its embedding in the surface (recall that
for every g≥ 2, there are at most two non-homeomorphic surfaces of Euler genus g).
When the embedding of G in the surface is fixed,
we complete it to a triangulation G' (we permit adding parallel edges if needed).
We next choose an arbitrary BFS spanning tree T of G' and
identify g edges a_1b_1,…,a_gb_g as described in Lemma <ref>, which was proven in <cit.>.
The proof of Lemma <ref> in <cit.>
proceeds by constructing a spanning tree in the dual graph that avoids the edges of T and
choosing the edges contained in neither T nor the spanning tree of the dual graph as
the edges a_1b_1,…,a_gb_g;
this can be implemented in linear time.
When the edges a_1b_1,…,a_gb_g are fixed,
the construction of the walk W and the vertical paths described in Lemma <ref> requires linear time.
We next need to identify the vertical paths described in Lemma <ref> that
split the near-triangulation bounded by W obtained from G' into parts,
each bounded by at most six vertical paths.
This may require processing the near-triangulation repeatedly following the steps in the inductive proof of Lemma <ref>,
however, each step can be implemented in linear time and the number of steps is also at most linear.
Finally, we apply the recursive procedure described in Lemma <ref>
to each of the parts delimited by faces of the 2-connected graph obtained in Lemma <ref>;
again, the number of steps in the recursive procedure is linear and each can be implemented in linear time, and
they directly yield the collection of vertical paths and
the tree-decomposition of G'/𝒫 described in Theorem <ref>.
Since the paths and the tree-decomposition fully determine the order of the contraction of the vertices and
the order can be easily determined in linear time following the proof of Theorem <ref>,
we conclude that there is a quadratic time algorithm that constructs a sequence of contractions such that
the red degree of trigraphs obtained during contractions does not exceed the bound given in Theorem <ref>.
We would like to remark that we have not attempted to optimize the running time of the algorithm,
which would particularly require to refine the recursive steps in the proofs of Lemmas <ref> and <ref>.
§ PRELIMINARIES
In this section, we introduce notation used throughout the paper.
We use [n] to denote the set of the first n positive integers, i.e., {1,…,n}.
All graphs considered in this paper are simple and have no parallel edges unless stated otherwise;
if G is a graph, we use V(G) to denote the vertex set of G.
A triangulation of the plane or a surface of Euler genus g>0
is a graph embedded in such a surface such that every face is a 2-cell, i.e., homeomorphic to a disk, and
bounded by a triangle.
A near-triangulation is a 2-connected graph G embedded in the plane such that
each inner face of G is bounded by a triangle.
We now give a formal definition of twin-width.
A trigraph is a graph with some of its edges being red, and
the red degree of a vertex v is the number of red edges incident with v.
If G is a trigraph and v and v' form a pair of its (not necessarily adjacent) vertices,
then the trigraph obtained from G by contracting the vertices v and v'
is the trigraph obtained from G by removing the vertices v and v' and introducing a new vertex w such that
w is adjacent to every vertex u that is adjacent to at least one of the vertices v and v' in G and
the edge wu is red if u is not adjacent to both v and v' or at least one of the edges vu and v'u is red,
i.e., the edge wu is not red only if G contains both edges vu and v'u and neither of the two edges is red.
The twin-width of a graph G is the smallest integer k such that
there exists a sequence of contractions that reduces the graph G,
i.e., the trigraph with the same vertices and edges as G and no red edges, to a single vertex and
none of the intermediate graphs contains a vertex of red degree more than k.
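To illustrate the contraction operation just defined, the following small Python sketch (ours; the paper contains no code, and the dictionary-of-sets representation is an arbitrary choice) performs a single contraction and makes the red degree of the new vertex easy to read off.

def contract(black, red, v, vp, w="w"):
    """Contract vertices v and vp of a trigraph into a new vertex w.

    black, red : dicts mapping every vertex to the set of its black / red neighbours
                 (each unordered pair carries at most one edge, black or red).
    """
    nb_black = (black[v] & black[vp]) - {v, vp}            # black: common black neighbours
    nb_all = (black[v] | black[vp] | red[v] | red[vp]) - {v, vp}
    nb_red = nb_all - nb_black                             # every other neighbour becomes red
    for adj in (black, red):
        adj.pop(v, None)
        adj.pop(vp, None)
        for u in adj:
            adj[u].discard(v)
            adj[u].discard(vp)
    black[w] = set(nb_black)
    red[w] = set(nb_red)
    for u in nb_black:
        black[u].add(w)
    for u in nb_red:
        red[u].add(w)
    return black, red   # the red degree of w afterwards is len(red[w])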
A rooted tree-decomposition 𝒯 of a graph G is a rooted tree such that
each vertex of 𝒯 is a subset of V(G), which we refer to as a bag, and that
satisfies the following:
* for every vertex v of G, there exists a bag containing v,
* for every vertex v of G, the bags containing v form a connected subgraph (subtree) of 𝒯, and
* for every edge e of G, there exists a bag containing both end vertices of e.
If the choice of the root is not important,
we just speak about a tree-decomposition of a graph G.
The width of a tree-decomposition 𝒯 is the maximum size of a bag of 𝒯 decreased by one, and
the tree-width of a graph G is the minimum width of a tree-decomposition of G.
A k-tree is defined recursively as follows:
the complete graph K_k is a k-tree and
if G is a k-tree,
then any graph obtained from G by introducing a new vertex and making it adjacent to any k vertices of G that
form a complete subgraph in G is also a k-tree.
Note that a graph G is a 1-tree if and only if G is a tree.
More generally, a graph G has tree-width at most k if and only if G is a subgraph of a k-tree, and
if G has at least k vertices, then G is actually a spanning subgraph of a k-tree.
Note that k-trees have a tree-like structure given by their recursive definition,
which also gives a rooted tree-decomposition of G with width k:
the rooted tree-decomposition of K_k consists of a single bag containing all k vertices, and
the rooted tree-decomposition of the graph obtained from a k-tree G by introducing a vertex w
can be obtained from the rooted tree-decomposition 𝒯_G of G by introducing a new bag containing w and its k neighbors and
making this bag adjacent to the bag of 𝒯_G that contains all k neighbors of w (such a bag exists since
the subtrees of a tree have the Helly property).
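The recursive construction just described can be sketched as follows (ours; the representation of the k-tree by its construction order is an assumption made purely for illustration).

def ktree_decomposition(k, additions):
    """Rooted tree-decomposition of a k-tree built as described above.

    k         : size of the initial clique, whose vertices are 0, ..., k-1.
    additions : list of (w, clique) pairs, where clique is a k-subset of the
                already-present vertices forming a complete subgraph.
    Returns (bags, parent) with parent[i] the index of the bag that bag i hangs from.
    """
    bags = [set(range(k))]          # the root bag contains the initial K_k
    parent = {0: None}
    for w, clique in additions:
        clique = set(clique)
        # some existing bag contains all k neighbours of w (guaranteed by the construction)
        host = next(i for i, b in enumerate(bags) if clique <= b)
        bags.append(clique | {w})
        parent[len(bags) - 1] = host
    return bags, parent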
A BFS spanning tree T of a (connected) graph G is a rooted spanning tree such that
the path from the root to any vertex v in T is the shortest path from the root to v in G;
in particular, a BFS spanning tree can be obtained by the breadth-first search (BFS) of a graph.
A layering is a partition of a vertex set of a graph G into sets V_1,…,V_k,
which are called layers, such that every edge of G connects two vertices of the same or adjacent layers,
i.e., layers whose indices differ by one.
If T is a BFS spanning tree of G,
then the partition of the vertex set V(G) into sets based on the distance from the root of T,
i.e., the first set contains the root,
the second set contains all neighbors of the root,
the third set contains all vertices at distance two from the root, etc.,
is a layering.
A BFS spanning forest F is a rooted spanning forest of G,
i.e., a forest consisting of rooted trees, such that
there exists a layering V_1,…,V_k of G compatible with F,
i.e., for every tree of F,
there exists d such that the vertices at distance ℓ from the root are all contained in V_d+ℓ.
Note that if G is a graph and T a BFS spanning tree of G,
then removing the same vertices in G and T, which naturally yields a rooted forest,
results in a graph G' and a BFS spanning forest of G'.
Finally, the layered tree-width of a graph G is the minimum k for which
there exists a tree-decomposition 𝒯 of G and a layering such that
every bag of 𝒯 contains at most k vertices from the same layer.
Consider a graph G and a BFS spanning tree T of G.
A vertical path is a path contained in T with no two vertices from the same layer,
i.e., a subpath of a path from a leaf to the root of T.
The top vertex of a vertical path is its vertex closest to the root and
the bottom vertex is its vertex farthest from the root.
We define vertical paths with respect to a BFS spanning forest analogously.
If 𝒫 is a partition of the vertex set of G into vertical paths,
the graph G/𝒫 is the graph obtained by contracting each of the paths contained in 𝒫 to a single vertex,
i.e., the vertices of G/𝒫 are the vertical paths contained in 𝒫 and
two vertical paths P and P' are adjacent if there is an edge between V(P) and V(P'),
i.e., there is a vertex of P adjacent to a vertex of P'.
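A small sketch of this contraction (ours, for illustration only) is given below; it builds the edge set of G/𝒫 from an edge list of G and a partition 𝒫 into vertical paths.

def quotient_graph(edges, paths):
    """Build G/P: contract each vertical path of the partition P to a single vertex.

    edges : iterable of pairs (u, v) of edges of G.
    paths : list of lists of vertices; together they partition V(G).
    Returns the set of edges of G/P on the vertex set {0, ..., len(paths)-1}.
    """
    part = {v: i for i, path in enumerate(paths) for v in path}
    return {(min(part[u], part[v]), max(part[u], part[v]))
            for u, v in edges if part[u] != part[v]}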
§ PRODUCT STRUCTURE THEOREM FOR GRAPHS ON SURFACES
In this section, we provide the version of the Product Structure Theorem for graphs on surfaces,
which we need to prove our upper bound on the twin-width of such graphs.
We start with recalling the following lemma proven by Dujmović et al. <cit.>.
Let G be a triangulation of a surface of Euler genus g>0 and let T be a BFS spanning tree of G.
There exist edges a_1b_1,…,a_gb_g not contained in the tree T with the following property.
Let F_0 be the subset of edges of G comprised of the g edges a_1b_1,…,a_gb_g and
2g paths from the root of T to the vertices a_1,…,a_g and b_1,…,b_g.
The closed walk along the edges contained in F_0 bounds a part of the surface homeomorphic to a disk.
Using Lemma <ref>, we prove the following.
Let G be a triangulation of a surface of Euler genus g>0 and let T be a BFS spanning tree of G.
There exist a closed walk W in G,
a subtree T_0 of T that contains the root of T, and
k vertex-disjoint vertical paths P_1,…,P_k, k≤ 2g, such that
* the closed walk W bounds a part of the surface homeomorphic to a disk,
* the sets V(P_1),…,V(P_k) form a partition of V(T_0), and
* the closed walk W can be vertex-partitioned into at most 6g-1 paths, each subpath of one of the paths P_1,…,P_k.
Fix a triangulation G and a BFS spanning tree T.
Apply Lemma <ref> to get edges a_1b_1,…,a_gb_g.
The tree T_0 is the subtree of T formed
by the paths from the root to the vertices a_1,…,a_g and b_1,…,b_g, and
the walk W is the unique closed walk along the tree T_0 with the edges a_1b_1,…,a_gb_g added.
We refer to Figure <ref> for illustration in the case of the torus.
We next define paths P_1,…,P_k.
The embedding of the tree T_0 in the surface induces a natural cyclic ordering at each vertex (regardless
whether the surface is orientable or not).
Hence, it is possible to order the leaves of the tree T_0 from left to right and
let v_1,…,v_k be the leaves of T_0 listed in this order.
Note that k≤ 2g since each of the leaves is one of the vertices a_1,…,a_g and b_1,…,b_g.
The path P_1 is the path contained in T_0 from the leaf v_1 to the root of the tree T_0, and
the path P_i, i=2,…,k,
is the path contained in T_0 from the leaf v_i to the child of a vertex in V(P_1)∪⋯∪ V(P_i-1).
Note that some of the paths P_2,…,P_k may consist of the leaf vertex only.
We refer to Figure <ref> for illustration.
It remains to prove the last claim of the statement of the lemma.
We show that the closed walk W can be vertex-partitioned into 2g+2k-1 paths,
each subpath of one of the paths P_1,…,P_k.
Observe that the closed walk W can be split into the following 2g parts,
each between a consecutive pair of vertices a_1,…,a_g and b_1,…,b_g
when walking around the tree T_0 consistently with cyclic order at the vertices.
Figure <ref> gives an illustration for the case g=2 and k=4,
i.e., all four vertices a_1, a_2, b_1 and b_2 are the leaves of T_0.
We first analyze the case k=2g, i.e., the case when the vertices a_1,…,a_g and b_1,…,b_g are pairwise distinct leaves of T_0.
Let us think of the tree T_0 as obtained by adding paths P_1,…,P_k sequentially one after another.
At the beginning, the closed walk around the path P_1 can be covered by two vertical paths:
one following P_1 from v_1 to the root on the left and the other on the right.
When the path P_i is added, we may think of it as adding two new paths following it on the left and on the right and
splitting the path containing the parent of the top vertex of P_i into two pieces that
overlap at the parent of the top vertex of the path P_i (see Figure <ref>).
Hence, three vertical paths are added to the collection at each step and
so the number of paths needed to cover W is 3k-1.
Suppose now that some of the vertices a_1,…,a_g and b_1,…,b_g are not among the leaves of T_0 or
some of the leaves of T_0 correspond to multiple vertices.
Observe that there are exactly 2g-k such vertices (when counting multiplicities) and
each of them splits one of the created paths into two.
Hence, the total number of paths is 3k-1+(2g-k)=2g+2k-1,
which implies the bound in the statement of the lemma as k≤ 2g.
Lemma <ref> forms one of the two key ingredients for the proof of Theorem <ref>.
The second relates to partitioning disk regions bounded by vertical paths.
Similarly to <cit.>,
we make use of Sperner's Lemma, see e.g. <cit.>.
Let G be a near-triangulation.
Suppose that the vertices of G are colored with three colors in such a way that
the vertices of each of the three colors on the outer face are consecutive,
i.e., they form a path.
There exists an inner face that contains one vertex of each of the three colors.
The following lemma follows the lines of the proof of <cit.>;
we include a proof for completeness.
Let G be a near-triangulation and
let T be a BFS rooted spanning forest such that all roots of T are on the outer face.
If the boundary cycle of the outer face can be partitioned into at most 6 vertical paths,
say P_1,…,P_k, k≤ 6,
then there exists a collection 𝒫 of vertex-disjoint vertical paths such that
* the collection 𝒫 contains the paths P_1,…,P_k,
* every vertex of G is contained in one of the paths in 𝒫, and
* G/𝒫 has a rooted tree-decomposition of width at most seven such that
the root bag contains the vertices corresponding to the paths P_1,…,P_k.
We proceed by induction on the number of inner vertices of G.
The base case is when G has no inner vertices.
In this case, we simply set 𝒫 to be the set {P_1,…,P_k}.
We next present the induction step.
Fix a near-triangulation G and the paths P_1,…,P_k as described in the statement of the lemma.
If k≤ 5, we proceed as follows.
Color the vertices of the path P_1 and those reachable from this path through T red,
the vertices of the path P_2 and those reachable from this path through T green,
the vertices of the paths P_3,…,P_k and those reachable from these k-2 paths through T blue.
By Lemma <ref>, there exists an inner face with a red vertex, a green vertex and a blue vertex.
Let A, B and C be the vertical paths from the path P_1, the path P_2 and a path P_i, i∈{3,…,k} to these three vertices.
The case k=5 is illustrated in Figure <ref>.
We now apply induction to the inner triangulation delimited by parts of the paths P_1 and P_2 and the paths A and B,
the inner triangulation delimited by parts of the paths P_2 and P_i and the paths P_3,…,P_i-1 and B and C, and
the inner triangulation delimited by parts of the paths P_1 and P_i and the paths P_i+1,…,P_5 and A and C.
The collection 𝒫 is the union of the three collections produced by induction
with parts of paths P_1, P_2 and P_i replaced by the whole paths P_1, P_2 and P_i.
The rooted tree-decomposition of G/𝒫 is obtained by creating a new root bag containing the k+3≤ 8 paths P_1,…,P_k and A,B,C, and
making the root bags of the three tree-decompositions obtained by induction adjacent to this root bag.
We now deal with the case k=6.
Color the vertices of the paths P_1 and P_2 and those reachable from these two paths through T red,
the vertices of the paths P_3 and P_4 and those reachable from these two paths through T green,
the vertices of the paths P_5 and P_6 and those reachable from these two paths through T blue.
Let A, B and C be the vertical paths from the paths P_1,…,P_k to these three vertices, and
let a, b and c be the indices such that
the path A starts at a vertex adjacent to the path P_a,
B starts at a vertex adjacent to the path P_b, and
C starts at a vertex adjacent to the path P_c.
Note that at least one of the following two cases holds: b-a≥ 2 or c-b≥ 2.
By symmetry, we assume that b-a≥ 2 in the rest.
The situation is also illustrated in Figure <ref>.
We now apply induction to the inner triangulation G_ab delimited by parts of the paths P_a and P_b, the paths P_a+1,…,P_b-1 and the paths A and B,
the inner triangulation G_bc delimited by parts of the paths P_b and P_c, the paths P_b+1,…,P_c-1 and the paths B and C, and
the inner triangulation G_ca delimited by parts of the paths P_c and P_a, the paths P_c+1,…,P_a-1 and the paths C and A.
The collection 𝒫 is the union of the three collections produced by induction
with parts of paths P_a, P_b and P_c replaced by the whole paths P_a, P_b and P_c.
The sought rooted tree-decomposition of G/𝒫 is obtained as follows.
Create a root bag containing the eight paths P_1,…,P_k and A and B.
The root will have two children.
One of the children is the root bag of the rooted tree-decomposition of G_ab together with the whole rooted tree-decomposition of G_ab obtained by induction;
note that this bag contains the paths P_a,…,P_b and A and B.
The other child is a new bag containing the (at most eight) paths P_b,…,P_a and A, B and C;
this bag will have two children.
One of them is the root bag of the rooted tree-decomposition of G_bc together with the whole rooted tree-decomposition of G_bc obtained by induction, and
the other is the root bag of the rooted tree-decomposition of G_ca together with the whole rooted tree-decomposition of G_ca obtained by induction.
As the obtained rooted tree-decomposition of G/𝒫 has width at most seven and
has the properties given in the statement of the lemma,
the proof of the lemma is finished.
The next lemma is the last ingredient needed to prove the main result of this section,
which is Theorem <ref>.
Let G be a near-triangulation and
let T be a BFS rooted spanning forest such that all roots of T are on the outer face.
If the boundary cycle of the outer face can be partitioned into k≥ 6 vertical paths,
then there exist a 2-connected subgraph G' of G and a collection 𝒫 of vertex-disjoint vertical paths such that
* 𝒫 contains all vertical paths bounding the outer face,
* 𝒫 contains at most max{6,6k-32} paths,
* the vertex set of G' is the union of the vertex sets of the paths contained in 𝒫,
* the graph G' contains the boundary of the outer face,
* the graph G' has at most max{1,3k-18} internal faces, and
* each of the internal faces of G' is bounded by at most six paths contained in 𝒫.
The proof proceeds by induction on k.
The k vertical paths bounding the outer face are denoted by P_1,…,P_k.
We distinguish four cases depending on the value of k: k=6, k=7, k=8 and k≥ 9.
If k=6, we just set G' to be the cycle bounding the outer face and 𝒫={P_1,…,P_6}.
We next analyze the case k=7.
Color the vertices of the paths P_1, P_2 and P_3 and those reachable from these three paths through T red,
the vertices of the paths P_4 and P_5 and those reachable from these two paths through T green, and
the remaining vertices,
i.e., the vertices of the paths P_6 and P_7 and those reachable from these two paths through T, blue.
By Lemma <ref>, there exists an inner face with a red vertex, a green vertex and a blue vertex.
Let A, B and C be the vertical paths from the paths P_1,…,P_k to these three vertices, and
let a, b and c be the indices such that
the path A starts at a vertex adjacent to the path P_a,
B starts at a vertex adjacent to the path P_b, and
C starts at a vertex adjacent to the path P_c.
The subgraph G' formed by the vertices of the paths P_1,…,P_7 and the paths A,B,C, and
the set 𝒫={P_1,…,P_7,A,B,C} (note that |𝒫|=10) satisfies the statement of the lemma
unless (a,b)=(1,5) or (a,c)=(3,6) (see Figure <ref>).
As the two cases are symmetric, we analyze the former only.
Color the vertices of the paths P_2,…,P_4 and those reachable from these three paths through T yellow,
the vertices of the path P_1 and those reachable from this path through T magenta, and
the vertices of the path P_5 and those reachable from this path through T cyan.
Note that all vertices of A are colored with magenta and all vertices of B with cyan.
We apply Lemma <ref> to the subgraph of G bounded by the paths P_1,…,P_5 and
the paths A and B.
This subgraph contains an inner face with a magenta, cyan and yellow vertex, and
let A', B' and C' be the vertical paths leading from a vertex of V(P_1),
a vertex V(P_5) and a vertex of V(P_2)∪ V(P_3)∪ V(P_4) to these three vertices;
see Figure <ref>.
Since the subgraph G' formed by the vertices of the paths P_1,…,P_7 and the paths A',B',C', and
the set 𝒫={P_1,…,P_7,A',B',C'} (note that |𝒫|=10) satisfies the statement of the lemma,
the analysis of the case k=7 is finished.
We next analyze the case k=8.
Color the vertices of the paths P_1,…,P_3 and those reachable from these paths through T red,
the vertices of the paths P_4,…,P_6 and those reachable from these paths through T green, and
the remaining vertices,
i.e., the vertices of the paths P_7 and P_8 and those reachable from these paths through T, blue.
By Lemma <ref>, there exists an inner face with a red vertex, a green vertex and a blue vertex.
Let A, B and C be the vertical paths from the paths P_1,…,P_k to these three vertices, and
let a, b and c be the indices such that
the path A starts at a vertex adjacent to the path P_a,
B starts at a vertex adjacent to the path P_b, and
C starts at a vertex adjacent to the path P_c.
See Figure <ref>.
Unless a=1 and b=6, we proceed as follows.
If each of the three near-triangulations delimited by the paths P_1,…,P_8 and the paths A, B and C
is bounded by at most six of these paths,
we set G' to be the subgraph formed by the vertices of the paths P_1,…,P_8 and the paths A,B,C, and
𝒫={P_1,…,P_8,A,B,C}.
Since |𝒫|=11<14 and G' has three internal faces, the statement of the lemma holds.
Otherwise, exactly one of the three near-triangulations delimited by the paths P_1,…,P_8 and the paths A, B and C
is bounded by seven of these paths and the remaining two by at most six of these paths.
We apply induction to the near-triangulation delimited by seven of the paths P_1,…,P_8 and the paths A, B and C, and
we set G' to be the subgraph formed by the vertices of the paths P_1,…,P_8, the paths A,B,C and the subgraph obtained by induction, and
the set 𝒫 to contain the paths P_1,…,P_8, the paths A, B and C and the paths obtained by induction.
Note that G' has at most five internal faces and the set 𝒫 contains at most 11+3=14 paths.
We next assume that a=1 and b=6.
Color the vertices of the paths P_2,…,P_5 and those reachable from these four paths through T yellow,
the vertices of the path P_1 and those reachable from this path through T magenta, and
the vertices of the path P_6 and those reachable from this path through T cyan.
Note that all vertices of A are colored with magenta and all vertices of B with cyan.
We apply Lemma <ref> to the subgraph of G bounded by the paths P_1,…,P_6 and
the paths A and B (see Figure <ref>).
This subgraph contains an inner face with a magenta, cyan and yellow vertex, and
let A', B' and C' be the vertical paths leading from a vertex of V(P_1),
a vertex V(P_6) and a vertex of V(P_2)∪⋯∪ V(P_5) to these three vertices.
If the path C' leads to a vertex of V(P_3)∪ V(P_4),
then we set G' to be the subgraph formed by the vertices of the paths P_1,…,P_8 and the paths A', B' and C' and
the set 𝒫 to be the set {P_1,…,P_8,A',B',C'}.
Since G' has three internal faces and 𝒫 contains 11 paths,
the statement of the lemma follows.
If the path C' leads to a vertex of V(P_2)∪ V(P_5), we may assume that it leads to a vertex of V(P_5) by symmetry.
We apply induction to the near-triangulation delimited by the paths P_1,…,P_5 and the paths A' and C', and
we set G' to be the subgraph formed by the vertices of the paths P_1,…,P_8, the paths A',B',C' and the subgraph obtained by induction, and
the set to contain the paths P_1,…,P_8, the paths A', B' and C' and
the (at most three) additional paths obtained by induction.
Since G' has at most five internal faces and contains at most 11+3=14 paths,
the statement of the lemma follows.
We now analyze the general case k≥ 9.
Color the vertices of the paths P_1,…,P_3 and those reachable from these paths through T red,
the vertices of the paths P_4,…,P_6 and those reachable from these paths through T green, and
the remaining vertices,
i.e., the vertices of the paths P_7,…,P_k and those reachable from these paths through T, blue.
By Lemma <ref>, there exists an inner face with a red vertex, a green vertex and a blue vertex.
Let A, B and C be the vertical paths from the paths P_1,…,P_k to these three vertices, and
let a, b and c be the indices such that
the path A starts at a vertex adjacent to the path P_a,
B starts at a vertex adjacent to the path P_b, and
C starts at a vertex adjacent to the path P_c.
See Figure <ref>.
Let ℓ_ab=b-a, ℓ_bc=c-b and ℓ_ca=k+a-c.
Since ℓ_ab+ℓ_bc+ℓ_ca=k≥ 9,
one of ℓ_ab, ℓ_bc and ℓ_ca is at least four
unless ℓ_ab=ℓ_bc=ℓ_ca=3 and so k=9.
If ℓ_ab=ℓ_bc=ℓ_ca=3,
we set G' to be the subgraph formed by the vertices of the paths P_1,…,P_9 and the paths A,B,C, and
={P_1,…,P_9,A,B,C}.
Since ||=12 and G' has three internal faces, each bounded by six of the paths contained in ,
the statement of the lemma holds.
In the rest of the proof,
we assume that at least one of ℓ_ab, ℓ_bc and ℓ_ca is at least four.
If ℓ_ab≥ 4,
we apply induction to the near-triangulation bounded by the paths P_a,…,P_b and the paths A and B, and
to the near-triangulation bounded by paths P_b,…,P_k,P_1,…,P_a and the paths A and B.
If ℓ_ab<4 but ℓ_bc≥ 4,
we apply induction to the near-triangulation bounded by the paths P_b,…,P_c and the paths B and C, and
to the near-triangulation bounded by paths P_c,…,P_k,P_1,…,P_b and the paths B and C.
Otherwise,
we apply induction to the near-triangulation bounded by the paths P_a,…,P_c and the paths A and C, and
to the near-triangulation bounded by paths P_c,…,P_k,P_1,…,P_a and the paths B and C.
In each of these three cases,
we set the sought graph G' to be the union of the two graphs obtained by induction and
to be the union of the obtained sets of vertical paths.
It remains to estimate the size of and the number of faces of G'.
Let k_1 be the number of paths bounding the former of the two near-triangulations to which the induction has been applied, and
let k_2 be the number of paths bounding the latter.
Observe that k_1+k_2=k+6 and both k_1 and k_2 are at least seven.
By induction,
the set contains at most k+2+(5k_1-32)+(5k_2-32)=6k+2+30-2· 32=6k-32 vertical paths and
the number of internal faces of G' is at most (3k_1-18)+(3k_2-18)=3k+18-36=3k-18.
The proof of the lemma is now completed.
We are now ready to prove the main result of this section, which implies Theorem <ref>.
Let G be a triangulation of a surface of Euler genus g>0 and let T be a BFS spanning tree of G.
The tree T can be vertex-partitioned to a collection of vertex-disjoint vertical paths such that
the graph G/ has a rooted tree-decomposition with the following properties:
* the root bag has size at most max{8,32g-27},
* the root bag has at most 6·max{1,18g-21} children, and
* every bag except the root bag has size at most 8.
Moreover, every subtree ' of the tree-decomposition rooted at a child of the root satisfies the following:
* the bags of ' contain at most six paths that are contained in the root bag, and
* if P_1,…,P_k are all paths that are contained in the bags of ' but not in the root bag,
the subgraph induced by V(P_1)∪⋯∪ V(P_k)
has a component joined by an edge to each of the paths contained both in the root bag and in '.
Fix a triangulation G of a surface of Euler genus g>0 and a BFS spanning tree T of G.
We apply Lemma <ref> to obtain a closed walk W, a subtree T_0 of T and
k vertex-disjoint vertical paths P_1,…,P_k, k≤ 2g, with the properties given in Lemma <ref>.
Let ℓ be the number of paths, which are subpaths of P_1,…,P_k, that cover the walk W;
note that ℓ≤ 6g-1 by Lemma <ref>.
We first deal with the general case ℓ≥ 7 (note that if ℓ≥ 7, then g≥ 2).
We apply Lemma <ref> to the near-triangulation bounded by the closed walk W and
the ℓ vertical paths, which are subpaths of P_1,…,P_k, that cover the walk W.
We obtain a collection _0 of vertex-disjoint vertical paths that contains at most 5ℓ-32 additional vertical paths and
a 2-connected subgraph G' of G such that each internal face of G' is bounded by at most six paths contained in _0.
In addition, the number of faces of G', which we denote by f in what follows, is at most 3ℓ-18.
Since ℓ≤ 6g-1, we obtain that _0 contains at most 30g-27 additional vertical paths and
the number f of faces of G' is at most 18g-21.
We now replace in the collection _0 the subpaths of P_1,…,P_k that cover the closed walk W
with the paths P_1,…,P_k.
Hence, the size of the collection _0 is at most 32g-27 (note that k≤ 2g) and
each of the faces of G' is still bounded by at most six subpaths of the vertical paths contained in ,
(however possibly by multiple subpaths of the same vertical path).
We will soon proceed jointly for the cases ℓ≥ 7 and ℓ≤ 6.
To be able to do so, in the case ℓ≤ 6 (and so k≤ 6),
we set _0 to be the collection {P_1,…,P_k} and
G' the graph consisting of a single cycle corresponding to the closed walk W;
note that the only face of G' bounds a near-triangulation in G and f=1.
We now proceed jointly for both cases.
If there is a face of G' such that the subgraph induced by the vertices contained in the face
does not have a component joined by an edge to each of the (at most six) paths from _0 bounding the face,
some of the vertices on the boundary of the face must be joined by a chord separating the corresponding parts of the interior of the face.
By repeatedly adding such separating chords, we obtain a 2-connected graph G” with f'≤ 6f non-empty internal faces,
each bounded by at most six paths contained in _0, and such that
the subgraph induced by vertices contained in the face contains a component joined
by an edge to each of the (at most six) paths from _0 bounding the face.
We now apply Lemma <ref> to each of the f' near-triangulations bounded by the non-empty faces of G” and
obtain rooted tree-decompositions _1,…,_f' with width at most seven of each them.
We add to the collection _0 all additional vertical paths obtained by these f' applications of Lemma <ref> and
let be the resulting collection of vertical paths.
We now construct a rooted tree-decomposition of G/ with the properties given in the statement of the theorem.
We introduce a new root bag formed by the paths contained in _0 and
make the roots of the rooted tree-decompositions _1,…,_f' its children.
The root bag of the resulting tree-decomposition has size |_0|≤max{6,32g-27},
it has f'≤ 6f≤ 6max{1,18g-21} children, and all bags except the root have size at most 8.
Consider now a subtree ' rooted at a child of the root.
The only paths from _0 contained in the bags of ' are those bounding the face of G” that corresponds to ',
i.e., there are at most six such paths, and
the vertices contained in the paths of the bags of ' but not in the paths of _0
are exactly vertices contained inside the corresponding face and
so the subgraph of G induced by them has a component joined by an edge to each of the paths from _0 bounding the face.
We conclude that the obtained rooted tree-decomposition has the properties given in the statement of the theorem.
§ UPPER BOUND
We now present the asymptotically optimal upper bound on the twin-width of graphs embeddable in surfaces.
The twin-width of every graph G of Euler genus g is at most
6 · max{3√(47g)+1,2^21}=18√(47g)+O(1).
Fix a graph G of Euler genus g>0 and
let G_0 be any triangulation of the surface with Euler genus g that G can be embedded in such that V(G_0)=V(G) (to avoid
unnecessary technical issues related to adding new vertices, we permit G_0 to contain parallel edges).
We apply Theorem <ref> to the triangulation G_0 and an arbitrary BFS spanning tree T_0:
let be a collection of vertical paths and a rooted tree-decomposition with the properties given in the theorem.
Let P_1,…,P_k be the vertical paths contained in the root bag (note that k≤ 32g) and
let _1,…,_ℓ be the subtrees rooted at the children of the root bag (note that ℓ≤ 108g).
Further, let V_i, i∈ [ℓ], be the vertices contained in the vertical paths in the bags of the subtree _i that
are not contained in the root bag, i.e., that are not contained in V(P_1)∪⋯∪ V(P_k).
Note that for every i=1,…,ℓ, the subgraph induced by a set V_i contains a component that
is joined by an edge to each of the paths P_1,…,P_k contained in the subtree _i.
Let H_0 be the graph obtained from G_0 by contracting each of the following k+ℓ sets to a single vertex:
V(P_1),…,V(P_k) and V_1,…,V_ℓ.
Let a_1,…,a_k and b_1,…,b_ℓ be the resulting vertices.
We observe that H_0 can be obtained from G_0 by contracting edges and deleting vertices.
The vertices a_1,…,a_k are obtained by contracting paths P_1,…,P_k and
we may think of each vertex b_i, i=1,…,ℓ, to be obtained as follows:
first contract the component of the subgraph induced by the set V_i that
is joined by an edge to each of the paths P_1,…,P_k contained in the subtree _i to a single vertex, and
then delete all the vertices of V_i not contained in this component.
Since H_0 can be obtained from G_0 by contracting edges and deleting vertices,
the graph H_0 can be embedded in the same surface as G_0.
Hence, the number of the edges of H_0 is at most 3(k+ℓ)-6+3g≤ 3(k+ℓ+g) (the latter bound applies even if k+ℓ=2).
Also note that each of the vertices b_1,…,b_ℓ has degree at most six and
all its (at most six) neighbors are among the vertices a_1,…,a_k.
Let s=3√(47g); note that s≥ 6.
We next split the vertices a_1,…,a_k into sets A_1,…,A_k' and
the vertices b_1,…,b_ℓ into sets B_1,…,B_ℓ' as follows.
Keep adding the vertices b_1,…,b_ℓ to the set B_1 as long as the sum of their degrees does not exceed s,
then keep adding the remaining vertices among b_1,…,b_ℓ to the set B_2 as long as the sum of their degrees does not exceed s, etc.
Observe that the sum of the degrees of the vertices in each of the sets B_1,…,B_ℓ' is at most s+6≤ 2s and
the sum of the degrees of the vertices in each of the sets B_1,…,B_ℓ'-1 is at least s.
Each of the vertices a_1,…,a_k with degree larger than s forms a set of size one, and
the remaining vertices are split in the same way as the vertices b_1,…,b_ℓ.
Each of the sets A_1,…,A_k' has either size one or the sum of the degrees of its vertices is at most 2s, and
the sum of the degrees of the vertices in each of the sets A_1,…,A_k'-1 is at least s.
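For concreteness, this greedy grouping can be sketched in Python as follows; the function names and the interface are illustrative only, and the degree lists are assumed to be given in the fixed orders a_1,…,a_k and b_1,…,b_ℓ.

import math

def greedy_split(degrees, s):
    # Pack vertices into consecutive groups; a group is closed once its degree sum
    # exceeds s, so every closed group has degree sum larger than s and at most
    # s plus the maximum degree of its vertices.
    groups, current, total = [], [], 0
    for v, d in enumerate(degrees):
        current.append(v)
        total += d
        if total > s:
            groups.append(current)
            current, total = [], 0
    if current:
        groups.append(current)
    return groups

def split_contracted_vertices(deg_a, deg_b, g):
    s = 3 * math.sqrt(47 * g)
    # a-vertices of degree larger than s form singleton groups; the rest are packed greedily
    heavy = [[i] for i, d in enumerate(deg_a) if d > s]
    light = [i for i, d in enumerate(deg_a) if d <= s]
    packed = greedy_split([deg_a[i] for i in light], s)
    groups_a = heavy + [[light[j] for j in grp] for grp in packed]
    groups_b = greedy_split(deg_b, s)
    return groups_a, groups_b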
Let H'_0 be the graph obtained from H_0 by contracting the vertices
in each of the sets A_1,…,A_k' and each of the sets B_1,…,B_ℓ' to a single vertex;
note that the graph H'_0 does not need to be embeddable in the same surface as H_0.
Since the sum of the degrees of the vertices a_1,…,a_k and b_1,…,b_ℓ
is at most 6(k+ℓ+g)≤ 846g, we obtain that k'+ℓ'≤846g/s+2=2s+2,
i.e., H'_0 has at most 2s+2 vertices.
We now describe the order in which we contract the vertices of G, and
we analyze the described order later.
In what follows, when we say a layer,
we always refer to the layers given by the BFS spanning tree T_0 from the application of Theorem <ref>.
In particular, each vertex of G is adjacent only to the vertices in its own layer and the two neighboring layers.
To make the presentation of the order of contractions clearer,
we split the contractions into three phases.
Phase I:
This is the most complex phase and consists of ℓ subphases.
In the i-th subphase, i∈ [ℓ],
we contract all the vertices of the set V_i that are contained in the same layer to a single vertex
in the way that we now describe, and
we then possibly contract them to some of the vertices created in the preceding subphases.
In this phase, we never contract two vertices contained in different layers together and
no contraction involves any of the vertices from V(P_1)∪⋯∪ V(P_k).
Fix i∈ [ℓ].
Let G_i be the subgraph of G/ induced by the vertices contained in the bags of the subtree _i and
let n' be the number of the paths P_1,…,P_k that are contained in the bags of the subtree _i;
note that n'≤ 6.
If the graph G_i has less than 8 vertices, we proceed directly to the conclusion of the subphase,
which can be found below and which starts with contracting all vertices of V_i in each of the layers to a single vertex.
In the rest, we assume that G_i has at least 8 vertices and
so the graph G_i is a subgraph of a 7-tree G'_i with V(G'_i)=V(G_i).
It is also possible to assume that the n' vertices corresponding to the paths from the set {P_1,…,P_k}
are all contained in the initial complete graph, and
we assume this to be the case.
Let Q_1,…,Q_n be any order of the vertical paths corresponding to the vertices of G'_i such that
the neighbors of Q_j, j∈ [n], among Q_1,…,Q_j-1 form a complete graph of order at most 7 in G'_i and
the n' paths from the set {P_1,…,P_k} are the paths Q_1,…,Q_n'.
Let C_j be the complete subgraph of G_i formed by Q_j and its (at most 7) neighbors among Q_1,…,Q_j-1.
Note that the neighbors of each vertex of a path Q_j, j∈ [n], in G
are contained in at most seven of the paths Q_1,…,Q_j-1 (which are those forming the complete graph C_j), and
thus such a vertex has at most 21 neighbors on the paths Q_1,…,Q_j-1 (as they must be in the same or adjacent layers).
We define the j-shadow of a vertex v∈ V_i
to be the set of its neighbors contained in the paths Q_1,…,Q_j-1;
note that the j-shadow of a vertex contained in the path Q_j has at most 21 vertices.
We now use the tree-like structure of the 7-tree G'_i to define the order of contractions of the vertices contained in V_i;
this part of our argument is analogous to that used in <cit.>
to obtain an upper bound on the twin-width of graphs with bounded tree-width.
Subsequently for every j=n-1,…,n',
we will contract all the vertices of V_i
that are contained on paths in the same component of G'_i∖{Q_1,…,Q_j-1},
are in the same layer and have the same j-shadow to a single vertex in the way that we now describe.
For j=n-1,…,n',
we first contract the vertices in the same layer and with the same (j+1)-shadow
but from different subtrees of _i delimited by C_j to a single vertex,
processing one subtree of _i after another,
i.e., we first contract all pairs of such vertices contained in any two subtrees of _i delimited by C_j,
next contract to the obtained vertices the corresponding vertices contained in another subtree delimited by C_j,
then those contained in yet another subtree, etc.
When all the vertices in the same layer and with the same (j+1)-shadow from different subtrees of _i delimited by C_j
have been contracted to a single vertex,
we contract those in the same layer that have the same j-shadow.
Finally, if j>n', the vertex contained on the path Q_j is possibly contracted with another vertex (in the same layer)
if they have the same j-shadow.
In this way, all the vertices of V_i in each layer are eventually contracted to at most 2^{3n'} vertices.
Conclusion of subphase.
The i-th subphase concludes by contracting all the vertices of V_i in the same layer to a single vertex, and
if the vertex b_i is not the vertex with the smallest index in the set B_i' such that b_i∈ B_i',
then we contract the resulting vertices to those in the same layers that have been obtained in the subphases
associated with the vertices of B_i' that have smaller indices than b_i.
Phase II:
The graph that we obtain after Phase I has at most k+ℓ' vertices in each layer:
k of them corresponding to a_1,…,a_k and the remaining ℓ' to the sets B_1,…,B_ℓ' (see Figure <ref>).
For every i=1,…,k', we will contract all the vertices of A_i in the same layer to a single vertex as follows.
Let u_1,…,u_n be the vertices of A_i.
We first contract u_1 and u_2 in each layer proceeding from top to bottom (starting with the layer that contains both such vertices),
we then contract u_3 to the vertex created in the first step, again in each layer proceeding from top to bottom,
we then contract u_4 to the vertex created in the second step, etc.
At the end of this phase, we obtain a graph that is a subgraph of the strong product of a path and the graph H'_0.
Since the graph H'_0 has at most 2s+2 vertices, each layer now contains at most 2s+2 vertices.
Phase III:
We now contract all the vertices contained in the top layer to a single vertex,
then all the vertices of the next layer to a single vertex, etc.
Finally, we contract the vertices one after another to eventually obtain a single vertex,
starting with the two vertices corresponding to the top two layers,
then contracting the vertex corresponding to the third layer, etc.
Analysis of red degrees.
We now establish an upper bound on the maximum possible red degree of the vertices of the graphs
obtained throughout the described sequence of contractions.
We start with Phase I.
In the conclusion of the i-th subphase,
the only new red edges created during the subphase are
those among the vertices obtained from subtrees delimited by cliques C_n',…,C_n-1.
Since there are at most 2^21 possible shadows,
a vertex can have neighbors in its and the two neighboring layers, and
only two components are merged together at a time,
the red degree of these vertices does not exceed 2· 3· 2^21=3· 2^22.
At the beginning of the conclusion of the subphase,
each layer has at most 2^18 vertices obtained from contracting the vertices of V_i (note that
this bound also holds when G_i has less than eight vertices and so we proceeded directly
to the conclusion of the subphase).
The conclusion of the subphase starts with contracting these vertices to a single vertex per layer:
this can increase the red degree of at most six paths P_1,…,P_k and
the red degree of each vertex on the paths can increase by at most three.
When the subphase finishes,
each of the vertices contained in the paths P_1,…,P_k has at most 3ℓ' red neighbors, and
each of the vertices obtained by contracting the vertices of V_1,…,V_i has red degree at most 6s (since
the sum of the degrees of the vertices in each set B_1,…,B_ℓ' is at most 2s).
In particular, the red degree of each vertex contained in the paths P_1,…,P_k never exceeds 3(ℓ'+1).
We conclude that the red degree of none of the vertices exceeds
the largest of the following three bounds: 3· 2^22, 3(ℓ'+1) and 6s.
Moreover, the red degree of no vertex exceeds max{3ℓ',6s}≤ 6s+3 (recall that ℓ'≤ 2s+1)
at the end of each subphase.
During Phase II,
each vertex has at most max{k'+ℓ',2s} red neighbors in its layer and in each of the neighboring layers.
Hence, the red degree never exceeds
3max{k'+ℓ',2s}≤ 3max{2(s+1),2s}=6(s+1).
Since the number of vertices contained in each layer at the end of Phase II is at most k'+ℓ',
the maximum red degree at the end of Phase II is at most 3(k'+ℓ')-1.
Again, since the number of vertices contained in each layer at the end of Phase II is at most k'+ℓ',
the red degree of no vertex exceeds 3(k'+ℓ')-1 during the entire Phase III.
Hence, we have established that the red degree of no vertex exceeds max{6(s+1),3· 2^22},
which implies the bound claimed in the statement of the theorem.
§ LOWER BOUND
The following can be readily obtained from the result of Ahn et al. <cit.> on the twin-width of random graphs;
however, we include a short proof for completeness.
There exists a graph of Euler genus g that has twin-width at least √(3g/2)-O(g^3/8).
Fix g>2. It is well-known <cit.> that the complete graph of order n where n is equal to the Heawood number,
i.e.,
n=⌊(7+√(1+24g))/2⌋=√(6g)+O(1)
can be embedded in any surface of Euler genus g (the reason why we excluded g=2 is that
this is not true for the Klein bottle).
Let G be the n-vertex Erdős-Rényi random graph for p=1/2,
i.e., the graph G such that any pair of vertices of G
is joined by an edge with probability 1/2 independently of other pairs.
The probability that the degree of a particular vertex differs from (n-1)/2 by more than n^3/4
is at most 2e^-2n^3/2/(n-1) by the Chernoff Bound, and
the probability that the number of common neighbors of any particular pair of vertices differs from (n-2)/4 by more than n^3/4
is at most 2e^-2n^3/2/(n-2).
Hence, the probability that the degree of each vertex is between (n-1)/2-n^3/4 and (n-1)/2+n^3/4 and
that the number of common neighbors of each pair of vertices is between (n-2)/4-n^3/4 and (n-2)/4+n^3/4
is at least
1-2ne^-2n^3/2/(n-1)-n(n-1)e^-2n^3/2/(n-2).
Assume that n is sufficiently large that this probability is positive and
fix such an n-vertex graph G.
Clearly, G can be embedded in any surface of Euler genus g as it is a subgraph of K_n.
On the other hand, the contraction of any pair of vertices results in a vertex with red degree at least
2·(n-1)/2-2n^3/4-2·(n-2)/4-2n^3/4-2=n/2-2-4n^3/4=√(3g/2)-O(g^3/8).
The statement of the proposition now follows.
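As a purely numerical illustration of the quantities in this proof, one may evaluate the probability bound and the resulting red-degree lower bound for a few values of g; the short script below is only a sanity check of the arithmetic, not part of the argument.

import math

def heawood_n(g):
    # order of the largest complete graph embeddable in a surface of Euler genus g (g > 2)
    return math.floor((7 + math.sqrt(1 + 24 * g)) / 2)

def probability_bound(n):
    # 1 - 2n e^(-2n^(3/2)/(n-1)) - n(n-1) e^(-2n^(3/2)/(n-2))
    t = 2 * n ** 1.5
    return 1 - 2 * n * math.exp(-t / (n - 1)) - n * (n - 1) * math.exp(-t / (n - 2))

for g in (3, 10, 100, 1000):
    n = heawood_n(g)
    red_degree = n / 2 - 2 - 4 * n ** 0.75
    print(f"g={g}: n={n}, probability bound={probability_bound(n):.4f}, "
          f"red-degree bound={red_degree:.1f}")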
§ ACKNOWLEDGEMENT
The substantial part of the work presented in this article was done during the Brno–Koper
research workshop on graph theory topics in computer science held in Kranjska Gora in April 2023,
which all three authors have participated in.
|
http://arxiv.org/abs/2307.04445v1 | 20230710095314 | Learning Behavioral Representations of Routines From Large-scale Unlabeled Wearable Time-series Data Streams using Hawkes Point Process | [
"Tiantian Feng",
"Brandon M Booth",
"Shrikanth Narayanan"
] | cs.LG | [
"cs.LG",
"eess.SP"
] |
University of Southern California
Los Angeles
CA
USA
[email protected]
University of Colorado Boulder
Boulder
CO
USA
[email protected]
University of Southern California
Los Angeles
CA
USA
[email protected]
Continuously-worn wearable sensors enable researchers to collect copious amounts of rich bio-behavioral time series recordings of real-life activities of daily living, offering unprecedented opportunities to infer novel human behavior patterns during daily routines. Existing approaches to routine discovery through bio-behavioral data rely either on pre-defined notions of activities or use additional non-behavioral measurements as contexts, such as GPS location or localization within the home, presenting risks to user privacy. In this work, we propose a novel wearable time-series mining framework, Hawkes point process On Time series clusters for ROutine Discovery (HOT-ROD), for uncovering behavioral routines from completely unlabeled wearable recordings. We utilize a covariance-based method to generate time-series clusters and discover routines via the Hawkes point process learning algorithm. We empirically validate our approach for extracting routine behaviors using a completely unlabeled time-series collected continuously from over 100 individuals both in and outside of the workplace during a period of ten weeks. Furthermore, we demonstrate this approach intuitively captures daily transitional relationships between physical activity states without using prior knowledge. We also show that the learned behavioral patterns can assist in illuminating an individual's personality and affect.
Learning Behavioral Representations of Routines From Large-scale Unlabeled Wearable Time-series Data Streams using Hawkes Point Process
Shrikanth Narayanan
=======================================================================================================================================
§ INTRODUCTION
Wearable sensors have garnered considerable interest in many fields, such as healthcare, user authentication, and entertainment, over the last two decades <cit.>. These non-obtrusive devices, which are often small in size and have efficient computational capabilities, can be extremely useful in capturing vital bio-metric and bio-behavioral data from individuals over a prolonged period in natural settings <cit.>. Such rich and vast amounts of multimodal time-series data collected directly from everyday life allow for a more comprehensive understanding of the factors affecting activities of daily living (ADLs) <cit.>, including but not limited to social interactions <cit.>, sleep patterns <cit.>, physical activities <cit.>, and even emotion variations <cit.>. Increasingly, the ability to recognize ADLs offers researchers opportunities to investigate broad human behavior patterns and infer common daily routines. Routine behavior is notably meaningful in quantifying what activity pattern people adopt and whether these patterns cause variations of psychological well-being and personality within groups of people <cit.>.
In this paper, we present a novel data processing approach, Hawkes point process On Time series clusters for ROutine Discovery (HOT-ROD) for learning routine patterns in biobehavioral time series from wearable sensors. Our proposed HOT-ROD pipeline includes data processing components ranging from aggregation, imputation, filtering, time-series clustering, and routine discovery. We show that the proposed routine features, comprised of temporally linked cluster transitions in multimodal wearable recordings, can assist in illuminating an individual's personality and affect as well as aspects of task performance such as job behaviors. The main contributions of this work are as follows:
* We propose a novel combination of Toeplitz Inverse Covariance-Based Clustering (TICC) <cit.> and Hawkes point process <cit.> for discovering routine characteristics in long-term real-world wearable recordings. The approach can operate without expert knowledge of the data or the collection of sensitive contextual information.
* Our learned routine patterns capture temporal relationships between adjacent time-series clusters, thus providing valuable insights into understanding the shared behavior patterns within a group of individuals.
* Using naturalistic heart rate and step count features from over 100 individuals in the workplace and at home over a period of ten weeks, we show that our HOT-ROD approach combined with daily summaries of physical activity from wearable sensors helps identify job properties and personalities of participants. Furthermore, we empirically show that our approach can achieve modest improvement in predicting individual attributes from a few days of recordings.
§ RELATED WORKS
Many conventional approaches in characterizing daily routines require the acquisition of labeled contexts, like trajectory in home settings <cit.>, life event sequences (eating, sleeping, etc.) <cit.>, and GPS locations <cit.>. The Kasteren data set <cit.> was collected in a 3-bedroom apartment setting for a period of 28 days using 14 state change sensors. The investigators then achieved a timeslice accuracy of 95.6% on this data set using a hidden Markov model and conditional random fields. Some following studies have successfully utilized a probabilistic neural network learning model to separate the normal routine from unusual and suspected routines <cit.>. Meanwhile, other researchers have studied human behavior by tracking the spatial properties of participants via GPS <cit.>. However, one primary concern about these studies is that the data collection protocol could invade privacy by tracking sensitive and identifiable knowledge about an individual, such as continuous GPS. These approaches might also be costly and not scalable due to the substantial amount of effort required from researchers in either setting up the recording system or annotating the data.
To prevent the acquisition of personally identifiable information, a number of studies have built machine learning models that infer a pre-selected set of activities (walk, stand, etc.) from unlabeled wearable time-series <cit.>, such as motion and posture. However, these models are typically trained on data gathered in laboratory settings and may not perform well on in situ data sets. As an alternative, motif-based methods have obtained empirical success in detecting repeated patterns, but they can be computationally prohibitive because the optimal granularity for motif patterns must be searched for over the whole time-series. Unlike motif-based data mining, one other recent study proposed to learn routine behaviors via a sparse and low-rank matrix decomposition technique <cit.>. In that work, real-world physical activity data collected from Fitbit were used to cluster the behaviors of participants without expert knowledge or micro-pattern extraction. The main disadvantage of this approach, however, is the limited interpretability of the decomposed matrices returned by the matrix operations on the sensor data. To our knowledge, our proposed causality-based pattern extraction from unlabeled wearable sensor recordings has not been previously considered.
§ DATASET INTRODUCTION
In this study, we use the publicly available TILES-2018 dataset <cit.> for our experiments. This data set comprises a comprehensive set of experiments aimed at studying how physiological and behavioral variables affect employee wellness, personality, and workplace stress. Throughout a ten-week period, physiological, environmental, and human interaction data were gathered from hospital employees who primarily provide patient care (nurses, technicians, etc.) at a large critical-care hospital. The complete dataset consisted of 213 hospital workers, including 120 female participants. In total, there were 113 (54.3%) registered nurses enrolled in the study, with the rest reporting some other job title, such as occupational or lab technicians. A total of 54 participants reported working the night shift and the rest worked during the day. More details regarding the dataset can be found in <cit.>.
§.§ Study Procedure
The participants involved in this study need to complete a survey during onboarding, consisting of a web-based series of surveys pulled from existing test battery questionnaires that assessed standard demographic information, personality, and affect variables. In this work, we primarily investigate how automatically discovered routine patterns correlate with job type, gender, shift type, personality, and affect. Each of the five personality factors (extraversion, agreeableness, conscientiousness, openness, neuroticism) is measured via the Big Five Inventory-2 survey <cit.>. Five-factor scores were computed by taking the average of all responses, where each factor score is in the range of 1-5. The Positive and Negative Affect Schedule (PANAS) was administered <cit.> to measure affect. The PANAS consists of 10 positive affect items and 10 negative affect items. Positive and negative affect scores were calculated by summing individual scores from each group (positive and negative) with higher scores representing higher levels of corresponding affect.
§.§ Wearable Data
In this study, researchers instructed participants to wear a Fitbit Charge 2 <cit.>, an OMsignal garment-based sensor <cit.>, and a customized audio badge <cit.> which collectively tracks heart rate, physical activity, speech characteristics, and many other human-centric signals. Participants were asked to wear the OMsignal garment and audio badge only during their work shifts due to the battery limitation of these devices. However, participants were instructed to wear a Fitbit sensor as often as possible throughout the 10-week data collection period. In the present study, we focus on the Fitbit time series data since it is present for most participants both during and outside of their working hours. This data stream offers information about energy expenditure, sleep quality, step count and heart rate measured through photoplethysmography (PPG).
§ METHOD
In this section, we introduce our HOT-ROD analysis pipeline for discovering routine features from Fitbit time-series recordings. As shown in Fig. <ref>, our proposed HOT-ROD data pipeline consists of three major modules: 1. data pre-processing; 2. time-series clustering; 3. routine feature extraction via the Hawkes point process. To calculate routine features using the Hawkes point process, we first group time-series by day. In this work, a day is defined as the variable period of time between sleep onsets as determined from the Fitbit daily summary. Sleep durations shorter than six hours (presumed to be naps) are ignored when determining these day boundaries in the time series. Some participants may sleep less than six hours regularly or may not wear their Fitbit devices to bed, which would result in measured days lasting upwards of 30 hours. To remove these outliers, we retain only days whose length is between 20 and 28 hours, consistent with the definition of a circadian cycle <cit.>.
The pre-processing component includes data aggregation, data imputation, and data filtering. We first aggregate multivariate time-series data at the fixed rate of one minute. The second step uses an Autoregressive integrated moving average (ARIMA) model to fill in the missing data in the aggregated output. We then utilize the Savitzky-Golay filter to smooth the imputed time-series data without substantially distorting the signal following previous work <cit.>. Following the data pre-processing scheme, we cluster the pre-processed data stream using Toeplitz Inverse Covariance-Based Clustering (TICC) <cit.>. At the final stage, we extract routine features utilizing the Hawkes point process technique <cit.> where cluster transitions serve as the process events. The details of each module are further described below.
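As an illustration of the day segmentation described above, the following sketch assumes that samples is a minute-level pandas DataFrame indexed by timestamp and that sleep_onsets is a chronologically sorted list of sleep-onset timestamps with naps shorter than six hours already discarded; both names are placeholders.

import pandas as pd

def split_into_days(samples, sleep_onsets, min_hours=20, max_hours=28):
    # A "day" runs from one sleep onset to the next; keep only days of 20-28 hours.
    days = []
    for start, end in zip(sleep_onsets[:-1], sleep_onsets[1:]):
        length_hours = (end - start) / pd.Timedelta(hours=1)
        if min_hours <= length_hours <= max_hours:
            days.append(samples.loc[start:end])
    return days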
§.§ Data Pre-processing
Data Aggregation Fitbit Charge 2 sensors read PPG heart rate samples at intervals of less than one minute, but the time differences between two consecutive samples are inconsistent. Prior studies have suggested that PPG-based heart rate should be averaged over a one-minute duration to obtain a reliable measurement <cit.>, and thus adopt this strategy. Another compelling reason to aggregate the PPG heart rate samples is to rate-match the data output of step count samples, which are also made available every minute.
Data Imputation Missing data in wearable sensors recordings are unfortunately unavoidable and are often encountered for various reasons including intermittent disconnections, body movements, and firmware malfunctions <cit.>. In this work, we select an Autoregressive integrated moving average (ARIMA) model to impute missing values <cit.>. We utilize ARIMA to populate missing values based on past observations. Missing points that occur in the first five data points in the time series are filled with mean values from the corresponding day in the time series. Prior literature <cit.> suggests that imputation works better when the proportion of missing data is small, thus we experimentally choose to fill the missing segments that are not continuously missing over 15 minutes. In this imputation experiment, we masked 10%, 25%, and 50% of data continuously in a set 60 minutes of time series for ARIMA to impute. We choose 15 minutes since we observe that ARIMA yields significantly higher mean absolute errors when the missing data rate is 50% (30 minutes) than for missing rates at 25% (15 minutes) and 10% (6 minutes).
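A minimal sketch of this gap-filling step is given below; it assumes a minute-level pandas Series with NaNs marking missing samples, and a fixed ARIMA order as a placeholder (the study selects the order by the Akaike information criterion). Gaps at the very start of a day are handled separately in the study (filled with the day's mean), so they are simply skipped here.

from statsmodels.tsa.arima.model import ARIMA

def impute_short_gaps(series, order=(2, 1, 1), max_gap=15):
    # Forecast each gap of at most `max_gap` minutes from the preceding observations;
    # longer gaps and gaps at the very beginning of the series are left untouched.
    values = series.copy()
    missing = values.isna()
    i = 0
    while i < len(values):
        if missing.iloc[i]:
            j = i
            while j < len(values) and missing.iloc[j]:
                j += 1
            gap = j - i
            history = values.iloc[:i].dropna()
            if gap <= max_gap and len(history) >= 5:
                fit = ARIMA(history.to_numpy(), order=order).fit()
                values.iloc[i:j] = fit.forecast(steps=gap)
            i = j
        else:
            i += 1
    return values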
Data Filtering Wearable sensor recordings are vulnerable to noise and motion artifacts and thus filtering becomes an essential prerequisite for further processing of the signal. We address this issue by applying the Savitzky-Golay filter <cit.>. This is a well-known smoothing approach that increases the precision of signals while simultaneously preserving the signal trend. The filter tries to fit a polynomial with some pre-selected fixed degree z to the time-series sequence 𝐒 of length 2m + 1 centered at i = 0 such that the squared error is minimized over coefficients of a polynomial p_i = ∑_x=0^z a_xi^x:
min_a_0:z∑_i = -m ^ m (p_i - s_i) ^ 2
It is worth noting that there are many other attractive approaches available in the literature to pre-process the time-series data, but empirically testing each is beyond our scope in this work.
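For concreteness, the smoothing step above with m = 2 and a cubic polynomial corresponds to a five-sample window; a minimal sketch using SciPy is given below, where the one-dimensional input array stands for either the imputed heart-rate or step-count series.

from scipy.signal import savgol_filter

def smooth(signal_1d):
    # window of length 2*m + 1 = 5 samples (m = 2) and a cubic polynomial, as chosen above
    return savgol_filter(signal_1d, window_length=5, polyorder=3)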
§.§ Time-series Clustering
Definitions We first introduce some notations and definitions used in this and later sections. The pre-processed time-series data is defined as a set of observations, (𝐬_1, 𝐬_2,..., 𝐬_n) ordered by time, where each 𝐬_i ∈𝑅^m is the i-th observation in time with m features. The sensor features in this case are PPG heart rate and step-count that are observed every minute, so m=2. We further create 𝐗_i which consists of w observations (𝐬_i, 𝐬_i+1, ... , 𝐬_i+w-1). The aim of TICC described next is to partition these sequential time-series observations to form K clusters.
Toeplitz Inverse Covariance-Based Clustering (TICC) In this method, each cluster is characterized by a Toeplitz Gaussian inverse covariance Θ_k∈𝑅^mw× mw and empirical mean μ^m. The Toeplitz Gaussian inverse covariance essentially captures the interdependencies between different observations. This clustering approach also enforces temporal consistency between consecutive vectors 𝐗_i and 𝐗_i+1 to find repeated long-range patterns in the data that represent particular behaviors of the object. In summary, the TICC method assigns each frame to one Gaussian inverse covariance Θ_k by minimizing the following objective function:
minimize_𝐏, Θ ∑_k=1^K∑_𝐗_i∈𝐏_k (-𝑙𝑙(𝐗_i, Θ_k) + β·1_𝐗_i+1∉𝐏_k)
where in Eq. <ref>, K is the number of clusters and 𝐏_k is the set of points assigned to cluster k. 1_𝐗_i+1∉𝐏_k is an indicator function equal to one when the cluster assignment of the next sample 𝐗_i+1 differs from that of the current sample 𝐗_i, and zero otherwise. β is the penalty parameter that controls the temporal consistency. A larger β encourages neighboring samples to be assigned to the same cluster. <cit.> also suggests that the performance of time-series clustering largely depends on the choice of β when the sample size is adequate. Finally, 𝑙𝑙(𝐗_i, Θ_k) represents the log-likelihood that 𝐗_i belongs to 𝐏_k, which is defined below:
𝑙𝑙(𝐗_i, Θ_k) = -1/2(𝐗_i - μ_𝐤)^⊺Θ_k(𝐗_i - μ_𝐤)
+ 1/2log detΘ_k - mw/2log(2π)
where μ_𝐤 is the empirical mean of cluster k, and m is the number of features in each observation. In our context, we choose this method for time-series clustering since our interest is to robustly identify long-range repeated patterns while contending with the possible presence of irrelevant data points.
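To make the role of β concrete, the cluster-assignment subproblem of TICC, with the inverse covariances Θ_k and means μ_k held fixed, can be solved by a Viterbi-style dynamic program. The sketch below omits the Θ_k update (the Toeplitz graphical lasso step) of the full TICC algorithm and drops the constant term of the Gaussian log-likelihood, which does not affect the assignment.

import numpy as np

def neg_log_likelihood(x, mu, theta):
    # -ll(X_i, Theta_k) up to an additive constant
    d = x - mu
    _, logdet = np.linalg.slogdet(theta)
    return 0.5 * d @ theta @ d - 0.5 * logdet

def assign_clusters(windows, mus, thetas, beta):
    # windows: T stacked observation vectors X_i; mus/thetas: K cluster parameters
    T, K = len(windows), len(mus)
    cost = np.array([[neg_log_likelihood(x, mus[k], thetas[k]) for k in range(K)]
                     for x in windows])
    total = cost[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        prev = total.copy()
        switch = prev.min() + beta          # best previous cluster plus switching penalty
        for k in range(K):
            if prev[k] <= switch:           # staying in cluster k is at least as cheap
                back[t, k], step = k, prev[k]
            else:
                back[t, k], step = int(prev.argmin()), switch
            cost[t, k] += step
        total = cost[t]
    labels = np.empty(T, dtype=int)
    labels[-1] = int(total.argmin())
    for t in range(T - 1, 0, -1):
        labels[t - 1] = back[t, labels[t]]
    return labels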
§.§ Hawkes Point Process
The Hawkes process <cit.>, also known as a self-exciting counting process, is a stochastic point process for studying event sequence patterns in which historical event occurrences are assumed to increase the chance of arrival of new events. In general, a point process is a collection of discrete events in time together with their associated types, {t_i, u_i} with time t_i∈ [0, n] and event type u_i∈ [0, U], where n and U are the maximum time in a time-series and the total number of event types, respectively. Here, we define d_i as the cluster assigned to the i-th time point. In this study, we define an event as a transition between different time-series clusters. For instance, given two consecutive time points {t_i, d_i} and {t_i+1, d_i+1} in the time-series of a day, we can define an event as {t_i+1, (d_i→ d_i+1)}, such that d_i≠ d_i+1. Note that here n is the total number of time points in the time-series of a day.
A multi-dimensional point process with U types of events can be equivalently represented by U counting processes: 𝒩_u = {𝒩_u(t)| t ∈ [0, n]}, where 𝒩_u(t) denotes the number of type-u events occurring before time t. There are K^2-K possible types of events by the definition of the event in this work. Let the history ℋ^𝒰_t be the list of events up to time t:
ℋ^𝒰_t = { (t_i, u_i) | t_i < t, u_i∈𝒰}
Then, the expected instantaneous rate of occurring type-u events given history is :
λ_u(t) dt = λ_u(t|ℋ^𝒰_t)
We can then derive the intensity function λ_u(t) of the multi-dimensional Hawkes process as:
λ_u(t) = μ_u + ∑_i:t_i<tϕ_uu_i(t-t_i)
where μ_u is the exogenous (base) intensity, independent of the history. ϕ_uu'(t) is the impact function capturing the temporal influence of a type-u' event on subsequent type-u events. We can further define ϕ_uu'(t) as:
ϕ_uu'(t) = ∑_m=1^M a_uu'^m k_m(t)
where k_m(t) is the m-th basis function and a_uu'^m represents the coefficient corresponding to k_m(t). We adopt Gaussian basis functions in this study. We say that type-u' events do not Granger-cause type-u events when the function λ_u(t) is independent of historical events of type u'. Finally, we can build a Granger causality graph G=(𝒰, ℰ) with the U types of events as the nodes and directed edges representing the causal relations. In this study, the routine feature is naturally defined as the adjacency matrix 𝐀 of the Granger causality graph learned from the Hawkes process. Element 𝐀_uu' can be viewed as the infectivity (∫_0^∞ϕ_uu'(s) ds) of type-u' cluster-transitions on type-u cluster-transitions.
A detailed description of the above definitions can be found in <cit.>. To discover the Granger causality graph from a Hawkes point process, we adopt the learning algorithm proposed by <cit.>. This algorithm learns the Granger causality graph robustly given a few training sequences.
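In practice, the input to this learning algorithm is a set of per-day event sequences. A minimal sketch of how a day's minute-level cluster labels are converted into timestamped transition events (excluding, if desired, a cluster reserved for missing measurements) is given below.

def transition_events(labels, missing_cluster=None):
    # Convert one day's cluster labels into (time, event-type) pairs, where an
    # event type is an ordered pair (from_cluster, to_cluster).
    events = []
    for t in range(1, len(labels)):
        a, b = labels[t - 1], labels[t]
        if a != b and a != missing_cluster and b != missing_cluster:
            events.append((t, (a, b)))
    return events

The resulting per-day event lists, pooled across days and participants, are the training sequences consumed by the Hawkes learning algorithm.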
Summary Our proposed HOT-ROD pipeline aims to learn routine characteristics from wearable time-series without prior knowledge or the acquisition of labeled event sequences. The time series data used in this study contains continuous measurements of physiological response and physical activity. The proposed learned routine features capture patterns in how people advance from one state to another in everyday life.
§ RESULT
In this section, we validate the effectiveness of the proposed HOT-ROD approach for predicting demographic, personality and affect with Fitbit time series data.
§.§ Experimental Setup
§.§.§ Daily Summary Routine Feature
We use Fitbit daily summary data to construct our baseline model. The Fitbit daily summary data is extracted using the API provided by Fitbit. The measurements we select from Fitbit daily summary report include sleep duration, sleep efficiency, step counts, resting heart rate, and heart rate zone duration. Here, Fitbit categorizes heart rate into 4 zones:
* Out of Zone (heart rate is below 50% of its maximum).
* Fat-burn Zone (heart rate is 51% to 69% of its maximum).
* Cardio Zone (heart rate is 70% to 84% of its maximum).
* Peak Zone (heart rate is above 85% of its maximum).
A set of 5 statistical functionals (e.g., max, standard deviation) is then applied to these Fitbit daily summary measurements.
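A simple mapping from a heart-rate sample to these zones might look as follows; the handling of values that fall exactly on a zone boundary and the participant's maximum heart rate (e.g., the common 220-minus-age approximation) are assumptions here, not details given above.

def heart_rate_zone(hr, max_hr):
    # Zone boundaries follow the percentages listed above; boundary values are
    # assigned to the lower zone as a convention.
    ratio = hr / max_hr
    if ratio <= 0.50:
        return "out_of_zone"
    if ratio <= 0.69:
        return "fat_burn"
    if ratio <= 0.84:
        return "cardio"
    return "peak"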
§.§.§ HOT-ROD Routine Feature
We compute the HOT-ROD routine features from the Fitbit Charge 2 time-series of PPG heart rate and step count. Routine behaviors may differ between workdays and off-days, so we learn the routine features of workdays and off-days separately. We also observe that the amount of data available per participant varies (min: 2 days, max: 70 days), which can lead to biased results without proper data selection. Hence, we randomly select an equal number of days of data from the workdays and off-days of a participant for our analysis. In our study, we pick n=5 days of each type. We choose 5 days since it ensures a reasonable amount of data per participant while retaining a sufficient number of qualified participants in the analysis. In the end, there are 101 participants retained in this experiment.
According to Section <ref>, we first aggregate PPG heart rate and step count every minute for a day of data. We impute the missing data in each aggregated time-series using the ARIMA model, where we estimate the number of time lags, the degree of difference, and the order of the moving average according to the Akaike information criterion. We fill the continuous missing segments over 15 minutes with large enough negative values. We then filter the imputed time series using an S-G filter. To minimize the deforming of the signal, we choose a small window size m = 2, and cubic order polynomials in the S-G filter. The window size parameter could be tuned systematically based on criteria and heuristics defined in <cit.>, but we leave this endeavor for future work.
Prior to time-series clustering, we z-normalize each time-series to remove variance between participants. We empirically experiment with the number of clusters K ∈{3, 4, 5} in TICC. We set β to 10 in this experiment to encourage neighboring samples to be assigned to the same cluster. We want to highlight that we plan to tune β systematically in future work. Since some input time series may still contain missing elements (segments missing for more than 15 minutes), one output cluster is associated with "missing measurements". We ignore this cluster in the following analysis. Finally, we apply the Hawkes point process to learn the infectivity between cluster transitions.
§.§.§ Model Description
We observe that the distributions of ground truth assessments exhibited significant skew, posing a difficulty for most supervised learning algorithms, as they will be biased towards the majority group, leading to poor predictions on minority labels. Therefore, we choose to binarize the ground truth label as our final prediction target. We binarize personality, affect scores by the median split, while we categorize job type and work shift as nurses/non-nurses and day-shift/night shift, respectively. We then evaluate the efficacy of the Fitbit summary routine feature and HOT-ROD routine features extracted above by predicting binarized ground-truth labels using the Random Forest (RF) classifier. We select Random Forest models since they have considerable advantages over other techniques with respect to robustness to noise, tuning simplicity, and the ability to choose the most relevant features from high-dimensional data input, where many features are often redundant <cit.>. Specifically, we perform the predictions using three sets of features: 1. Fitbit summary routine feature; 2. HOT-ROD routine feature; 3. Fitbit summary routine feature and HOT-ROD routine feature. We perform 5-fold cross-validation and report the average results in the macro-F1 score. We grid search the hyper-parameters in the RF model as follows: 1. Number of estimators: [10, 20, 30]; 2. Feature selection criterion: ["gini", "entropy"]; 3. Max depth of the tree: [4, 5, 6]; 4. Minimum samples to split: [2, 3, 5].
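A sketch of this classification setup with scikit-learn is shown below; features and binary_labels stand for one of the three feature sets and a median-split target, and the grid matches the hyper-parameters listed above (the exact cross-validation protocol of the study may differ in detail).

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [10, 20, 30],
    "criterion": ["gini", "entropy"],
    "max_depth": [4, 5, 6],
    "min_samples_split": [2, 3, 5],
}

def evaluate(features, binary_labels):
    # 5-fold cross-validation with macro-F1 scoring over the hyper-parameter grid
    search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid,
                          scoring="f1_macro", cv=5)
    search.fit(features, binary_labels)
    return search.best_params_, search.best_score_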
§.§ Prediction Results
The experimental results for predicting the IGTB assessments (personality and affect) and demographic information (job type, shift) are listed in Table <ref> and Table <ref>, respectively. For predicting demographics, HOT-ROD features combined with Fitbit summary routine features achieve the best performance, with a better F1 score in predicting work shift than using either the Fitbit summary features or the HOT-ROD routine features alone. We also observe that HOT-ROD reaches the best performance in predicting neuroticism when the assigned number of clusters is 4. HOT-ROD features also outperform the Fitbit summary and the combined feature set in classifying conscientiousness and openness when the number of clusters is 3. Combining the feature sets achieves the best performance in determining extraversion, agreeableness, and the affect-related variables.
§ DISCUSSIONS
The results in Table <ref> and Table <ref> demonstrate foremost that personality and affect prediction from this data (collected in a natural setting outside of a well-controlled lab) is considerably challenging using standard physiologic features and simple machine learning techniques, since none of the validation scores reaches above 70% even when the label is binarized. They also suggest that routine patterns, derived from wearable recordings by our HOT-ROD analysis, modestly improve performance. The prediction results show that the HOT-ROD features work comparatively reliably when the number of clusters is below 4. This may be because the number of points available for each cluster transition in a day decreases dramatically when the number of clusters increases, leading to an imprecise estimation of the Granger graph from the event time-series.
We further identify that HOT-ROD features yield substantially better performance in predicting conscientiousness, which is known for measuring self-discipline. Moreover, HOT-ROD routine features combined with Fitbit summary routines are better predictors of job type and shift type. These prediction results demonstrate that HOT-ROD routine features align quite well with job type and self-discipline, which closely relate to human behavior. Finally, we observe that the HOT-ROD features produce a noticeably better F1-score in classifying positive affect. Although the HOT-ROD pipeline can capture routine features that predict human behavior using only a few days of data, the prediction results depend on the number of clusters assigned in TICC. This draws attention to the fact that more data are needed to generate a reliable estimate of routine behavior.
To better understand the learned routine features in the case when the number of clusters K = 4, we further infer an interpretation for each cluster as follows: 1. Rest activity; 2. Light activity; 3. Moderate activity or exercises; 4. Missing measurements.
Fig. <ref> displays the infectivity matrix for various cluster transitions of the high and low conscientiousness groups. There are 13 noticeable causality relations between cluster transitions, while none of the cluster-transition events have obvious self-triggering patterns. This implies that most events do not have a periodic daily behavior. Both groups behaved similarly on workdays, while Light activity → Rest transitions are more likely to be triggered by Rest → Light activity transitions in the low conscientiousness population on off-days. Additionally, Rest → Moderate activity transitions also tend to impact Moderate activity → Light activity transitions in the low conscientiousness population on off-days. These causal relations indicate that the low conscientiousness population tends to be more sedentary and less active on non-working days. This finding is consistent with the prior work reported in <cit.> and <cit.>. Consistent with the feature importance returned by the random forest model, we also observe that these learned causal relations offer important information for predicting conscientiousness type.
§ CONCLUSION
We propose a technique, HOT-ROD, for discovering routine patterns in wearable sensor time-series data utilizing the Granger causality graph extracted from a time-series of cluster-transition events. Using a data set of over 100 participants working in a hospital environment for ten weeks, we show that this data-driven technique intuitively captures transitional behaviors between activity states in a manner consistent with personality without using any prior knowledge. We have also shown that routine features extracted from HOT-ROD combined with routine features derived using Fitbit daily summary information modestly improve the performance in predicting job type, work shift, extraversion, agreeableness and affect variables than using a single set of features.
As a next step, we want to examine how the switching penalty β impacts the learned routine behaviors. Furthermore, we believe the performance of the proposed technique will further increase when more data are available, but we also hope to systematically study the amount of data required to learn robust routine behavior patterns using our approach. In addition, we believe this technique is simple enough to generalize to other data sets, possibly more broadly than wearable sensor readings. Ultimately, we plan to compare our method with other popular time-series approaches based on time-series clustering <cit.>, motif finding <cit.>, and deep representation learning <cit.>.
§ ACKNOWLEDGEMENT
This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2017-17042800005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
|
http://arxiv.org/abs/2307.05165v1 | 20230711104933 | The $φ^4$ lattice model with cubic symmetry in three dimensions: RG-flow and first order phase transitions | [
"Martin Hasenbusch"
] | hep-lat | [
"hep-lat",
"cond-mat.stat-mech",
"hep-th"
] |
Institut für Theoretische Physik, Universität Heidelberg,
Philosophenweg 19, 69120 Heidelberg, Germany
We study the 3-component ϕ^4 model on the simple cubic lattice
in presence of a cubic perturbation. To this end, we perform
Monte Carlo simulations in conjunction with a finite size scaling
analysis of the data. First we identify the line of slow flow.
The analysis of the RG-flow on this line provides us with the
accurate estimate Y_4 - ω_2 =0.00081(7) for the difference
of the RG-eigenvalue Y_4 at the O(3)-invariant fixed point and
the correction exponent ω_2 at the cubic fixed point.
Field theory predicts that depending on the sign of the
cubic perturbation, the
RG-flow is attracted by the cubic fixed point, or runs to an ever
increasing amplitude, indicating a fluctuation induced first
order phase transition. We demonstrate directly the
first order nature of the phase transition for a sufficiently strong
breaking of the O(3) symmetry. We obtain accurate results for
the latent heat, the correlation length in the disordered phase
at the transition temperature and the interface tension
for interfaces between one of the ordered phases and the
disordered phase. We study how these quantities scale with the
RG-flow, allowing quantitative predictions for weaker
breaking of the O(3) symmetry.
The ϕ^4 lattice model with cubic symmetry in three dimensions:
RG-flow and first order phase transitions
Martin Hasenbusch
August 12, 2023
===========================================================================================================
§ INTRODUCTION
We study the ϕ^4 model with a cubic anisotropy in three dimensions.
We focus on the case of N=3 components of the field, which is
particularly interesting, since it is the experimentally most relevant case and
the cubic perturbation is very close to marginal at the O(3)-invariant
fixed point.
The model has been studied intensively over the last five decades,
using field theoretic methods such as the ϵ-expansion and perturbation
theory in three dimensions fixed. For a review see for example Sec. 11.3 of
ref. <cit.>.
Note that in structural transitions, in addition to N=3,
N=4 might be experimentally realized <cit.>.
Recently the ϵ-expansion has been extended to 6-loop <cit.>.
Based on this, a huge set of operator dimensions has been computed in
ref. <cit.>.
In the field theoretic setting,
the reduced continuum Hamiltonian with two quartic couplings
H = ∫ d^d x {1/2∑_i=1^N [(∂_μϕ_i)^2
+r ϕ_i^2] + 1/4!∑_i,j=1^N (u + v δ_ij)
ϕ_i^2 ϕ_j^2 } ,
where ϕ_i is a real number, is studied,
(see for example eq. (11.10) of ref. <cit.>). Flow equations in a
two-dimensional parameter space (u,v) are discussed. For v=0, the theory
is O(N)-symmetric, while for finite v, the theory has only cubic symmetry.
The qualitative features of the flow are well understood.
There are four fixed points:
The Gaussian (u,v)=(0,0), the decoupled Ising (0,v^*),
the O(N)-symmetric (u,v)=(u^*,0) and the
fixed point with cubic symmetry only (u,v)=(u_c,v_c), where v_c>0.
The Gaussian and the decoupled Ising fixed points are unstable for all values
of N and N>1, respectively. The O(N)-symmetric fixed point is unstable
for N ≥ N_c in one direction, breaking the O(N) invariance.
Recent field theoretic estimates give robustly N_c slightly smaller than 3.
The result N_c<3 is supported by the fact that in a finite size
scaling analysis of Monte Carlo data for the improved ϕ^4 model on the
simple cubic lattice the authors find Y_4 = 0.013(4) for
N=3 <cit.>. In ref. <cit.> Y_4=0.0142(6) had been obtained.
The rigorous bound Y_4 > 3-2.99056 for N=3 was recently established
by using the conformal bootstrap (CB) method <cit.>. Note that Y_4 is
the RG-exponent of the cubic perturbation and Y_4>0 means that the
perturbation is
relevant and hence the RG fixed point is unstable. The cubic fixed point is
stable for N>N_c and for v>0 the flow runs into the cubic fixed point.
On the contrary, for v<0, the flow runs to ever larger violations of the
O(N)-symmetry and no fixed point is reached. Instead a fluctuation induced
first order phase transition is expected.
In ref. <cit.> we performed large scale Monte Carlo simulations of a
lattice version of the ϕ^4 model. We have studied the cases N=3 and
4. In ref. <cit.> we focus on the neighborhood of the fixed points.
By using finite size scaling, we computed accurate estimates of
critical exponents for the cubic fixed point. In the case N=3 these differ
only little from their O(N)-symmetric counterparts.
In the present work, we extend the study of the flow towards stronger violations
of the O(3) symmetry. On the one hand, we make contact with the decoupled
Ising fixed point. On the other hand, for v<0, for large violations of the
O(3) symmetry, we demonstrate directly the first order nature of the
transition. In our simulations we determine characteristic quantities such as
the latent heat, the correlation length at the transition temperature,
and the interface tension between the disordered and one of the ordered phases.
We study how these quantities scale with the RG flow. This way, quantitative
predictions can be made for all v<0.
The outline of the paper is the following: In the next section we define
the model and the observables that we measure.
In section <ref> we extend the study of the RG-flow of ref.
<cit.> towards a stronger breaking of the O(3) invariance. First
we identify the line of slow flow. Then the flow on this line is studied.
In Sec. <ref> we discuss our numerical results for the first
order transitions. Finally, in Sec. <ref> we summarize and conclude.
§ THE MODEL AND OBSERVABLES
Here we study the same reduced Hamiltonian and observables as in
ref. <cit.>. For completeness let us recall the definitions.
We study a discretized version of the continuum Hamiltonian (<ref>),
which is considered in field theory.
We extend the reduced Hamiltonian of the ϕ^4 model on a simple cubic
lattice, see for example eq. (1) of ref. <cit.>,
by a term proportional to
∑_a Q_4,a a a a(ϕ⃗ ) = ∑_aϕ_x,a^4
- 3/N+2( ϕ⃗_x^ 2)^2 ,
with cubic symmetry, breaking O(N) invariance.
Note that Q_4 is the traceless symmetric combination of four instances of
the field, see for example eq. (7) of ref. <cit.>. We get
H({ϕ⃗ })= -β∑_<xy>ϕ⃗_x ·ϕ⃗_y
+ ∑_x [ ϕ⃗_x^ 2 + λ (ϕ⃗_x^ 2 -1)^2
+ μ (∑_aϕ_x,a^ 4
-3/N+2( ϕ⃗_x^ 2)^2 ) ]
,
where
ϕ⃗_x is a vector with N real components.
The subscript a denotes the components of the field and
{ϕ⃗ } is the collection of the fields at all sites x.
We label the sites of the simple cubic lattice by
x=(x_0,x_1,x_2), where x_i ∈{0,1,…,L_i-1}. Furthermore,
<xy> denotes a pair of nearest neighbors on the lattice.
In our study, the linear lattice size L=L_0=L_1=L_2 is equal in all
three directions throughout. We employ periodic boundary conditions.
The real numbers β, λ and μ are the parameters of the
model. Note that here λ and μ take over the role of the
parameters u and v of the continuum Hamiltonian.
In eq. (<ref>) the components of the field
decouple for λ - 3/(N+2) μ = 0. Since the term
∑_x ϕ⃗_x^ 2 has the factor (1-2 λ) and
∑_x ∑_a ϕ_x,a^4 the factor μ = (N+2)/3 λ
in front, a rescaling of the field ϕ_x is needed to match with the
Hamiltonian
H({ϕ})= -β̃∑_<xy>ϕ_x ϕ_y
+ ∑_x [ ϕ_x^2 + λ̃(ϕ_x^2 -1)^2
]
,
considered for example in ref. <cit.>, where ϕ_x is a real
number. We arrive at the equations
(1-2 λ) = (1-2 λ̃) c ,   (N+2)/3 λ = λ̃ c^2
and hence
6/(N+2) λ̃ c^2 + (1-2 λ̃) c - 1 = 0
with the solutions
c = [-(1-2 λ̃) ± √((1-2 λ̃)^2 + 24/(N+2) λ̃)] / [12/(N+2) λ̃] ,
where we take the positive solution. Plugging in λ̃^*=1.1(1)
<cit.> we arrive at c=1.436(15) for N=3. Note that
λ̃^* denotes the value of λ̃, where leading
corrections to scaling vanish. Hence we get for the improved decoupled
model
λ^*_DI = 1.36(15) and μ^*_DI = (N+2)/3 λ^*_DI = 2.27(25).
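For concreteness, this matching can be checked numerically. A minimal Python sketch (not part of the analysis code of this work; it simply inserts the central value λ̃^* = 1.1 quoted above):

```python
import numpy as np

N = 3
lam_t = 1.1  # central value of the improved coupling lambda-tilde* of the one-component model

# positive solution of  6/(N+2) lam_t c^2 + (1 - 2 lam_t) c - 1 = 0
a = 6.0 / (N + 2) * lam_t
b = 1.0 - 2.0 * lam_t
c = (-b + np.sqrt(b * b + 4.0 * a)) / (2.0 * a)

lam_DI = 0.5 * (1.0 - (1.0 - 2.0 * lam_t) * c)   # from (1 - 2 lam) = (1 - 2 lam_t) c
mu_DI = (N + 2) / 3.0 * lam_DI

print(c, lam_DI, mu_DI)   # roughly 1.436, 1.36, 2.27
```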
§.§ The observables and dimensionless quantities
Dimensionless quantities or phenomenological couplings play a central
role in finite size scaling.
Similar to the study of O(N)-invariant models, we study
the Binder cumulant U_4, the ratio of partition functions Z_a/Z_p and
the second moment correlation length over the linear lattice size
ξ_2nd/L. Let us briefly recall the definitions of the observables
and dimensionless quantities that we measure.
The energy of a given field configuration is defined as
E= ∑_<xy>ϕ⃗_x ·ϕ⃗_y .
The magnetic susceptibility χ and the second moment correlation length
ξ_2nd are defined as
χ≡1/V⟨(∑_x ϕ⃗_x )^2 ⟩ ,
where V=L^3 and
ξ_2nd ≡ √( (χ/F - 1) / (4 sin^2(π/L)) ) ,
where
F ≡1/V⟨|∑_x exp(i 2 π x_k/L)
ϕ⃗_x |^2
⟩
is the Fourier transform of the correlation function at the lowest
non-zero momentum. In our simulations, we have measured F for the three
directions k=0,1,2 and have averaged these three results.
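As an illustration of how these estimators can be evaluated, a minimal Python sketch follows; the storage of the configurations as a NumPy array of shape (n_conf, L, L, L, N) is an assumption made only for this example and does not reflect the program used here:

```python
import numpy as np

def chi_F_xi2nd(phis):
    # phis: field configurations, assumed shape (n_conf, L, L, L, N)
    n_conf, L = phis.shape[0], phis.shape[1]
    V = L ** 3
    M = phis.sum(axis=(1, 2, 3))                       # sum_x phi_x, shape (n_conf, N)
    chi = np.mean(np.sum(M * M, axis=1)) / V

    # Fourier transform of the correlation function at the lowest nonzero momentum,
    # averaged over the three lattice directions k = 0, 1, 2
    phase = np.exp(2j * np.pi * np.arange(L) / L)
    F = 0.0
    for k in range(3):
        shape = [1, 1, 1, 1, 1]
        shape[k + 1] = L
        Mk = (phis * phase.reshape(shape)).sum(axis=(1, 2, 3))
        F += np.mean(np.sum(np.abs(Mk) ** 2, axis=1)) / V
    F /= 3.0

    xi2nd = np.sqrt((chi / F - 1.0) / (4.0 * np.sin(np.pi / L) ** 2))
    return chi, F, xi2nd
```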
The Binder cumulant U_4 is given by
U_4≡⟨ (m⃗^2)^2 ⟩/⟨m⃗^2⟩^2,
where m⃗ = 1/V ∑_x ϕ⃗_x is the
magnetization of a given field configuration. We also consider the ratio
R_Z≡ Z_a/Z_p of
the partition function Z_a of a system with anti-periodic boundary
conditions in one of the three directions and the partition function
Z_p of a system with periodic boundary conditions in all directions.
This quantity is computed by using the cluster algorithm.
For a discussion see Appendix A 2 of ref. <cit.>.
In order to detect the effect of the cubic anisotropy we
study
U_C = ⟨∑_a Q_4,aaaa(m⃗) ⟩/⟨m⃗^ 2⟩^2 .
In the following we shall refer to the RG-invariant
quantities U_C, U_4, Z_a/Z_p and ξ_2nd/L by
using the symbol R.
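A corresponding sketch for U_4 and U_C, assuming the per-configuration magnetization vectors are collected in an array M of shape (n_conf, N); the overall normalization of m⃗ drops out of both ratios:

```python
import numpy as np

def binder_and_cubic(M):
    # M: magnetization vectors per configuration, shape (n_conf, N)
    N = M.shape[1]
    m2 = np.sum(M * M, axis=1)
    U4 = np.mean(m2 ** 2) / np.mean(m2) ** 2
    Q4 = np.sum(M ** 4, axis=1) - 3.0 / (N + 2) * m2 ** 2
    UC = np.mean(Q4) / np.mean(m2) ** 2
    return U4, UC
```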
In our analysis we need the observables as a function of β in
some neighborhood of the simulation point β_s. To this end we have
computed the coefficients of the Taylor expansion of the observables
up to the third order.
In the case of decoupled systems, λ - 3/(N+2) μ = 0, we can express
the dimensionless quantities introduced above in terms of their Ising
counterparts.
For example
U_C = (N-1)/[N (N+2)] (U_4,Ising - 3) .
Hence we get for the fixed point value, which is indicated by ^*
U_C,DI^* = (1.60359(4) - 3) (N-1)/[N (N+2)] = -1.39641(4)
(N-1)/[N (N+2)]
using the result of <cit.> for U_4,Ising^*. Furthermore
(Z_a/Z_p)^*_DI = ((Z_a/Z_p)^*_Ising)^N, U_4,DI^* =
1/N U_4,Ising^* + (N-1)/N, and
(ξ_2nd/L)^*_DI = (ξ_2nd/L)^*_Ising, where the subscript
DI indicates the decoupled Ising fixed point.
§ FINITE SIZE SCALING STUDY OF THE RG-FLOW
As first step we repeat the analysis of section VII of ref. <cit.>
with more data. We have added new pairs of (λ,μ), in particular
for relatively large values of |μ|. Furthermore,
for pairs (λ,μ) already studied in ref. <cit.>
we improved the statistics and added larger lattice sizes.
We identify the line of slow flow in
the (λ,μ) plane. This can be viewed as a generalization of
identifying improved models. To this end, we analyze our data for
dimensionless quantities by using the Ansatz (<ref>) discussed below.
Then we study the flow on this line, where we focus on the behavior of
U_C.
§.§ Locating the line of slow RG-flow
At least for small values of μ, the RG-flow is extremely slow along
a certain line, and therefore it is a good approximation to treat it, in some respect, as
a line of fixed points. This results in the Ansatz, eq. (35) of ref. <cit.>,
R_i(β_c,λ,μ,L) =R_i^* + r_i,4 w_4(λ,μ) L^-ω
+ ∑_m=2^m_max c_i,m U_C^m(β_c,λ,μ,L)
+ ∑_j a_i,j L^-ϵ_j
for dimensionless quantities, allowing for contributions that are O(μ^2). These O(μ^2) contributions
are taken into account by the term
∑_m=2^m_max c_i,m U_C^m(β_c,λ,μ,L). Note
that U_C is O(μ). In our approximation, we assume that the flow
can be separated into a slow and a fast part that do not mix. Furthermore,
the exponent ω is assumed to be constant, taking the value of
the O(3)-invariant fixed point. Terms proportional to L^-2 ω
and higher are not taken into account, since for our data the correction
is small. The last term of eq. (<ref>) takes into account subleading
corrections with ϵ_j ⪆ 2. Also these correction
exponents are assumed to be constant.
In the fits performed here,
we took two exponents ϵ_1=2-η and ϵ_2=2.023, corresponding
to the analytic background of the magnetization and the breaking of the
rotational invariance by the lattice. In the case of the analytic background,
we assume that the coefficients depend linearly on λ and quadratically
on μ. The coefficients for the breaking of the rotational invariance are
taken as a constant.
The Ansatz (<ref>) is used for joint fits of Z_a/Z_p,
ξ_2nd/L, and U_4.
In a first series of fits, we took w_4(λ,μ) as a free parameter
for each value of (λ,μ). We performed a number of fits, varying
the range of μ that is taken into account, the maximal power
m_max of U_C and, as usual, the minimal linear lattice size L_min
taken into account.
The different sets of data that we analyzed are mainly characterized by the
range of μ that is taken. For the smallest set of data, |μ| ≤ 1.2
is taken, while for the largest -1.8 ≤μ≤ 2.2 is taken. The largest
set contains 64 different pairs of (λ,μ).
For negative μ we used the additional cut U_C ⪅ 0.4.
As a result, for (λ,μ)=(3.4,-1.8), (3.0,-1.663) and (2.7,-1.552)
only linear lattice sizes up to L=48 are used in the fit.
In the case of our largest data set, we performed fits with m_max≤ 9.
For m_max=9, we get χ^2/DOF=1.070, 1.045, and 1.002 for
L_min=20, 24, and 28, respectively.
The corresponding p-values are 0.049, 0.154, and 0.478.
For smaller ranges of μ, acceptable fits were obtained already for smaller
L_min. For example for L_min=16, |μ| ≤ 1.2, and m_max=6
we get χ^2/DOF=1.052, corresponding to p=0.103.
Note that in ref. <cit.> we have used m_max=5 at most and
fitted data for |μ| ≤ 1.
Next we used the parameterizations
w_4(λ,μ) = a (λ - λ^* - c μ^2 -d μ^3)
(1+e (λ-5.0))
and
w_4(λ,μ) = a (λ - λ^* - c μ^2 -d μ^3 -e μ^4)
(1+f (λ-5.0) )
for the correction amplitude.
Here acceptable fits were only obtained for data sets with a range
up to -1.566 ≤ μ ≤ 1.5.
Various acceptable fits, using eq. (<ref>), are consistent with
the estimates
λ^* = 5.12(5), c=-0.8(1), d=0.06(1), and e=0.01(4) for the
line of slow flow.
In Fig. <ref> we plot the line of slow flow as characterized
by eq. (<ref>) with the numerical values of the parameters
given above. In addition we plot the pairs of (λ,μ) we have simulated
at. The pairs of (λ,μ) with a small
correction amplitude w_4 are shown as solid circles.
Here, small means that in fits without
parameterization of w_4, the modulus of the value of w_4 is at most a few
times the error of w_4. The improved point of the decoupled Ising system
is obtained from λ̃^*=1.1(1) for the one-component
ϕ^4 model on the simple cubic lattice <cit.>
as discussed in section <ref>.
Finally the pairs of (λ,μ), where, below in section <ref>,
we demonstrate directly that the transition is first order, are plotted.
Our results for the dimensionless quantities are fully consistent with those
obtained in ref. <cit.>. The estimates of c_i,m, summarized
in table <ref>, are more accurate now. The error bars are taken
such that the results of five different acceptable fits are covered.
We give results up to m=4. For larger m,
the values of c_i,m differ substantially between different fits.
Selected estimates of the inverse transition
temperature β_c are provided as supplementary material.
§.§ Flow equation for U_C
We consider U_C as a phenomenological coupling. We study how
U_C, at the transition, behaves as the linear lattice size L is varied.
Here we borrow ideas from ref. <cit.>, where the flow of a
dimensionless quantity in an asymptotically free theory is discussed.
To stay in the neighborhood of the transition,
for each lattice size L, we take U_C at β_f, where
β_f is chosen such that another dimensionless quantity R_i assumes
a certain value R_i,f: R_i(β_f)=R_i,f. In the following, U_C always
refers to this value taken at β_f.
For convenience we take the ratio of partition functions Z_a/Z_p as
second dimensionless quantity. Our
choice for the fixed value is (Z_a/Z_p)_f=0.19477, which is the
estimate of the fixed point value for the Heisenberg universality class
<cit.>. Note that for any value of R_i,f in the range of R_i,
for a second order phase transition, β_f converges to β_c
as the linear lattice size L increases. For R_i,f=R_i^* the convergence
is the fastest.
Motivated by the results of the previous subsection, we furthermore
consider
R̃_i(β_f) = R_i(β_f) - ∑_j=2^m_max c_i,j U_C^j(β_f) = R_i,f ,
where c_i,j and m_max are fixed. The idea of using eq. (<ref>)
is that, in particular for large values of |U_C|, the convergence
of β_f with increasing L is improved. As above, we
take the ratio of partition functions Z_a/Z_p as dimensionless quantity and
(Z_a/Z_p)_f = 0.19477. In our numerical analysis we use m_max=6.
The values c_Z_a/Z_p,2=-0.61 and c_Z_a/Z_p,3=2.1 are taken
from the fits discussed above, while
c_Z_a/Z_p,4=-2.9, c_Z_a/Z_p,5=-10, and c_Z_a/Z_p,6=20.8
are chosen such that for large |U_C| certain requirements to
be discussed below are fulfilled.
Before we proceed with the numerical analysis,
let us discuss the two limiting cases:
* The first order transition and a linear lattice size L ≫ξ_high,
where ξ_high is the correlation length in the high temperature phase
at the transition temperature.
* The decoupled Ising system and its neighborhood.
The value of U_C at the first order transition, for large lattice
sizes, can be obtained as follows: In the high temperature phase, the
fluctuations of the order parameter are Gaussian and hence
U_C vanishes. In the low temperature phase, the order parameter
assumes a finite value m for one component only. Hence
U_C = (m^4- 3 m^4/N+2)/m^4 = N-1/N+2
for the low temperature phase.
Taking into account that the different phases have the same weight
at the transition temperature,
we arrive at U_C = 2 N (N-1)/(2 N+1) (N+2)
at the transition temperature. Note that the ordered, low temperature,
phase is 2 N fold degenerate.
The ratio of partition functions assumes the value Z_a/Z_p=0 and Z_a/Z_p=1
in the limit L →∞ in the low and the high temperature phase,
respectively. Taking into account the degeneracy of the ordered phase,
we arrive at Z_a/Z_p=1/(2 N+1).
The numerical results discussed below in Sec. <ref> show
that at the inverse
transition temperature β_t, with increasing linear lattice size L,
these limiting values are not approached monotonically. One finds a small
under- and overshooting for Z_a/Z_p and U_C, respectively.
Studying the flow of U_C, we stay in a range of L, where it is
monotonic.
In section <ref>, eq. (<ref>), we express U_C of the
decoupled Ising system in terms of the Binder cumulant U_4 of a single
Ising system. The fixed point value is given in eq. (<ref>).
We can apply eq. (<ref>) to other constraints on the system. For
example fixing Z_a/Z_p=0.19477 in the decoupled Ising system corresponds
to Z_a/Z_p=0.19477^(1/3)=0.5796609... in a single Ising system.
Reanalyzing our data obtained in connection with ref. <cit.> we
get U_4 ≈ 1.6602 for (Z_a/Z_p)_f=0.5796609,
corresponding to
U_C ≈ -0.1786 for the decoupled Ising system.
Now we are in the position to discuss how the
coefficients c_Z_a/Z_p,4, c_Z_a/Z_p,5, and c_Z_a/Z_p,6 in
eq. (<ref>) are chosen. Let us view
(Z_a/Z_p)_mod,f(U_C) =
∑_j=2^m_max c_Z_a/Z_p,j U_C^j + (Z_a/Z_p)_f
as a function of U_C. The coefficients are chosen such that
(Z_a/Z_p)_mod,f(U_C)
* is monotonically
decreasing with increasing U_C in the range 0 < U_C ⪅ 0.4.
* Assumes roughly the numerical value found for Z_a/Z_p at the transition
temperature for U_C ≈ 0.4.
* is monotonically decreasing with decreasing U_C in the range 0 > U_C ≥
U_C,DI.
* Assumes the decoupled Ising value of Z_a/Z_p for U_C,DI.
The flow of U_C is characterized by
u(U_C) = (1/U_C) dU_C/d ln L .
Here we have introduced the factor 1/U_C
for numerical convenience. Let us first discuss the behavior of u
in the neighborhood of the decoupled Ising fixed point.
The decoupled Ising fixed point is unstable with an RG-exponent
y = α_I y_t,I = 2 y_t,I - d =
d - 2 Δ_ϵ, I = 0.17475(2) ,
where α_I and y_t,I are the specific heat and the thermal
RG-exponent of the three-dimensional
Ising universality class, respectively <cit.>.
The numerical value of the scaling dimension
Δ_ϵ = 1.412625(10) is taken from ref.
<cit.>. In the neighborhood of
the decoupled Ising fixed point, U_C,DI behaves as
U_C(L) = U_C,DI + ϵ_0 (L/L_0)^y + ... .
Hence
u(U_C,DI + ϵ_0) ≈ U_C,DI^-1 [ϵ_0 (L/L_0)^y - ϵ_0]/Δln L ≈ U_C,DI^-1 ϵ_0 y Δln L/Δln L
= U_C,DI^-1 ϵ_0 y ,
where Δln L = ln L - ln L_0.
In order to keep corrections to scaling small, we analyze data obtained
for (λ,μ) close to the line of slow flow.
In ref. <cit.> we estimated
u by fitting data for fixed (λ,μ) by using the Ansatz
U_C(λ,μ,L) = a L^u
or as check
U_C(λ,μ,L) = a L^u (1+c L^-2)
for some range L_min≤ L ≤ L_max. As argument of u we took
[U_C(L_min) + U_C(L_max)]/2.
The approximations (<ref>,<ref>)
rely on the fact that U_C varies only little
in the range of linear lattice sizes considered.
For (λ,μ), where U_C changes considerably
over the range of lattice sizes L that we simulate, we now take instead
u([U_C(L_2)+U_C(L_1)]/2) =
2/[U_C(L_2)+U_C(L_1)] · [U_C(L_2)-U_C(L_1)]/log(L_2/L_1)
as approximation. Here L_1 and L_2 are lattice sizes we simulated at and
L_2 is the smallest with L_2 > L_1.
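In code, eq. (<ref>) is a one-line helper; the following Python fragment is purely illustrative:

```python
import numpy as np

def u_finite_difference(UC1, UC2, L1, L2):
    # UC1, UC2: values of U_C at beta_f for the lattice sizes L1 < L2
    mid = 0.5 * (UC1 + UC2)
    return mid, (UC2 - UC1) / (mid * np.log(L2 / L1))
```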
In the following we discuss our numerical results obtained by using
eq. (<ref>) to get β_f. First we check that estimates of u,
eq. (<ref>),
obtained for different values of (λ,μ) fall on a unique curve,
up to small deviations that can be interpreted as corrections.
First we compare the estimates
obtained from two different pairs of (λ,μ) that give approximately
the same values of U_C for the same lattice size L.
The leading correction should differ between these two pairs.
In Fig. <ref> we plot our estimates of u for
(λ,μ)= (3.4,-1.566) and (3.0,-1.449). For example, from the fit
with the Ansatz (<ref>), taking m_max=9, and
our largest set of data, we get for L_min=24 the
estimates w_4=-0.0038(5) and 0.0017(5), respectively. In particular
the difference between these two estimates of w_4 is very stable when
varying the parameters of the fit. In both cases, estimates of u
obtained from the linear lattice sizes L=12, 16, 24, 32, 48, 64,
and 96 are shown.
We find that the results obtained for (L_1,L_2) = (12,16) and (16,24)
for the two pairs of (λ,μ)
clearly differ by more than the statistical error. However, the difference
is small compared with the value of u.
In Fig. <ref> we plot our estimates of u
computed by using eq. (<ref>)
obtained for 5 different pairs of (λ,μ), which are approximately
on the line of slow flow. For small linear lattice sizes, we expect
that subleading corrections are the numerically dominant corrections.
It is quite clear from the plot
that estimates obtained from (L_1,L_2)=(12,16) are too large compared with
the asymptotic value. For (L_1,L_2)=(24,32) the inspection by eye does not
show such a deviation, suggesting that corrections to scaling are at
most at the level of the statistical error at this point.
In Fig. <ref> we plot estimates of u obtained by using
eq. (<ref>) with L_min=24 and eq. (<ref>) as a
function of
U_C. Furthermore we give u=0 for the decoupled Ising
point at U_C=-0.186188(5) and the behavior of u in the
neighborhood of the decoupled Ising point. For
|u| ⪅ 0.1 the estimates obtained by using
eq. (<ref>) and eq. (<ref>) are consistent. For
u>0.15 we see clear differences. For small |u|, the statistical
error is quite large for eq. (<ref>). This is partially due to the
fact that we simulated for more values of L in the same range of lattice
sizes than for larger |u|.
The behavior for U_C ⪅ -0.15 seems to be
consistent with the predictions for the decoupled Ising system and its
neighborhood.
We analyze the numerical results by fitting with the Ansatz
u = ∑_i=0^n a_i U_C^i ,
where we have taken n=3, 4 and 5 here. In our preliminary analysis,
we experimented with various approaches. For example we combined data
for |μ| ≤ 0.6 analyzed by using eq. (<ref>) with
data for |μ| > 0.6 analyzed by using eq. (<ref>).
In our final analysis, we use for simplicity only data with
|μ| ≥ 0.6 analyzed by using eq. (<ref>).
In the fit, the covariances that are caused by the fact that the
numerical result for U_C(L) might appear in two differences,
one with a smaller and one with a larger lattice size, are taken into
account.
In our largest set of data we included (λ,μ)=(2.7,-1.552),
(3.0,-1.449),
(3.7,-1.33), (3.8,-1.2), (4.0,-1.2), (4.2,-1.1), (4.3,-1.0),
(4.4,-0.9), (4.5,-0.8), (4.7,-0.7), (4.7,-0.6), (4.7,0.6),
(4.7,0.7), (4.5,0.8), (4.3,1.0), (4.0,1.2), (3.4,1.5),
(2.6,1.9), (2.2,2.0), (1.9,2.1), and (1.65,2.2). In the case of
(λ,μ)=(2.7,-1.552) we skipped the lattice sizes L>64, since
the value of U_C is too large. For example, fitting with n=5,
and L_1,min=48 we get χ^2/DOF=0.965 corresponding to p=0.524 and
a_0=0.01441(66), a_1=0.8136(72), a_2=2.004(55), a_3=-8.7(4),
a_4=12.9(1.9), and a_5=-18.1(2.5). Furthermore
Y_4-ω_2=0.00081(6) and U_C^* = -0.0186(8). Note that
U_C^* is obtained from a numerical search for the zero of u, and
ω_2 is obtained from the slope of u at this zero.
In order to estimate the effects of corrections, we varied the minimal
lattice size L_1,min that is taken into account. We used L_1,min =24,
32 and 48. Furthermore we varied the maximal |U_C| and |μ| that
is taken into account. Our final results and their error bars
are chosen such that the results of four different acceptable
fits are covered. We get
Y_4=a_0=0.0141(10),
a_1 =0.823(17),
a_2=2.21(26),
a_3=-9.6(1.3),
U_C^* = -0.0181(14),
ω_2=0.0133(9), and Y_4-ω_2=0.00081(7), which we consider
as our final results. Note that our results are fully consistent with those
of ref. <cit.>. Here we are more conservative estimating errors.
We repeated the analysis, replacing (λ,μ)=(3.7,-1.33),
(3.8,-1.2), (4.0,-1.2), (4.2,-1.1), and (4.3,-1.0) by
(λ,μ)=(3.0,-1.663), (3.4,-1.8), (3.7,-1.5), (4.0,-1.2),
and (4.5,-1.0). For these values of (λ,μ) the amplitude |w_4|
of corrections is larger than for the replaced ones. The results do not
change significantly. Finally, we have repeated the analysis for
U_C defined by (Z_a/Z_p)(β_f)=0.19477 instead of
eq. (<ref>). We get fully consistent results, with slightly larger
error bars.
We conclude that the precise definition of β_f is not crucial.
§.§ Matching
For two pairs of parameters (λ_1,μ_1) and (λ_2,μ_2) we determine
a scale factor c by requiring that
U_C,1(L) = U_C,2(c L) ,
where the second subscript indicates the pair of parameters.
This is solved numerically for each linear lattice size that we simulated
for parameter pair one.
In a first step, for μ < 0,
we determine two lattice sizes L_1 and L_2 for the
second parameter pair such that L_2 is the smallest linear lattice size
simulated such that U_C,1(L) ≤U_C,2(L_2)
and L_1 the largest such that
U_C,1(L) ≥U_C,2(L_1).
If such a pair of lattice sizes exists, we interpolate
U_C,2 linearly
in the logarithm of the linear lattice size of the second parameter pair.
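A schematic Python version of this matching step (illustrative only; it assumes that U_C of the second parameter pair increases monotonically with L in the range used, as is the case here for μ < 0):

```python
import numpy as np

def match_scale_factor(L, UC1_at_L, L2_sizes, UC2_values):
    # L, UC1_at_L: lattice size and U_C of parameter pair one
    # L2_sizes, UC2_values: simulated lattice sizes and U_C of parameter pair two
    # interpolate U_C,2 linearly in log(L_2) and solve U_C,1(L) = U_C,2(c L) for c
    log_cL = np.interp(UC1_at_L, UC2_values, np.log(L2_sizes))
    return np.exp(log_cL) / L
```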
In table <ref>, we give the results of the matching for
(λ_1,μ_1)=(2.333,-1.764) and (λ_2,μ_2)=(2.7,-1.552).
Note that for (λ,μ)=(2.333,-1.764)
we find below in Sec. <ref> that ξ_high = 24.70(2) at the
transition temperature. Furthermore, for (λ,μ)=(2.333,-1.764)
we reach linear lattice sizes, where U_C(L)
becomes nonmonotonic. We get U_C = 0.39042(10),
0.41910(4), 0.45781(5), 0.48359(5), 0.51161(6), 0.50655(12)
for L=12, 16, 24, 32, 48, and 64. The linear lattice sizes
given in table <ref>, are still in the range, where
U_C(L) monotonically increases with the linear
lattice size L.
We find that c changes only little with
increasing L. It seems plausible that for L=32 systematic errors
are at most of the same size as the statistical error given
in table <ref>.
As a consistency check we performed the matching for
(λ_1,μ_1)=(2.0,-1.85) and (λ_2,μ_2)=(2.333,-1.764).
Here we get
c=2.0165(30), 2.0168(14), and 2.0139(15) for
L=12, 16, and 20, respectively. This can be compared with the
ratio 24.70(2)/12.135(6)= 2.0354(19)
of the correlation length in the high temperature phase computed below
in Sec. <ref>.
We continued this matching for pairs (λ_1,μ_1) and
(λ_2,μ_2) that are approximately
on the line of slow flow. In table <ref> we report our final
results for the matching factor c. The error bar includes a rough estimate
of the systematic error, obtained from the variation of c with increasing
L. Based on our simulations, we can not proceed to μ > -1, since
we have no pairs of (λ,μ) at hand that have overlapping ranges
of U_C.
In the third column, we give an estimate
of the correlation length in the high temperature phase at the transition
temperature. We start from the direct estimate obtained for
(λ,μ)=(2.333,-1.764) in Sec. <ref> below.
Then we multiply up
the values for c. The error bar is simply computed by adding up the
error due to the previous estimate of ξ and the one due to the uncertainty
of the current value of c. This is done since we do not know how the errors
are correlated.
Going to μ>-1 we can evaluate the flow, eq. (<ref>).
Here we abstain for simplicity from propagating the errors of the coefficients.
Instead we run the integration with the results for the coefficients
a_i, eq. (<ref>), of four different fits.
The spread of the results serves as rough estimate of the error.
Let us first check the consistency with the results given in table
<ref>. Let us consider (λ_1,μ_1)=(3.8,-1.2) and
(λ_2,μ_2)=(4.2,-1.1)
as example, where U_C= 0.179833(39) and 0.139891(65)
for L=64, respectively. Running eq. (<ref>) with the
coefficients obtained from four different fits, we arrive at the estimate
of the scale factor c=4.70(12), which can be compared with
c=1.33(1) × 3.35(4) = 4.46(9) taken from table <ref>.
Next we computed the scale factor c between (λ_1,μ_1)=(3.8,-1.2)
and (λ_2,μ_2)=(4.5,-0.8), (4.7,-0.6), and (5.0,-0.3) by
using eq. (<ref>) as examples.
We get c=562.(26.), 152000.(17000.), and 3.9(1.0) × 10^13,
respectively. Hence the correlation length in the high temperature phase
at the transition temperature should be ξ_high=1960000.(90000.),
5.5(6) × 10^8, and 1.3(3) × 10^17, respectively.
Note that the estimates of the error are only rough ones. Still the order
of magnitude of ξ_high should be correct. It is apparent that the
range of parameters, where the first order transition is very weak,
is large.
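Schematically, integrating the flow amounts to ln c = ∫ dU_C / [U_C u(U_C)] between the two measured values of U_C. A Python sketch using the central values of the coefficients a_i from one of the fits quoted above (no error propagation; the numbers only illustrate the procedure):

```python
import numpy as np
from scipy.integrate import quad

a = [0.01441, 0.8136, 2.004, -8.7, 12.9, -18.1]   # central values a_0 ... a_5 from one fit

def u(UC):
    return np.polyval(a[::-1], UC)

def scale_factor(UC_from, UC_to):
    # ln c = integral of dU_C / (U_C u(U_C)) along the flow
    lnc, _ = quad(lambda x: 1.0 / (x * u(x)), UC_from, UC_to)
    return np.exp(lnc)

# e.g. scale_factor(0.139891, 0.179833) should give roughly c = 4.7
```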
§.§ Effective exponent ν
In ref. <cit.> the authors
suggest that for a weak first order transition, for a large range of
reduced temperatures, the behavior
of the correlation length is similar to that at a second order phase
transition, where however the exponent ν of the O(3)-invariant
Heisenberg universality class is replaced by an effective one that
depends weakly on the reduced temperature. Here we analyze the
finite size scaling behavior of the slope of dimensionless quantities and the
behavior of the infinite volume correlation length in the high temperature
phase.
§.§.§ Finite size scaling
We analyze the slopes S_i of dimensionless quantities
R̃_i = R_i - ∑_j=2^m c_i,j U_C^j
at β_f, where S_i=∂R̃_i/∂β.
The idea is that R̃_i stays approximately constant
with increasing L at the transition temperature, and that this hopefully
also improves the behavior of the slope S_i. We redo the
analysis of section VII D of ref. <cit.> with new data added.
Note that in ref. <cit.> we have used by mistake the wrong sign for
the improvement term
∑_j=2^m c_i,jU_C^j. Here we compare final results
obtained by using different choices of c_i,j.
Since we are interested in the difference compared with the Heisenberg
universality class, we analyze ratios
r_S,i[(λ,μ), (λ_0,0), L] =
S_λ,μ,i(L)/S_λ_0,μ=0,i(L) ,
where i indicates which dimensionless quantity is taken and
λ_0=5.2 or 5.0.
We expect that subleading corrections approximately cancel.
Therefore we analyze the ratio with the simple Ansatz
r_S,i[(λ,μ), (λ_0,0), L] = a L^Δ y_t .
We performed fits for a number of values of (λ,μ) using a minimal
lattice size L_min=16 or 24 that is taken into account.
In Fig. <ref> we plot Δ y_t obtained by using L_min=24
and λ_0=5.2 as a function of U_C. The values of c_i,j
are taken from table <ref> and m=4. Here β_f is obtained from
fixing Z_a/Z_p-∑_j=2^m c_i,j U_C^j=0.19477 using the values of
c_i,j given in table <ref>.
As argument of Δ y_t we take
[U_C(L_max) + U_C(L_min)]/2, where L_max
and L_min are the largest and the smallest lattice size taken into
account in the fit.
We have analyzed the estimates of Δ y_t by using the Ansätze
Δ y_t = b U_C^2 + c U_C^3
and
Δ y_t = b U_C^2 + c U_C^3 + d U_C^4
for the three different dimensionless quantities with different choices
for ∑_j=2^m c_i,jU_C^j.
For c_i,j=0, already the estimate of b depends on the dimensionless
quantity that is considered. For example from a fit with L_min=16 and
λ_0=5.2 we get b=3.94(3), 4.21(3), and 3.55(5) for Z_a/Z_p,
ξ_2nd/L, and U_4, respectively. In all three cases we used the
Ansatz (<ref>) and data for
-0.06 ⪅U_C ⪅ 0.06.
We added successively higher orders of U_C to R̃_i.
We could not identify a clean convergence pattern for the coefficients of
eqs. (<ref>,<ref>).
For the choice of c_i,j used for the data given in Fig. <ref>,
we find that the results for b are more or less the same for the three
different dimensionless quantities. We quote
b = 3.8(2)
as our final result. For c, the results depend clearly on the
dimensionless quantity that is considered. Certainly deeper theoretical insight
is needed to decide whether a unique effective exponent ν_eff can be
obtained from finite size scaling.
Note that at the cubic fixed point, for finite m and any choice of
c_i,j, one should get in the limit L→∞ a unique value for
Δ y_t, not depending on the choice of the dimensionless quantity.
Indeed, analyzing various choices ∑_j=2^m c_i,jU_C^j,
we get similar numerical estimates for Δ y_t at the cubic fixed point
for Z_a/Z_p, ξ_2nd/L, and U_4.
In particular we confirm the numerical results of Sec. VII D of
ref. <cit.>.
§.§.§ Correlation length in the high temperature phase
Here we have simulated the model for the two selected values
(λ,μ)=(4.5,-0.8) and (3.8,-1.2)
in the high temperature phase, where the correlation length can be
determined very accurately by using the improved estimator of the
correlation function that comes with the single cluster algorithm
<cit.>.
For comparison we study λ=5.0 and 5.2 at μ=0.
As estimate of the correlation length we take the effective correlation length
ξ_eff(t) = -1/log(G(t+1)/G(t)) ,
where G(t) = ⟨S⃗(0) ·S⃗(t) ⟩ and
S⃗(x_0) = ∑_x_1,x_2ϕ⃗_x_0,x_1,x_2 .
Computing G(t), we summed over all translations and all three
directions on the lattice.
In the numerical analysis we improved eq. (<ref>) by
taking into account periodic boundary conditions
G(τ) =c [ exp(-τ/ξ_eff) + exp(-(L-τ)/ξ_eff) ] .
Eq. (<ref>), for τ=t and τ=t+1, is solved for
ξ_eff numerically.
It turns out that ξ_eff(t) is rapidly converging with increasing
distance t. As our final estimate we take ξ_eff(t) at
t=2 ξ_eff(t), self-consistently. We take a linear lattice size
L ≈ 20 ξ_eff. We checked that for this lattice size finite
size effects are clearly negligible.
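A possible numerical implementation of this step (an illustrative Python sketch; it assumes t < L/2 and data in the high temperature phase, so that a root exists within the chosen bracket):

```python
import numpy as np
from scipy.optimize import brentq

def xi_eff(G_t, G_tp1, t, L):
    # solve G(tau) = c [exp(-tau/xi) + exp(-(L-tau)/xi)] at tau = t and t+1 for xi
    def f(xi):
        # ratio G(t+1)/G(t) of the two-exponential form, written in a stable way
        num = np.exp(-1.0 / xi) + np.exp(-(L - 2 * t - 1) / xi)
        den = 1.0 + np.exp(-(L - 2 * t) / xi)
        return num / den - G_tp1 / G_t
    return brentq(f, 0.05, 10.0 * L)
```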
We performed simulations for a range of β such that ξ≈ 2
for the smallest value of β and ξ≈ 10 for the largest.
We simulated at 26, 29, 34, and 36 different values of β for
(λ,μ)=(3.8,-1.2), (4.5,-0.8), (5.0,0), and (5.2,0),
respectively.
We performed at least 10^5 update cycles for each simulation.
The update cycle consist of local Metropolis updates and single cluster
updates. We performed roughly as many single cluster updates, such that,
on average, the volume of the lattice is covered.
Assuming that the models are improved, we fitted our data with the simple
Ansatz
ξ=a t^-ν_eff (1 + b t) ,
where we have included leading analytic corrections. Our definition of the
reduced temperature is t=β_t-β. In a way, along with the
range of β that is taken into account in the fit, this defines an
effective value of the correlation length exponent ν.
We took the estimate of β_t from the finite size scaling analysis
discussed above. The parameters of the fit are a, b, and ν_eff.
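Such a fit can be set up, for example, with scipy; the data below are synthetic and generated from the Ansatz itself, so the fragment only illustrates the procedure and is not part of the actual analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def xi_model(t, a, nu_eff, b):
    # xi = a * t^(-nu_eff) * (1 + b t), with t = beta_t - beta the reduced temperature
    return a * t ** (-nu_eff) * (1.0 + b * t)

rng = np.random.default_rng(1)
t = np.array([0.05, 0.04, 0.03, 0.02, 0.015, 0.01])   # hypothetical reduced temperatures
xi_true = xi_model(t, 0.40, 0.71, 0.3)
xi_err = 1e-3 * xi_true
xi = xi_true + rng.normal(0.0, xi_err)

popt, pcov = curve_fit(xi_model, t, xi, sigma=xi_err, p0=[0.4, 0.7, 0.0],
                       absolute_sigma=True)
print("nu_eff =", popt[1], "+/-", np.sqrt(pcov[1, 1]))
```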
Fitting all our data for (λ,μ)=(5.2,0) we get χ^2/DOF=1.237
corresponding to p=0.164 and ν_eff=0.71045(8), which is slightly
too small compared with ν = 0.71164(10) <cit.> or
ν = 0.71169(30) <cit.>.
Discarding small values of β, the fit improves and the value of
ν increases. For example taking β=0.65 with ξ=3.25846(24)
as minimal value, we get χ^2/DOF=1.005 corresponding to p=0.455
and ν_eff=0.71093(22). The small deviation of ν_eff from the
estimates of refs. <cit.> can be attributed to corrections
not taken into account in the Ansatz (<ref>). Analyzing the data
for (λ,μ)=(5.0,0) we get χ^2/DOF=0.999
corresponding to p=0.467 and ν_eff=0.71024(8). Also in this case,
χ^2/DOF decreases, when discarding small values of β and the
value of ν_eff slightly increases. For example for the minimal
value β=0.65 with ξ=3.27436(28) we get χ^2/DOF=0.680,
p=0.870 and ν_eff=0.71095(24).
In summary: Using the simple Ansatz (<ref>) taking data for
ξ≈ 3.3 up to ξ≈ 10 we obtain an estimate of
ν that deviates from the most accurate values for the Heisenberg
universality class, given in the literature, in the fourth digit.
Now let us turn to the data for μ < 0.
For (λ,μ)=(4.5,-0.8) we get χ^2/DOF=1.550,
p=0.0365 taking into account all data. We get ν_eff=0.70235(9).
The quality of the fit does not improve discarding data. We note that
our numerical estimates of ξ are very accurate and less accurate
data might result in an acceptable fit. The estimate of ν_eff is
clearly smaller than those obtained for μ=0. The deviation is about
1 %.
Finally we analyzed our data for (λ,μ)=(3.8,-1.2).
Fitting all data
we get χ^2/DOF=5.054, p=0.000, and ν_eff=0.68382(8).
Here, discarding data, keeping β=0.64 , ξ=3.61272(27) as smallest
value of β we get χ^2/DOF=1.309, p=0.199 and
ν_eff=0.68176(27), which is clearly smaller than the O(3) invariant
value.
Fitting all data up to β=0.648, ξ=4.50359(34), we get
χ^2/DOF=0.692, p=0.733 and ν_eff=0.68519(28). We notice that
the value of ν_eff decreases with decreasing reduced temperature t.
In order to compare the result obtained here with that obtained from finite
size scaling, we take U_C as defined in Sec. <ref>
for L=8 at (λ,μ)=(4.5,-0.8) and (3.8,-1.2).
We estimate U_C ≈ 0.0678 and 0.1177, respectively.
Hence we get y_t,eff≈ 1.4052 + 3.8 U_C^2
≈ 1.4227 and 1.4578, corresponding to ν_eff≈ 0.7029
and 0.6859, respectively. These numbers are in reasonable agreement
with the results obtained from the correlation length in the high
temperature phase.
§ FIRST ORDER PHASE TRANSITION
Here we discuss our simulations for values of (λ,μ), where
the first order transition is sufficiently strong such that it can be detected
directly in the analysis of the data generated in the simulation.
We performed extensive preliminary simulations to get an idea of the
range of (λ,μ), where this is the case.
A first indication of a first order transition is the appearance of
metastabilities in standard simulations. Furthermore,
it is useful to study the histograms of various observables. At first
order transitions, double peak structures appear. These double peaks
become sharper as the linear lattice size increases. The separation
of the peaks is accompanied by an exponential increase of the autocorrelation
time with increasing lattice size, when using standard algorithms.
Below, we briefly discuss our implementation of the multihistogram method
<cit.> that at least mitigates the problem of the increasing
autocorrelation time. For more detailed discussions and alternatives to
the multihistogram method see for example refs.
<cit.>.
Then we discuss our numerical results for the transition temperatures,
the interface tension, the latent heat and the correlation length in the
high temperature phase at the transition. The theoretical basis for the
finite size scaling analysis of first order phase transitions is provided
by refs. <cit.>.
§.§ Multihistogram method
In order to perform simulations for lattices with L ≫ξ_high
at the transition temperature, we employed the multihistogram method
<cit.>. In standard simulations, using a local algorithm,
configurations can be changed only in small steps. Hence, going from
the disordered to an ordered phase and vice versa, the Markov
chain has to pass through configurations in which both phases are present, separated
by interfaces. These configurations are highly suppressed and their weight
is decreasing exponentially with the area of the interfaces. Therefore,
in the simulation these configurations are rarely visited and hence
tunneling times between the phases become larger and larger as the
lattice size increases.
The basic idea of the multihistogram method is to simulate
a modified distribution such that configurations that contain two phases
have an enhanced probability compared with the Boltzmann distribution.
Configurations {ϕ⃗} are generated with a probability distribution
P({ϕ⃗}) = exp(-H[{ϕ⃗}]) W(X[{ϕ⃗}]) / ∑_{ϕ⃗}exp(-H[{ϕ⃗}]) W(X[{ϕ⃗}]) ,
where W(X[{ϕ⃗}]) is a real positive number and X[{ϕ⃗}] is an
estimator of an observable. In our simulations we took the
energy, eq. (<ref>), for this purpose.
Using the multihistogram method the problem of the increasing
tunneling time can be drastically reduced but not completely
eliminated. For a discussion see for example ref. <cit.>.
The expectation value of an estimator A[{ϕ⃗}] with respect to the
Boltzmann distribution is given by
⟨ A ⟩≈∑_i W^-1(X[{ϕ⃗}_i]) A[{ϕ⃗}_i]/∑_i W^-1(X[{ϕ⃗}_i]) ,
where we sum over the configurations that are generated after
equilibration.
The function W(X) should be constructed such that the histogram
becomes essentially flat between the maxima of the Boltzmann distribution.
We construct W(X) as a piecewise constant function:
W(X) = 1 for X < X_0 ,   W(X) = w_i for X_0 + i Δ ≤ X < X_0 + (i+1) Δ ,
and W(X) = 1 for X_1 < X ,
where i ∈{0, 1, ..., M-1 } and Δ=(X_1-X_0)/M. X_0 and X_1
roughly give the position
of the peaks in the histogram. In our simulations 10 ≤ M ≤ 600.
The weights w_i are computed from the histogram. They can
be iteratively improved by using more and more accurate data
for the histogram. For lattice sizes that are not too large,
one gets a few tunnelings between the phases by simulating with the
Boltzmann distribution and one can use these simulations as starting
point for the iterative determination of W(X).
In case one has a reasonable Ansatz for the histogram of X as a function
of the linear lattice size L, one might increment L in small steps.
A first guess for W(X) might be obtained by extrapolating the results
obtained for the lattice sizes simulated before.
Here we did not succeed with such a strategy. Instead, we proceed
without using the knowledge obtained from the simulation of smaller
lattice sizes:
We started with two simulations taking W(X)=1 for all X. These
simulations are started with configurations that are in the domain of
the disordered and the ordered phase, respectively.
For the disordered phase we take
ϕ_x,i = r - 0.5
for all sites x and components i, where r is a uniformly
distributed random number in the interval [0,1).
In the case of the ordered phase we take
ϕ_x,0 = Φ_0 + r - 0.5
and
ϕ_x,i = r - 0.5
for i>0, where Φ_0 is a rough approximation of the expectation
value of the field in the ordered phase.
For sufficiently large lattice sizes L, the probability that the
simulation switches the phase during the simulation is virtually
vanishing.
We compute the histograms of X for these two simulations.
We chose X_0 as the position of the maximum of the histogram
of the disordered simulation and X_1 as the position of the
maximum of the histogram of the ordered simulation. Typically we
get reasonable statistics only up to X_0+ϵ_0 and
down to X_1-ϵ_1. In the middle, there is a gap without any
configuration generated. We compute W(X) up to X_0+ϵ_0
and down to X_1-ϵ_1 straightforwardly from the histogram.
The gap between X_0+ϵ_0 and X_1-ϵ_1 is filled
by linear interpolation. We also experimented with guessing somewhat
larger values of W(X) in the gap to speed up the convergence.
We iterated this step until the gap has closed. Then we proceeded
as above.
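One possible way to organize this iteration in code is sketched below (illustrative Python, not the program used here; filling the gap by interpolating log W is one of several reasonable choices):

```python
import numpy as np

def update_weights(w, hist):
    # w: current piecewise-constant weights w_i
    # hist: histogram of X, measured in the corresponding bins of the run
    #       performed with these weights
    w = np.asarray(w, dtype=float).copy()
    filled = hist > 0
    w[filled] /= hist[filled]            # flatten: new W proportional to old W / histogram
    idx = np.arange(len(w))
    if filled.any() and (~filled).any():
        # fill the gap by interpolating log(w) between filled bins
        w[~filled] = np.exp(np.interp(idx[~filled], idx[filled], np.log(w[filled])))
    return w / w.max()                   # overall normalization is irrelevant

def W_of_X(X, X0, X1, w):
    # evaluate the piecewise-constant weight function W(X)
    M = len(w)
    if X < X0 or X >= X1:
        return 1.0
    i = int((X - X0) / (X1 - X0) * M)
    return w[min(i, M - 1)]
```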
We performed the simulations using a hybrid of local Metropolis, local
overrelaxation and wall cluster <cit.> updates.
The weight W is incorporated into the
accept/reject step of the local algorithms in the straightforward
way. In the case of the wall cluster algorithm, the cluster is
constructed following the same rules as for the plain Boltzmann
distribution. The update of the wall cluster is viewed as a proposal
of a Metropolis step, where the accept/reject step takes into account
the change of W caused by the wall cluster update:
P_acc = min[1, W(X[{ϕ⃗}'])/W(X[{ϕ⃗}])] ,
where {ϕ⃗}' is the configuration that results from the wall cluster
update of {ϕ⃗}.
§.§ Simulations at the first order transition
Based on our preliminary studies we focussed on simulations for the five pairs
of parameters: (λ,μ)=(1.24,-2.3), (1.675,-1.95), (2.0,-1.85),
(3,-2.5), and
(2.333,-1.764). These values were selected such that the correlation
length in the high temperature phase at the transition temperature is about
ξ_high≈ 2, 6, 12, 12, and 24, respectively.
The smaller ξ_high, the stronger is the first order transition.
First, for all pairs of parameters,
we performed simulations with the program used in ref. <cit.>,
generating configurations following the Boltzmann distribution.
It can be used as long as the tunneling times between the phases are
not too large. We performed such
simulations using the linear lattice size L=8 for
(λ,μ)=(1.24,-2.3),
L=12, 16, and 24 for
(λ,μ)=(1.675,-1.95),
L=12, 16, 20, 24, 32, 40, and 48
for (λ,μ)=(2,-1.85), L=12, 16, 20, 24, and 32 for
(3,-2.5), and L=12, 16, 20, 24, 32, 40,
48, and 64 for (λ,μ)=(2.333,-1.764).
We extracted a preliminary estimate of the transition temperature by
requiring that Z_a/Z_p=1/7.
Larger lattice sizes were simulated by using the multicanonical method
as discussed above. We started the detailed study of the transition for
(λ,μ)=(1.24,-2.3), where we simulated the linear
lattice sizes L=8, 12, 16, 24, 28, 32, and 40.
In our program no parallelization is implemented. We employed
trivial parallelization at a moderate level: In the case of
L=40 we performed 5 independent simulations with ordered
and 5 independent simulations with disordered start configurations
in parallel.
In a series of preliminary runs, as discussed above, we determine
the weight function W(X) for the multicanonical simulation.
The results given below are based on simulations using our final
estimate of the weight function. Even when using the multicanonical
simulation, autocorrelation times increase rapidly with increasing
lattice size.
In particular in the case of
larger lattice sizes one has to find a reasonable compromise, when
discarding configurations for equilibration. We took
t_dis≈ 10 τ_ene, where τ_ene is the integrated
autocorrelation time of the energy. We inspected the history
of our simulations by plotting the expectation values of the energy
or the magnetic susceptibility versus the iteration number of the
Markov chain. We find that with this choice of t_dis
a few tunnelings from disorder to order and vice versa are
discarded. Errors are computed by Jackknife binning with
N_bin=20.
The simulations were performed using a value of β slightly
smaller than the preliminary estimate of β_t available
when starting the simulation, giving more weight to the disordered
phase.
For L=40, we performed for each measurement 30 sweeps with a
local update algorithm and 18 wall cluster updates. With our final
version of W(X), we performed 5.5 × 10^7 measurements after
equilibration. These simulations took about 120 days on
a single core of an AMD EPYC^TM 7351P CPU. The integrated autocorrelation
time of the energy is about τ_ene≈ 80000 in units of measurements.
First we computed the inverse transition temperature β_t.
To this end, we determined the location E_min of the minimum of
the histogram of the energy density, reweighted to the Boltzmann distribution
for a preliminary estimate of β_t. Then the estimate of β_t is
computed by requiring that the total weight of configurations with
E ≥ E_min is 2 N=6 times as large as that for E < E_min.
Since the probability density in the neighborhood of E_min is very small,
the estimate of β_t is not very sensitive to the exact choice of
E_min. Preliminary analysis shows that replacing the energy
by, for example, the square of the magnetization leads
to virtually identical results. Our estimates of β_t are summarized
in table <ref>.
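A schematic version of this reweighting step (illustrative Python; E denotes the measured total energies, eq. (<ref>), Winv the stored inverse weights W^-1(X_i) of the multicanonical run at β_s, and the bracketing interval around β_s is an arbitrary choice made only for this sketch):

```python
import numpy as np
from scipy.optimize import brentq

def beta_t_estimate(E, Winv, beta_sim, E_min, N=3):
    # reweight the configurations from beta_sim to beta and find beta_t such that
    # the ordered side E >= E_min carries 2N times the weight of the disordered side
    E = np.asarray(E, dtype=float)
    Winv = np.asarray(Winv, dtype=float)
    ordered = E >= E_min

    def imbalance(beta):
        w = Winv * np.exp((beta - beta_sim) * (E - E.mean()))   # shift by E.mean() for stability
        return np.sum(w[ordered]) - 2 * N * np.sum(w[~ordered])

    return brentq(imbalance, beta_sim - 0.01, beta_sim + 0.01)
```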
One expects that β_t is converging exponentially fast with increasing
lattice size <cit.>. In fact, all estimates obtained for
L ≥ 12 are consistent among each other.
As final result we take β_t=0.3294108(5),
obtained for our largest linear lattice size L=40.
For L=8, simulating the
Boltzmann distribution and requiring that Z_a/Z_p=1/7 we get
β_t=0.329405(9), which is compatible with our final result.
In Fig. <ref> we plot the histograms for the
energy density for the Boltzmann distribution at (λ,μ)=(1.24,-2.3),
β=0.3294108 and the lattice sizes we have simulated.
Histograms at the first order transition can be understood starting from
an effective description
of the configuration space. There are regions on the lattice that can
be assigned to one of the phases. At the transition, all phases have the
same free energy density. These regions are separated by interfaces,
which are characterized by their interface tension σ. The
weight of these sets of configurations is given by the free energy of the
interfaces.
An observable, for example the energy density, takes a certain value
for each of the phases. There is a characteristic variance
of the observable for each of the phases. The peaks in the histogram
are associated with configurations, where only one phase is present.
To understand the histogram between the peaks we have to consider
configurations, where two phases, the disordered phase and one of the
ordered phases, are present. One has to consider configurations
that are predominantly associated with one of the phases and contain
a droplet of the other phase. Furthermore, on a L^3 lattice with
periodic boundary conditions, for large L, the minimum in the histogram
is related to configurations in which the phases are separated by two flat
interfaces with the area L^2.
The reduced free energy of a single interface is
F_I = σ L^2 + c ,
where the constant c takes into account fluctuations of the interface.
Since F_I does not depend on the distance between the interfaces,
the histogram becomes flat at the minimum.
Taking into account the translational invariance we get, up to a constant
prefactor,
z_2I(L) = exp(2 log L - 2 F_I(L))
as weight for the collection
of configurations, where two phases are separated by two flat
interfaces.
We determine z_2I up to a constant prefactor by the value of the
histogram at its minimum. Our numerical results for
2 F_I(L) + C=-log(z_2I(L))+2 log L
are summarized in table <ref>.
In order to determine the interface tension, we take two lattice sizes
L_1 and L_2:
σ = [F_I(L_2) - F_I(L_1)]/[L_2^2 - L_1^2] .
We arrive at
σ=
0.0449(5),
0.0523(3),
0.0579(1),
0.0566(1), and
0.0548(1) for (L_1,L_2)=(8, 12), (12, 16), (16, 24), (24, 32), and
(32, 40). The histograms plotted in Fig. <ref> show only a clean
plateau value between the peaks for L=40 and to a reasonable approximation
for L=32. Therefore we take our estimate obtained for (L_1,L_2)=(32, 40)
as final result.
We performed simulations by using the multicanonical method for
weaker first order phase transitions in a similar
fashion as for (λ,μ)=(1.24,-2.3). The largest lattice
sizes that we have reached are L_max=64, 96, 64, and 128 for
(λ,μ)=(1.675,-1.95), (2.0, -1.85), (3.0,-2.5), and
(2.333,-1.764), respectively. Our final results for β_t and
σ are given in table <ref>.
Here our largest L/ξ_high are smaller than for
(λ,μ)=(1.24,-2.3).
Therefore, systematic errors computing σ by using
eq. (<ref>) are present. We corrected for that by taking
into account the dependence of the estimate of σ seen for
(λ,μ)=(1.24,-2.3) as a function of L/ξ_high.
§.§.§ Correlation length
In order to compute the correlation length in the disordered phase
at the transition temperature, we simulated the model by using the same program
as above in Sec. <ref>. The simulations are started with
ϕ_x,i = r - 0.5 for all sites x and components i, with r again a uniformly distributed random number in [0,1).
The linear lattice size L is chosen such that
the tunneling time to an ordered phase is very large compared with the
length of the simulation. To this end, we use L ≈ 20 ξ_high,
self-consistently. The correlation length is determined as
discussed above in Sec. <ref>. Our final results are summarized
in table <ref>. The errors quoted take the uncertainty
of the inverse transition temperature β_t into account.
Computing the correlation length for the ordered phases turns out
to be considerably more difficult. Here, the connected part of the
correlation function has to be computed. The improved estimator
proposed for models with Z_2 symmetry <cit.> can not be applied.
Furthermore, the effective correlation length converges more slowly
than in the disordered phase. Therefore we computed the
correlation length for the ordered phases only for
(λ,μ)=(1.24,-2.3). We get the rough estimate ξ_low=1.6(1).
§.§.§ Latent heat
We define the latent heat as
Q = 1/L^3(⟨ H⟩_disorder -
⟨ H⟩_order)
taken at the transition temperature β_t. For the
disordered phase, the measurements are taken
from a subset of the simulations done for the correlation length discussed
above. For the ordered phases, we performed simulations with the same
lattice sizes as for the disordered phase. The simulations are started
with a configuration generated by using eqs. (<ref>,<ref>).
Our results are given in table <ref>.
§.§.§ Scaling of the interface tension and the latent heat
As the first order phase transition becomes weaker, the interface tension
and the latent heat decrease. The combination σξ_high^2 should
have a finite limit as the O(3)-invariant fixed point is approached. In
fact we find σξ_high^2 = 0.216(4), 0.267(15), 0.280(15),
and 0.28(3) for (λ,μ)=(1.24,-2.3), (1.675,-1.95), (2.0, -1.85),
and (2.333,-1.764), respectively. As our estimate for the scaling limit,
we quote σξ_high^2 = 0.28(3).
In the case of the latent heat we expect from dimensional analysis that
Q ξ_high^(d-y_t) = Q ξ_high^Δ_ϵ ,
where y_t is the thermal RG-exponent of the Heisenberg
universality class, approaches a finite limit as the
Heisenberg fixed point is approached.
Taking y_t=1.4052(2) <cit.>,
we get Q ξ_high^Δ_ϵ = 3.580(4), 5.324(9),
6.256(6), 6.494(18), and 7.017(27)
for (λ,μ)=(1.24,-2.3), (1.675,-1.95), (2.0, -1.85), (3.0, -2.5)
and (2.333,-1.764), respectively.
Here the convergence is not as convincing as for the interface tension.
We abstain from quoting a result for the limit ξ_high→∞.
§ SUMMARY AND CONCLUSION
We have studied the three component ϕ^4 model on the simple cubic lattice
with a cubic perturbation. Compared with ref. <cit.> we have extended
the study of the RG flow towards stronger breaking of the O(3) invariance.
In a first step, we identify the line of slow flow in the parameters
(λ,μ)
of the reduced Hamiltonian, eq. (<ref>). Here we obtain a more
accurate characterization than in ref. <cit.>. Next we study the
slow flow. To this end, we analyse the scaling behavior of the
dimensionless quantity U_C that quantifies the violation of the O(3)
symmetry. Essentially, we confirm the estimates obtained in
ref. <cit.>. One should note that the range of |U_C| is
complementary to that of ref. <cit.> and the details of the analysis
differ. Therefore the analysis performed here provides a valuable cross-check.
The analysis provides us with an accurate estimate of the difference
Y_4-ω_2=0.00081(7), where Y_4 is the RG-exponent of the cubic
perturbation at the O(3) symmetric fixed point and ω_2 the correction
exponent at the cubic fixed point. We analyze the behavior of the correlation
length in the high temperature phase for two values of (λ,μ) with
μ<0. As suggested in ref. <cit.>, we determine an effective
exponent ν_eff of the correlation length. Our results are in rough
agreement with those obtained in Sec. VII.D. of ref. <cit.>, where
we compute effective exponents from finite size scaling.
In the second part of the study we focus on the first order phase transition.
For a strong breaking of the O(3) invariance we clearly confirm the
first order nature of the transition. Histograms of various observables
show a clear double peak structure. The separation of the two peaks becomes
stronger with increasing lattice size. We obtain accurate estimates of the
latent heat, the correlation length in the disordered phase at the transition,
and the interface tension of interfaces between the disordered and one of the
ordered phases. We analyze how these quantities scale with the RG-flow.
§ ACKNOWLEDGEMENT
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under
the grant HA 3150/5-3.
99
PeVi02
A. Pelissetto and E. Vicari,
Critical Phenomena and Renormalization-Group Theory,
[arXiv:cond-mat/0012164],
Phys. Rept. 368, 549 (2002).
Rong23
J. Rong, Scalar CFTs from structural phase transitions, [arXiv:2303.12028].
epsilon6
Loran Ts. Adzhemyan, Ella V. Ivanova, Mikhail V. Kompaniets, Andrey Kudlis,
and Aleksandr I. Sokolov,
Six-loop ϵ-expansion study of three-dimensional n-vector model
with cubic anisotropy, [arXiv:1901.02754],
Nucl. Phys. B 940, 332 (2019).
BeHeKo23
Alexander Bednyakov, Johan Henriksson, and Stefanos R. Kousvos,
Anomalous Dimensions in Hypercubic Theories,
[arXiv:2304.06755].
O234
M. Hasenbusch and E. Vicari,
Anisotropic perturbations in three-dimensional O(N)-symmetric vector
models,
[arXiv:1108.0491], Phys. Rev. B 84, 125136 (2011).
myCubic
M. Hasenbusch, Cubic fixed point in three dimensions:
Monte Carlo simulations of the model on the simple cubic lattice,
[arXiv:2211.16170], Phys. Rev. B 107, 024409 (2023).
CB_O3
Shai M. Chester, Walter Landry, Junyu Liu, David Poland, David Simmons-Duffin,
Ning Su, and Alessandro Vichi,
Bootstrapping Heisenberg Magnets and their Cubic Instability,
[arXiv:2011.14647], Phys. Rev. D 104, 105013 (2021).
ourHeisen
M. Campostrini, M. Hasenbusch, A. Pelissetto, P. Rossi, and E. Vicari,
Critical Exponents and Equation of State of the Three-Dimensional
Heisenberg Universality Class,
[arXiv:cond-mat/0110336], Phys. Rev. B 65, 144520 (2002).
myPhi4
M. Hasenbusch,
A Monte Carlo study of leading order scaling corrections
of ϕ^4 theory on a three dimensional lattice,
[hep-lat/9902026], J. Phys. A 32, 4851 (1999).
XY1
M. Campostrini, M. Hasenbusch, A. Pelissetto, P. Rossi, and
E. Vicari,
Critical behavior of the three-dimensional XY universality class,
[arXiv:cond-mat/0010360], Phys. Rev. B 63, 214503 (2001).
myIso
M. Hasenbusch,
Restoring isotropy in a three-dimensional lattice model:
The Ising universality class,
[arXiv:2105.09781], Phys. Rev. B 104, 014426 (2021).
running
Martin Lüscher, Peter Weisz, and Ulli Wolff,
A Numerical method to compute the running coupling in asymptotically
free theories, Nucl. Phys. B 359, 221 (1991).
myIco
M. Hasenbusch,
Monte Carlo study of a generalized icosahedral model on the simple cubic
lattice, [arXiv:2005.04448], Phys. Rev. B 102, 024406 (2020).
Sak74
J. Sak, Critical behavior of compressible magnets,
Phys. Rev. B 10, 3957 (1974).
Carmona
J. M. Carmona, A. Pelissetto, and E. Vicari,
The N-component Ginzburg-Landau Hamiltonian with cubic anisotropy:
A Six loop study, [arXiv:cond-mat/9912115],
Phys. Rev. B 61, 15136 (2000).
Kos:2016ysd
F. Kos, D. Poland, D. Simmons-Duffin, and A. Vichi,
Precision Islands in the Ising and O(N) Models,
[arXiv:1603.04436], J. High Energ. Phys. (2016) 036.
AharonyNeu
A. Aharony, O. Entin-Wohlman and A. Kudlis,
Different critical behaviors in cubic to trigonal and tetragonal
perovskites, [arXiv:2201.08252], Phys. Rev. B 105, 104101 (2022).
BeNe92
B.A. Berg, and T. Neuhaus,
Multicanonical ensemble: A new approach to simulate first-order phase
transitions, [arXiv:hep-lat/9202004], Phys. Rev. Lett. 68, 9 (1992).
BeNe91
B.A. Berg and T. Neuhaus,
Multicanonical algorithms for first order phase transitions,
Phys. Lett. B 267, 249 (1991).
WaLa01
F. Wang and D.P. Landau,
Efficient, multiple-range random walk algorithm to calculate the
density of states,
[arXiv:cond-mat/0011174],
Phys. Rev. Lett. 86, 2050 (2001).
WaLa01E
F. Wang and D.P. Landau,
Determining the density of states for classical statistical models:
A random walk algorithm to produce a flat histogram,
[arXiv:cond-mat/0107006],
Phys. Rev. E 64, 056101 (2001).
EaDe05
David J. Earlab and Michael W. Deem,
Parallel tempering: Theory, applications, and new perspectives,
[arXiv:physics/0508111],
Phys. Chem. Chem. Phys. 7, 3910 (2005).
Ja98
W. Janke,
Multicanonical monte carlo simulations,
Physica A: Statistical Mechanics and its Applications 254,
164 (1998).
Ja02
W. Janke,
Histograms and All That,
In:
B. Dünweg, D.P. Landau, A.I. Milchev (eds),
Computer Simulations of Surfaces and Interfaces, NATO Science Series,
Vol. 114, Springer, Dordrecht (2003).
BoKo90
C. Borgs and R. Kotecký,
A rigorous theory of finite-size scaling at first-order phase transitions,
Journal of Statistical Physics 61, 79 (1990).
BoJa92
Christian Borgs and Wolfhard Janke,
New method to determine first-order transition points from finite-size
data,
Phys. Rev. Lett. 68, 1738 (1992).
Bauer10
Bela Bauer, Emanuel Gull, Simon Trebst, Matthias Troyer and David A Huse,
Optimized broad-histogram simulations for strong first-
order phase transitions: droplet transitions in the large-Q
Potts model, [arXiv:0912.1192],
J. Stat. Mech. (2010) P01020.
Wolff
U. Wolff,
Collective Monte Carlo Updating for Spin Systems,
Phys. Rev. Lett. 62, 361 (1989).
HaPiVi
M. Hasenbusch, K. Pinn, and S. Vinti,
Critical Exponents of the 3D Ising Universality Class From Finite Size Scaling
With Standard and Improved Actions,
[arXiv:hep-lat/9806012], Phys. Rev. B 59, 11471 (1999).
Ha16
M. Hasenbusch,
Variance-reduced estimator of the connected two-point function in the
presence of a broken Z_2-symmetry, [arXiv:1512.02491],
Phys. Rev. E 93, 032140 (2016).
|
http://arxiv.org/abs/2307.06272v1 | 20230712161637 | Exposing the Fake: Effective Diffusion-Generated Images Detection | [
"Ruipeng Ma",
"Jinhao Duan",
"Fei Kong",
"Xiaoshuang Shi",
"Kaidi Xu"
] | cs.CV | [
"cs.CV",
"cs.CR",
"cs.LG"
] |
Exposing the Fake: Effective Diffusion-Generated Images Detection

Ruipeng Ma (UESTC), Jinhao Duan (Drexel University), Fei Kong (UESTC), Xiaoshuang Shi (UESTC), Kaidi Xu (Drexel University)

UESTC: Department of Computer Science and Engineering, University of Electronic Science and Technology of China

Corresponding author: Xiaoshuang Shi ([email protected])
Image synthesis has seen significant advancements with the advent of diffusion-based generative models like Denoising Diffusion Probabilistic Models (DDPM) and text-to-image diffusion models. Despite their efficacy, there is a dearth of research dedicated to detecting diffusion-generated images, which could pose potential security and privacy risks. This paper addresses this gap by proposing a novel detection method called Stepwise Error for Diffusion-generated Image Detection (SeDID). Comprising statistical-based SeDID_Stat and neural network-based SeDID_NNs, SeDID exploits the unique attributes of diffusion models, namely deterministic reverse and deterministic denoising computation errors. Our evaluations demonstrate SeDID's superior performance over existing methods when applied to diffusion models. Thus, our work makes a pivotal contribution to distinguishing diffusion model-generated images, marking a significant step in the domain of artificial intelligence security.
§ INTRODUCTION
Generative diffusion models have made significant strides in the field of image generation, demonstrating remarkable capabilities <cit.>, but have also raised privacy and abuse concerns <cit.>.
Previous works <cit.> have laid the groundwork for detecting diffusion-generated images, and some have successfully leveraged the deterministic reverse and denoising processes inherent to diffusion models.
However, while detection methods such as DIRE <cit.> indeed leverage some deterministic aspects, they may not fully exploit the entirety of these features. In particular, DIRE concentrates on the reconstruction at the initial x_0 timestep, which may overlook the valuable information encapsulated in the intermediate steps throughout the diffusion and reverse diffusion processes. In contrast, the proposed SeDID exploits these intermediate steps, which could potentially enhance the detection efficacy. Additionally, we reveal that the distribution of real images could potentially diverge from the distribution of images generated by diffusion models, given the inherently complex and diverse characteristics of natural images. This indicates that the real-image distribution might not align perfectly with the regular patterns learned by the diffusion process.
Given these observations, we distinctly formulate our research question as follows:
Can we discriminate between real and diffusion-generated images by harnessing the inherent distributional disparities between naturally occurring and diffusion-synthesized visuals?
In our work, we address these issues by delving deeper into the deterministic reverse and denoising properties of diffusion models, proposing a novel and more encompassing detection approach. Our proposed method, the Stepwise Error for Diffusion-generated Image Detection (SeDID), is designed to comprehensively utilize these unique diffusion properties to improve detection performance, thereby presenting a more generalized and robust solution for detecting diffusion-generated images.
Our approach draws inspiration from SecMI <cit.>, a Membership Inference Attack (MIA) that differentiates training data from hold-out data on the assumption that the model overfits its training data. It is intuitive that a model fits the data it generates even better than its training samples. Under this premise, an MIA-style method is well suited to detecting generated images. Our method, which we have dubbed the Stepwise Error for Diffusion-generated Image Detection (SeDID), utilizes the error between the reverse sample and the denoised sample at a specific timestep T_SE.
Our major contributions in this paper can be summarized as:
* We propose SeDID, a novel detection scheme for diffusion-generated images. SeDID uniquely exploits the distinct properties of diffusion models, particularly focusing on the errors between reverse and denoise samples at specific timesteps during the generation process.
* We adapt insights from membership inference attacks to emphasize the distributional disparities between real and generated data. This perspective enhances our understanding of diffusion models' security and privacy implications and underpins the design of SeDID.
* We present an extensive empirical evaluation of SeDID on three distinct datasets. The results demonstrate SeDID's superior performance in detecting diffusion-generated images, surpassing existing methodologies.
The remainder of this paper is organized as follows: Section 2 discusses the related work; Section 3 elaborates on our proposed methodology, SeDID; Section 4 presents the comprehensive evaluation of SeDID and discusses the results; finally, Section 5 concludes the paper and provides directions for future research.
§ RELATED WORKS
§.§ Generative Diffusion Models
Diffusion models, introduced by Sohl-Dickstein et al. <cit.>, offer an approach distinct from Generative Adversarial Networks (GANs) <cit.>. These models gradually convert real data into noise and then learn to reverse this transformation. Ho et al. <cit.> enhanced this process, leading to a better approximation of the real data distribution. Such improvement has significantly influenced our work, particularly our emphasis on the reverse and denoising steps. This field's versatility is demonstrated by Kong et al. <cit.>, who employed diffusion models in audio synthesis, inspiring our method's adaptability.
Diffusion models have been broadly studied with respect to accelerating inference <cit.> and conditional generation <cit.>. Several recent studies have addressed challenges such as improving inference speed and developing innovative methods <cit.>.
§.§ Diffusion-generated Image Detection
Research on image detection originated with a focus on black-box <cit.> and white-box attacks <cit.>, both primarily developed for classification models <cit.>. Black-box attacks assume limited knowledge about the model's internals, whereas white-box attacks presume complete model visibility. The research arena then widened to the detection of synthetic images, particularly those generated by diffusion models <cit.>. This evolution incorporated the examination of forensic traces in diffusion-generated synthetic images and the performance evaluation of GAN-dedicated detectors when applied to these images, even in challenging contexts involving image compression and resizing in social networks.
DIRE <cit.> represents a preliminary exploration in detecting diffusion-generated data, utilizing the reconstruction error of images using Denoising Diffusion Implicit Models (DDIM) <cit.> for inversion and reconstruction. While this investigation is unfolding, other strides have been made within the broader field of diffusion models. Architectural advancements have been achieved with ADM <cit.>, while PNDMs <cit.> have focused on accelerating the sampling speed. Furthermore, Stable Diffusion <cit.> v1 and v2 have delved into exploring downstream tasks.
<cit.> proposed the Step-wise Error Comparing Membership Inference (SecMI) approach for membership inference attack (MIA), leveraging the error comparison of the posterior estimation from the forward process. This concurrent work in the field inspired us in developing our current work, SeDID, which aims to detect diffusion-generated images effectively.
In summary, our SeDID method is a refinement of the concurrent SecMI work, refocused specifically on detecting generated images, and we adapt its technique for computing errors. This approach improves both the Area Under the Curve (AUC) and Accuracy (ACC) in spotting diffusion-generated images, compared to methods relying solely on DDIM for image inversion and reconstruction.
§ METHODOLOGY
In this section, we detail SeDID, our novel method for detecting synthetic images from diffusion models, which builds upon the work of Duan <cit.> and Denoising Diffusion Probabilistic Models (DDPM) <cit.>. We start by defining key notations and outlining the fundamental principles of DDPM.
§.§ Notations
We use standard notations as defined by Ho et al. (2020) <cit.>. We denote the real data distribution as q(x_0) and the latent variable model approximating q(x_0) as p_θ(x_0). The “noise-prediction” model, ϵ_θ, is parameterized by weights θ.
The diffusion model comprises a T-step diffusion process q(x_t|x_t-1) and a denoising process p_θ(x_t-1|x_t) for 1 ≤ t ≤ T:
q(x_t|x_t-1) = 𝒩(x_t; √(1 - β_t)x_t-1, β_t I),
p_θ(x_t-1|x_t) = 𝒩(x_t-1;μ_θ(x_t, t), Σ_θ(x_t, t)),
where x_t refers to the diffusion result at timestep t, β_t is the noise factor at timestep t, I is the identity matrix, μ_θ and Σ_θ are the mean and variance matrix of the denoising distribution respectively. The forward sampling at time step t is:
q(x_t|x_0) = 𝒩(x_t; √(α̅_t)x_0, (1 - α̅_t) I),
where α_t = 1 - β_t and α̅_t=∏_s=1^tα_s.
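As an illustration of the forward sampling relation above, a minimal sketch is given below; this is not the authors' released code, and it assumes a PyTorch tensor x0 and a precomputed tensor alpha_bar holding the cumulative products α̅_t.

```python
import torch

def q_sample(x0: torch.Tensor, t: int, alpha_bar: torch.Tensor) -> torch.Tensor:
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    noise = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * noise
```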
§.§ Definitions
In the context of our work, we primarily focus on the deterministic denoising function ψ_θ and the deterministic reverse function ϕ_θ. We also introduce the notion of the (t,δ-error), quantifying the posterior estimation error at timestep t under stepsize δ, and the Stepwise Error Calculation Time Step, T_SE.
The deterministic denoising function ψ_θ(x, t), following the denoising process from DDIM <cit.>, recovers the original data from the noised input x at timestep t. It is defined as:
ψ_θ(x, t, δ) = √(α̅_t-δ)f_θ(x, t) + √(1 - α̅_t-δ)ϵ_θ(x, t),
where ϵ_θ(x, t) is the predicted noise, α̅_t is the noise scale at timestep t, and ψ_θ(x, t, δ)=x_t-δ. The definition of f_θ is given in (<ref>). We recover the original data by applying the formula recurrently. In general, a stepsize δ > 1 can be used to accelerate the denoising process.
f_θ(x, t) = x - √(1 - α̅_t)ϵ_θ(x, t)/√(α̅_t)
The deterministic reverse function ϕ_θ(x, t), following the reverse process from DDIM <cit.>, is the reversed process of denoising process. Given a sample x_t, we can leverage ϕ_θ(x, t) to obtain x_t+δ:
ϕ_θ(x, t, δ) = √(α̅_t+δ)f_θ(x, t) + √(1 - α̅_t+δ)ϵ_θ(x, t),
where ϕ_θ(x, t, δ)=x_t+δ.
The operations of ϕ_θ, ψ_θ, and f_θ are applied during the diffusion process at specific timesteps determined by the Stepwise Error Calculation Time Step, T_SE.
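The three deterministic maps above translate directly into code. The following is a minimal sketch rather than the authors' implementation; it assumes a noise-prediction network eps_model(x, t), the same alpha_bar tensor as before, and timesteps for which t - δ and t + δ stay inside the valid range.

```python
def predict_x0_and_eps(x, t, eps_model, alpha_bar):
    # f_theta: the clean-sample estimate implied by a noised sample x at timestep t.
    eps = eps_model(x, t)
    x0_pred = (x - (1.0 - alpha_bar[t]).sqrt() * eps) / alpha_bar[t].sqrt()
    return x0_pred, eps

def psi_theta(x, t, delta, eps_model, alpha_bar):
    # Deterministic denoising step: maps a sample at timestep t to timestep t - delta.
    x0_pred, eps = predict_x0_and_eps(x, t, eps_model, alpha_bar)
    return alpha_bar[t - delta].sqrt() * x0_pred + (1.0 - alpha_bar[t - delta]).sqrt() * eps

def phi_theta(x, t, delta, eps_model, alpha_bar):
    # Deterministic reverse step: maps a sample at timestep t to timestep t + delta.
    x0_pred, eps = predict_x0_and_eps(x, t, eps_model, alpha_bar)
    return alpha_bar[t + delta].sqrt() * x0_pred + (1.0 - alpha_bar[t + delta]).sqrt() * eps
```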
For a sample x_0 ∼ D and its deterministic reverse result x̃_t=ϕ_θ(ϕ_θ(…ϕ_θ(x_0, 0, δ), t-2δ, δ), t-δ, δ), we define (t,δ-error) as:
E_t,δ = || ψ_θ ( ϕ_θ(x̃_t, t, δ), t, δ) - x̃_t ||^2,
where ψ_θ is the deterministic denoising function and ϕ_θ is the deterministic reverse function.
In other words, we first use ϕ_θ to obtain x̃_t+δ, and then use ψ_θ to get the reconstruction x̃'_t. E_t,δ is the squared difference between them. Intuitively, the model is more familiar with samples it has generated, so a sample with a smaller E_t,δ is more likely to be a generated one.
We denote by T_SE the target timestep t at which the (t,δ-error) is computed, and call it the Stepwise Error Calculation Timestep. With a slight abuse of notation, δ also denotes the selected stepsize, since this causes no ambiguity.
§.§ Detailed Design of the SeDID Method
The SeDID method applies the concept of (t,δ-error) in combination with noise information during each diffusion process step, under the hypothesis that the process for real images differs from generated images. Experiments are conducted at various time steps T_SE, representing different stages in the diffusion process.
At each experimental time step T_SE, we calculate the (t,δ-error), E_T_SE,δ under stepsize δ:
E_T_SE,δ = || ψ_θ ( ϕ_θ(x̃_T_SE, T_SE, δ), T_SE, δ) - x̃_T_SE ||^2,
where x_T_SE is the intermediate result from the diffusion process at the time step T_SE, and x̃_T_SE is the corresponding image from the denoising process at the same time step.
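Putting the pieces together, the error at T_SE can be sketched as follows. This is an illustrative reading of the definition above, not the released implementation: the sample is pushed deterministically from x_0 up to x̃_T_SE, one further reverse step is taken, and the denoising step is then applied at timestep T_SE + δ so that the reconstruction lands back at T_SE before the squared difference is taken. The helpers phi_theta and psi_theta are the sketches given earlier, and T_SE is assumed to be a multiple of δ.

```python
def t_delta_error(x0, t_se, delta, eps_model, alpha_bar):
    # Deterministic reverse from x_0 up to the target timestep T_SE, in steps of delta.
    x_tilde, t = x0, 0
    while t < t_se:
        x_tilde = phi_theta(x_tilde, t, delta, eps_model, alpha_bar)
        t += delta
    # One more reverse step (T_SE -> T_SE + delta), then denoise back to T_SE.
    x_forward = phi_theta(x_tilde, t_se, delta, eps_model, alpha_bar)
    x_recon = psi_theta(x_forward, t_se + delta, delta, eps_model, alpha_bar)
    # Per-image squared error; smaller values point towards a generated image.
    return ((x_recon - x_tilde) ** 2).flatten(start_dim=1).sum(dim=1)
```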
§.§ The Mechanics of the SeDID Method
SeDID has two variants: Statistical-Based Synthetic Image Detection (SeDID_Stat) and the Neural Networks(NNs)-Based Synthetic Image Detection (SeDID_NNs).
Statistical-Based Synthetic Image Detection – SeDID_Stat
We use the (t,δ-error) as the metric to determine whether a sample is synthetic. If the error is smaller than a threshold h, we classify the sample as synthetic. This can be described by:
g(x) = 1[E_T_SE,δ < h]
The AUC, best ACC, and TPR@FPR are then computed from these error statistics.
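Given per-image errors for a labelled evaluation set, the thresholding rule and the reported metrics can be sketched as below. The snippet is illustrative; the threshold h is swept implicitly via the ROC curve, and TPR@FPR(1%) is read off by interpolation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def sedid_stat_metrics(errors_real, errors_generated):
    # Smaller error -> more likely generated, so the negative error serves as the score.
    scores = -np.concatenate([errors_real, errors_generated])
    labels = np.concatenate([np.zeros(len(errors_real)), np.ones(len(errors_generated))])
    auc = roc_auc_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    n_pos, n_neg = labels.sum(), len(labels) - labels.sum()
    best_acc = np.max((tpr * n_pos + (1.0 - fpr) * n_neg) / (n_pos + n_neg))
    tpr_at_1pct_fpr = np.interp(0.01, fpr, tpr)
    return auc, best_acc, tpr_at_1pct_fpr
```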
Neural Network-Based Synthetic Image Detection – SeDID_NNs
SeDID_NNs extends the capabilities of SeDID_Stat by incorporating a neural network with a ResNet-18 architecture. The network is fed with the intermediate outcomes of both the diffusion and reverse diffusion processes at timestep T_SE. This allows the model to classify images as real or synthetic by using their respective noise profiles.
Our SeDID method leverages the strengths of statistical and machine learning approaches, accurately distinguishing real and synthetic images based on their noise profiles through a dual-phase strategy.
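A minimal sketch of the SeDID_NNs classifier is shown below. The text above does not specify exactly how the two intermediate images are combined, so stacking them along the channel axis is an assumption made purely for illustration; only the ResNet-18 backbone and the two-class output follow the description.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SeDIDNNs(nn.Module):
    """ResNet-18 over the pair of intermediate images obtained at timestep T_SE."""
    def __init__(self):
        super().__init__()
        self.backbone = resnet18(num_classes=2)
        # First convolution adapted to accept 6 channels (two stacked RGB images);
        # the channel stacking itself is an illustrative choice, not a stated detail.
        self.backbone.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2, padding=3, bias=False)

    def forward(self, x_reverse, x_denoised):
        return self.backbone(torch.cat([x_reverse, x_denoised], dim=1))
```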
§ EXPERIMENT
In this section, we evaluate the performance of SeDID across various datasets and configurations.
§.§ Datasets
This study employed three publicly available datasets - CIFAR10 <cit.>, TinyImageNet <cit.>, and CelebA <cit.> - each presenting distinct complexities due to their unique characteristics.
CIFAR10: This dataset contains 60,000 color images, evenly distributed across 10 classes. The set is divided into 50,000 training images and 10,000 validation images.
TinyImageNet: This subset of the ILSVRC-2012 classification dataset features 200 categories, each with 500 training images, 50 validation images, and 50 synthetic images.
CelebA: This large-scale face attributes dataset comprises over 200,000 celebrity images, each annotated with 40 attribute labels. The varied size and complexity of the images in this dataset make it a challenging platform for generative models.
All images across all datasets in this work are preprocessed to a uniform size of 32 × 32 pixels.
§.§ Experimental Settings
Dataset Preparation
All datasets underwent standard preprocessing operations, including normalization with a mean of 0.5 and standard deviation of 0.5 for each color channel, as well as potential random horizontal flipping for data augmentation. The CelebA dataset was further preprocessed with center cropping to 140 pixels as an additional augmentation step.
Baseline
We employ the existing method <cit.> as our baseline. This approach uses the DDPM training loss for membership inference by comparing the generated Gaussian noise to the predicted noise.
Metrics
Performance assessment of our SeDID hinged on three pivotal metrics: Accuracy (ACC), Area Under the ROC Curve (AUC), and True Positive Rate at a fixed False Positive Rate (TPR@FPR). ACC accounts for the fraction of correct predictions, AUC delineates model's class discriminative ability with higher scores signifying better performance, and TPR@FPR is an indicator of model's sensitivity at a given false-positive rate.
Experimental Setup and Implementation Details
In this work, we used diffusion models with the settings from <cit.> and performed a diffusion process of T=1000 steps. The models were trained on the entire datasets with 1,200,000 steps for Tiny ImageNet and CelebA, and 800,000 steps for CIFAR10. We synthesized a balanced dataset of 50,000 diffusion-generated and 50,000 real images, selected from the training split of each dataset.
The SeDID_NNs was trained on 10% of this data, using a ResNet18 as the backbone. The training phase extended for 20 epochs at a learning rate of 0.001. To manage the learning dynamics, we employed Stochastic Gradient Descent (SGD) with a momentum of 0.9 and weight decay of 5e-4. The batch size was set to 128.
The computations were performed on an Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz and an NVIDIA GeForce RTX 3090 GPU, ensuring the reproducibility of our experiments.
§.§ Selection of Optimal T_SE and stepsize δ
In diffusion processes, the selection of optimal Stepwise Error Calculation Time Step, T_SE, and stepsize δ is critical, as they directly affect the (t,δ-error) - the difference between the expected and actual states of the synthetic image at time step t. These parameters are pivotal for optimizing the quality of synthetic image generation.
To determine the optimal T_SE and δ, we experimented extensively on CelebA, Tiny-ImageNet, and CIFAR10 datasets. Our aim, by exploring various combinations, was to achieve the best possible AUC and ACC scores.
Figure <ref> depicts the performance of varying δ values across the datasets with best T_SE. The optimal performance is achieved at δ=165 for all datasets, thereby validating our choice.
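The selection procedure amounts to a plain grid search over (T_SE, δ). The sketch below is illustrative: compute_errors is a hypothetical callable returning the per-image errors of the real and generated sets for a given pair, and sedid_stat_metrics is the earlier sketch.

```python
import itertools

def select_t_se_and_delta(compute_errors, t_se_grid, delta_grid):
    # Evaluate every (T_SE, delta) combination and keep the one with the best AUC.
    best = {"auc": float("-inf")}
    for t_se, delta in itertools.product(t_se_grid, delta_grid):
        errors_real, errors_generated = compute_errors(t_se, delta)
        auc, best_acc, _ = sedid_stat_metrics(errors_real, errors_generated)
        if auc > best["auc"]:
            best = {"t_se": t_se, "delta": delta, "auc": auc, "acc": best_acc}
    return best
```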
§.§ Experimental Results
This subsection provides a comprehensive discussion of our experimental results on three diverse datasets: CIFAR10, TinyImageNet, and CelebA. A visual representation of SeDID's performance across various Stepwise Error Calculation Time Steps, T_SE, for the SeDID_Stat strategy on these datasets is depicted in Figure <ref>.
As detailed in Table <ref>, SeDID consistently outperforms the existing method in detecting diffusion-generated images, as demonstrated by its superior AUC and ACC values across all tested datasets. The results also show that T_SE has a significant impact on performance. This observation underscores the necessity of selecting an optimal diffusion timestep to maximize detection performance, and highlights the potency of SeDID in effectively distinguishing diffusion-generated images.
Additionally, Figure <ref> illustrates the performance of our proposed SeDID method over a range of timesteps T_SE. The results show that, as T_SE increases, the key metrics TPR@FPR(1%) and TPR@FPR(0.1%) improve distinctly on the CIFAR10 and Tiny-ImageNet datasets. In contrast, these metrics decrease on the CelebA dataset as T_SE grows. This behaviour underscores the adaptability of the SeDID method, which can tune T_SE to the characteristics of each dataset, and suggests that the optimal T_SE is dataset-dependent; a careful selection of T_SE can therefore significantly enhance synthetic image detection performance.
§.§ Comparison to Baselines
Given the limited research specifically dedicated to detecting images generated by diffusion models, we employ the existing method <cit.> as our benchmark. The AUC and ACC obtained by evaluating our SeDID approach on the three datasets are presented in Table 2. Both SeDID_Stat and SeDID_NNs outperform the existing method, particularly in terms of ACC, with an average increase of over 30% for both SeDID variants. This underscores the effectiveness of SeDID, particularly when a neural network is employed in the inference strategy, as indicated by the notably high performance of SeDID_NNs. Furthermore, SeDID_NNs outperforms SeDID_Stat by over 7%, demonstrating that training a neural network can significantly improve performance in terms of AUC and ACC.
§ CONCLUSION AND FUTURE WORK
In this paper, we have presented SeDID, a novel method for detecting diffusion-generated images. SeDID leverages unique attributes of diffusion models, specifically deterministic reverse and deterministic denoising errors, providing a powerful tool for image detection. Extensive experiments on three different datasets demonstrate SeDID's superior performance compared to existing approaches. This work contributes to the field of diffusion-generated image detection, laying the groundwork for future research. Looking forward, we intend to:
* Extend our approach to encompass other types of diffusion-based generation models, broadening its applicability.
* Investigate an automated mechanism for optimal timestep and T_SE selection, aiming to enhance the precision and effectiveness of detection.
* Explore the potential of a multi-step error computation approach in enhancing detection accuracy, leveraging our defined t-error concept.
* We plan to further investigate the influences on SeDID's performance across various datasets, specifically examining unusual trends in datasets like CelebA.
Through these explorations, we aspire to develop more secure and reliable systems capable of effectively identifying and neutralizing potential threats from the misuse of generative diffusion models.
§ APPENDIX
§ GENERALIZATION TO LATENT DIFFUSION MODEL (LDM)
For the Latent Diffusion Model (LDM), the calculation of t-error is similar to DDPM, except the intermediate latent variables v_t are in the latent space and the reverse process is conditioned by text embeddings.
Specifically, we denote by V the Variational Autoencoders (VAEs) utilized to encode the original images into the latent space, i.e., v_0 = V(x_0), x_0 ∼ D, and denote by C the text condition.
The diffusion process and the denoising process can be derived as:
q(v_t|v_t-1) = 𝒩(v_t; √(1 - β_t)v_t-1, β_t I)
p_θ(v_t-1|v_t) = 𝒩(v_t-1;μ_θ(v_t, t, C), Σ_θ(v_t, t)).
Then, the t-error can be rewritten as:
ℓ̃_t, v_0 = || ψ_θ ( ϕ_θ(ṽ_t, t, C), t, C) - ṽ_t ||^2,
where we reuse the symbols ϕ_θ and ψ_θ as the deterministic reverse and sampling regarding v_t:
v_t+1 = ϕ_θ(v_t, t, C)
= √(α̅_t+1)f_θ(v_t, t, C) + √(1 - α̅_t+1)ϵ_θ(v_t, t, C),
v_t-1 = ψ_θ(v_t, t, C)
= √(α̅_t-1)f_θ(v_t, t, C) + √(1 - α̅_t-1)ϵ_θ(v_t, t, C),
where
f_θ(v_t, t, C) = v_t - √(1 - α̅_t)ϵ_θ(v_t, t, C)/√(α̅_t).
§ ADOPTED DIFFUSION MODEL AND DATASETS
In our study, we employ CIFAR10, Tiny-ImageNet and CelebA with DDPM. The detailed settings are given in Table <ref>.
§ ASSESSING THE EFFECTIVENESS OF DEFENSIVE TRAINING IN DIFFUSION MODELS
We explore defensive training strategies for diffusion models and assess their effectiveness with the SeDID method. Specifically, we investigate aggressive regularization and data augmentation. Figures <ref>, <ref>, and <ref> showcase samples from the resulting models.
Despite the defensive training efforts, the generated images are disappointingly vague and unrealistic, a consequence of the aggressive regularization and data augmentation. Moreover, the quality of reconstructed images from both the member and hold-out sets is significantly compromised, posing substantial challenges to the effectiveness of Membership Inference Attacks (MIA). The results underscore a fundamental trade-off between model performance and privacy protection in the context of diffusion models.
|
http://arxiv.org/abs/2307.04038v1 | 20230708195154 | Prediction of short stellar activity cycles using derived and established empirical relations between activity and rotation periods | [
"A. k. Althukair",
"D. Tsiklauri"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Department of Physics and Astronomy, School of Physical and Chemical Sciences, Queen Mary University of London,
Mile End Road, London, E1 4NS,
UK; [email protected], [email protected]
Physics Department, College of Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, PO Box 84428, Saudi Arabia
In our previous work, we searched for super-flares on different types of stars, focusing on G-type dwarfs using entire Kepler data to study statistical properties of the occurrence rate of super-flares. The said study also considered how the statistics change with stellar rotation period, which in turn, had to be determined. Using such new data, as a by-product, we found 138 Kepler IDs of F and G types main sequence stars with rotation periods less than a day (P_ rot<1 d). On one hand, previous studies have revealed short activity cycles in F-type and G-type stars and the question investigated was whether or not short-term activity cycles are a common phenomenon in these stars. On the other hand, extensive studies exist which establish empirical connection between a star's activity cycle and rotation periods. In this study, we compile all available Kepler data with P_ rot<1 d and derive, as well as use plausible, established empirical relations between P_ cyc and P_ rot with the aim to provide predictions for very short 5.13≤ P_ cyc≤ 38.14 d cases in a tabular form. As a result, we invite others to measure P_ cyc using monitoring program of stellar activity (e.g. activity-related chromospheric emission S-index) or similar means for the Kepler IDs found in this study in order put to test the derived and/or established empirical relations between P_ cyc and P_ rot. We also propose an alternative method for measuring very short P_ cyc, using flare-detection algorithms applied to future space mission data.
Prediction of short stellar activity cycles using derived and established empirical relations between activity and rotation periods.
A. K. Althukair^1,2 and D. Tsiklauri^1
§ INTRODUCTION
The 11-year cycle of solar activity, discovered by Schwabe in 1844 <cit.>, is a significant phenomenon in solar and stellar physics. The cycle is manifested by a periodic change in solar activity, including the appearance of sunspots and changes in the Sun's magnetic field
on this time-scale. Smoothed sunspot numbers have been widely used as a proxy for solar activity over the past four centuries <cit.>.
The idea of the sunspot number was first introduced by <cit.> in the mid-19th century, and it has since become a standard measure for quantifying solar activity. These numbers reveal that there are almost regular cycles of about 11 years, reflecting the Sun's magnetic activity.
During the course of a solar cycle, the Sun experiences alternating periods of strong and weak activity known as solar maximum and minimum <cit.>. As the solar cycle progresses, the magnetic field becomes more complex and twisted. This results in the emergence of sunspots, which are dark areas on the surface of the Sun with intense magnetic fields, vary in size and can last from days to several months <cit.>, decaying into bright areas called faculae formed by smaller magnetic concentrations <cit.>. During the active phase of the solar cycle (solar maximum), the number and size of sunspots increase and appear at the solar surface. At the same time, bright faculae also become more prominent. As the cycle progresses, the number of sunspots decreases, the overall brightness of the Sun remains relatively constant and the Sun enters its least active phase of the solar cycle (solar minimum). These dark and bright features on the Sun's surface contribute to the variability in the total solar irradiance (TSI) <cit.>. Therefore, the TSI data can capture the combined effects of the evolving dark and bright features during the solar cycle <cit.>.
Cyclic activity has been observed in stars other than the Sun through long-term brightness changes associated with increased occurrence of active regions on their surfaces or in their lower stellar atmospheres <cit.>. The Mount Wilson HK program, which started in 1966 and lasted until the end of the 20th century, was the first to conduct a systematic search for activity cycles in main sequence stars <cit.> by analysing
chromospheric emission in the spectral lines of Ca II H&K, as the magnetic field connected to active regions on the surfaces of stars plays an important role in transporting energy into the chromosphere. This increased energy input into the chromosphere leads to enhanced chromospheric emission, which can be observed prominently in the cores of the Ca II H&K spectral lines <cit.>. The measure of the chromospheric emission strength is described by the Mount Wilson S-index <cit.> or by the quantity R^'_ HK <cit.>. <cit.> investigated the chromospheric activity levels in main-sequence F-G-K-M stars by measuring the chromospheric CaII H&K emission fluxes. They noted that these stars display varying degrees of chromospheric activity and observed a noticeable lack in the number of F-G stars displaying intermediate activity compared to both highly active and less active stars. They suggested that the absence of such stars could be attributed to a decline in chromospheric activity as the stars age. <cit.> examined the relationship between chromospheric activity, specifically the R^'_ HK activity index, and the Rossby number Ro = P_ rot/τ_ c for a sample of main-sequence stars of spectral type F or later. Where P_ rot is the rotational period of the star and τ_ c is a theoretically derived convective turnover time. They found a strong correlation between the R^'_ HK activity index and the Rossby number. However, in contrast to the findings of <cit.>, <cit.> did not find any signs of the "Vaughan-Preston gap". <cit.> investigated the empirical relation between rotation period P_ rot, spectral type, and activity cycle period P_ cyc for 13 slowly rotating main-sequence stars. They found that the cycle period is related to the rotation period by a power law: P_ cyc∝ P_ rot^ 1.25. This relationship can alternatively be expressed as
P_ cyc≈ Ro^1.25≈ (P_ rot/τ_ c)^1.25 <cit.>. For stars of spectral type G0-K5, <cit.> observed a pattern of variation in the rotation period and the measure of chromospheric activity (S-index). Their research revealed that the chromospheric activity levels were high in young stars with fast rotation periods. Chromospheric activity and rotation rates of stars in the intermediate age range were average. Alternatively, the chromospheric activity levels were low in old stars with slow rotation periods. This observation supports the existence of the Vaughan-Preston gap <cit.>, indicating that chromospheric activity and rotation change over time as the stars age. The relation between rotation periods and activity cycles of a sample of stars was investigated by <cit.>, who discovered a correlation between the two variables. In particular, they observed that stars with slower rotation periods exhibit longer activity cycles, while stars with faster rotation periods tend to have shorter activity cycles. According to <cit.>, the relation between rotation periods and cycle lengths is more evident for stars with shorter activity cycles. However, the association becomes less clear for longer cycle lengths when considering more recent findings on the time variability of solar cycles.
<cit.> investigated the behaviour and activity cycles of four fast-rotating late-type stars with (P_ rot≤ 0.5 days), highlighting the presence of 1-year cycles and the correlation between rotation rate and cycle length. <cit.> used the short-term Fourier transform, a time-frequency analysis method, to examine the light curves of 39 fast-rotating late-type active stars with rotation periods of less than one day. Nine of the selected stars showed indications of activity cycles with periods between 300 and 900 days. These cycles were inferred from the changing typical latitude of the starspots on the stellar surface and due to the differential rotation of the stellar surface, the observed rotation period of the stars varied over the activity cycle. This variation in the rotation period was attributed to the movement and evolution of starspots at different latitudes of the star. <cit.> used four years of Kepler data to determine the cyclic variations in the amplitude of the light curve and the rotation period of stars by analysing a sample of active stars and calculating the rotation period and variability amplitude for each star in each Kepler quarter. Then they searched for periodic variations in these time series using Lomb-Scargle periodograms and employed a false alarm probability (FAP) criterion for selection. The study's findings indicate that amplitude periodicities, associated with underlying activity cycles, are detected in 3203 stars with cycle periods ranging from 0.5 to 6 years and rotation periods ranging from 1 to 40 days. According to <cit.> analysis of new observations and previous data, the longer and shorter cycle periods closely match expectations based on the average activity levels and rotation periods, which indicates a connection between stellar activity and stellar rotation. <cit.> reported an activity cycle of 11.6 years in the F-type star τ Boo (HD 120136). However, the authors assigned a FAP "poor" grade to this finding. <cit.> detected an activity cycle with a duration of 122 days in their analysis of the S-index data of τ Boo. This short activity cycle periods suggest that τ Boo may exhibit variations on a relatively short timescale. <cit.> focused on exploring the presence of short-term activity cycles in F-type stars, specifically using S-index time series data obtained with the TIGRE telescope. They utilized the generalized Lomb-Scargle periodogram method to analyze the data and search for periodic variations with a maximum length of 2 years. Their sample of F-type stars identified four stars that exhibited cyclic variations with periods of less than a year. However, compared to solar-type stars with well-developed cyclic activity, the amplitude of these short-term cyclic variations in F-type stars was smaller. Based on their findings, <cit.> concluded that the activity behaviour among F-type stars differs from that of the Sun and cooler main sequence stars. By studying 44 main-sequence stars with confirmed activity cycles, and rotation periods, <cit.> examined the relation between the length of the activity cycle and the Rossby number (Ro). They used empirical turnover periods based on the B-V colour index to calculate Rossby numbers, from which they deduced an empirical relationship between the Rossby number and the cycle duration. The study showed linear behaviour in the double-logarithmic relationship between the Rossby number and cycle period. 
In addition, the relative convection zone depth was found to be correlated with cycle length and convective turnover time.
In paper I <cit.>, we looked for super-flares on different types of stars and focused on G-type dwarfs using entire Kepler data to study
various aspects of statistical properties of the occurrence rate of super-flares.
In paper II <cit.>, as a by-product, we found thirteen peculiar Kepler IDs that are Sun-like, slowly rotating with rotation periods of 24.5 to 44
days, and yet can produce a super-flare and six G-type and four M-type Kepler IDs with exceptionally large amplitude super-flares. As noted previously,
these detections defy our current understanding of stars and hence deserve a further investigation.
In this paper III, the last in this series, we use an empirical connection between a star's activity cycle and rotation periods for a sample of F and G main sequence stars with rotation periods of less than one day.
Here our aim is to provide predictions for very short activity cycle cases in a tabular form and to investigate in the future whether these short activity cycles are a common phenomenon in these stars or not. Section <ref> provides the target selection method. Section <ref> presents the method used in this work which includes the empirical connection relation between P_ cyc and P_ rot. The main findings of the study are presented in Section <ref>, and section <ref> concludes this work with our main conclusions.
§ RELATION BETWEEN ACTIVITY CYCLE AND ROTATION PERIOD
<cit.> model of the α–Ω dynamo introduced the concept of migratory dynamo waves, which play a crucial role in generating the observed solar cycle <cit.>. The α–effect, arising from the twisting of rising magnetic field tubes due to Coriolis forces, creates the poloidal magnetic field required for the next sunspot cycle. This effect is responsible for the reversal of magnetic polarities between successive cycles <cit.>. On the other hand, the Ω–effect, resulting from the differential rotation of the star, generates a toroidal magnetic field by stretching the magnetic field lines in a longitudinal direction. The combination of the α–effect and the Ω–effect leads to the formation of migratory dynamo waves, where the toroidal field is periodically regenerated and transformed into the poloidal field through the action of the α–effect. These migratory dynamo waves propagate and interact within the star's convective zone, causing the cyclic variations in the magnetic field <cit.>.
According to <cit.>,
the magnetic cycle period for G and K dwarfs with convective turnover times (τ_ c) between 11 and 26 days, is found to be proportional to the rotation period as follows:
1/P_ cyc∝(τ_ c / P_ rot)^n,
where n is 1.25.
We quote theoretical prediction of the relation between
star's activity cycle and its rotation periods, which is
equation (6) in <cit.>:
P_ mag_cyc=2 P_ cyc≈√(R_⋆/l) P_ rot.
According to the simple theoretical arguments quoted by <cit.>,
the magnetic cycle period P_ mag_cyc is proportional to the rotation period P_ rot. However, there is a modifying factor, l/R_⋆ the relative depth of turbulence, which depends on the stellar structure, which itself may depend on the effective temperature or B-V colour index of the star. Also l here is the length scale of turbulence and R_⋆ is the stellar radius.
§ METHODS
In our study, we adopt the terminology used by <cit.> to categorize branches into two types: the "inactive" branch, referred to as the short-cycle branch P_ cyc^S and the "active" branch, referred to as the long-cycle branch P_ cyc^L. These terms were introduced first time in <cit.>. According to <cit.> this notation is more accurate and aligned with the actual characteristics of the branches. Therefore, they suggested that these terms should be used in future studies to refer to the two branches.
§.§ Reproduction of <cit.> P_ cyc^S vs. P_ rot Fit
In this subsection, we reproduced the fit between P_ cyc^S and P_ rot data from <cit.> to derive the fit parameters. First, we collected the data in Table<ref>, the first 32 rows, from <cit.>, where we obtained the 32 activity cycles on the short-cycle branch P_ cyc^S calculated by <cit.> along with the 32 corresponding rotation periods P_ rot. These cycle lengths and rotation periods can be found in Table 1. Then we plotted in logarithmic scale the rotation periods on the x-axis versus the calculated cycle period on the y-axis as shown in Figure <ref>, using the empirical relation in <cit.> between the cycle periods and rotation periods in logarithmic terms that is given by:
log P_ cyc≈ a+n log P_ rot.
Since the theoretical relation, equation <ref>
implies a linear connection between P_ cyc and P_ rot, we fitted the data using Python least-square fit, a common technique for determining the best-fitting parameters for a given model, for two different slope adjustments as in <cit.>. Also, we computed the R^2 coefficient of determination to measure how well the model fits the data. A R^2 value of 1 means that the predictions from the regression fit the data perfectly. First, we set the slope n to be 1 and deduced the value of a parameter as a = 1.923 ± 0.025 and the value of R^2= 0.89. The red line in Figure <ref> illustrates this trend. Then we repeated the fit by treating slope n as an independent variable to derive a and n values as equation now <ref> becomes:
log P_ cyc≈ (1.458 ± 0.074)+(1.348 ± 0.054) log P_ rot.
and the value of R^2= 0.95. The blue line in Figure <ref> represents this fit. It is obvious that the n = 1 relation does not fit the short periods data, as <cit.> pointed out.
By comparing the value of a and n parameters here with <cit.>, we find slight differences between these values. As in <cit.> a = 1.918 ± 0.027 for the fit of n=1 , while for the fit where n is treated as a free parameter, a= 1.488 ± 0.092 and n= 1.324 ± 0.067. We noticed two additional points in Figure 1 of <cit.>, which belong to stars HD 100563 and HD 201092. These stars have rotation periods of 7.73 ± 0.04 and 37.8 ± 7.4, respectively, corresponding to cycle lengths of 0.609 ± 0.009 and 11.7 ± 0.4, respectively. Their P_ cyc were taken from <cit.> and <cit.>, respectively, and have not been calculated by <cit.>. We do not have these two points because our plot include only data computed by <cit.>. We also noticed that the locations of some points in our plot differ from those in <cit.> plot, despite using the same data set. We believe these reasons led to the slight difference in the fit parameters between this work and <cit.>.
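The least-squares fits described above can be reproduced with a few lines of Python. The following is a sketch rather than the exact script used here; it assumes both periods are expressed in days (the cycle periods of Table 1, quoted in years, are first multiplied by 365.25), which appears consistent with the quoted coefficients.

```python
import numpy as np

def fit_loglog(p_rot_days, p_cyc_days, fixed_n=None):
    """Least-squares fit of log10(P_cyc) = a + n * log10(P_rot)."""
    x = np.log10(np.asarray(p_rot_days, dtype=float))
    y = np.log10(np.asarray(p_cyc_days, dtype=float))
    if fixed_n is None:
        n, a = np.polyfit(x, y, 1)                 # slope n treated as a free parameter
    else:
        n, a = fixed_n, np.mean(y - fixed_n * x)   # e.g. fixed_n = 1
    resid = y - (a + n * x)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    return a, n, r2
```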
§.§ Data representation and fit
In this subsection, we repeat the fit between P_ rot and P_ cyc^S using a larger data sample taken from other previous studies. This sample, shown in Table<ref>, contains 94 P_ rot and their 94 corresponding P_ cyc^S. The star ID, spectral type (Sp), color index (B-V), effective temperature (T_ eff), P_ rot and P_ cyc are shown in Table<ref>. Unavailable data is left blank in the table. 32 P_ cyc^S were calculated by <cit.>, the first 32 lines in Table<ref>. The other P_ cyc^S were taken from <cit.>. It should be noted that the 32 stars IDs for which their P_ cyc^S were calculated by <cit.> were used again in the fit but with the P_ cyc^S calculated by others. For illustration, we used two P_ cyc^S values for 32 stars IDs, one was calculated by <cit.> and the other was calculated by another work, except for KIC 10644253, for which we collected three P_ cyc^S calculated by <cit.>. Also, HD 16673 has multiple entries due to the multiple sources, as shown in Table <ref>. References for each P_ rot and P_ cyc^S are shown in Table <ref>.
In the same way as in subsection <ref>, we used the empirical relation between P_ rot and P_ cyc in logarithmic scale given by equation <ref> using the new data set in Table<ref> to produce the fit parameters a and n. We performed a least-square fit in Python to fit the data using two different slope adjustments again, one with a fixed slope n of 1 and another with the n treated as a free variable. This fit is shown in Figure <ref>. For the fit with a fixed slope of 1, we determined the value for the parameter a= 1.889 ± 0.023 and R^2= 0.83. This trend is shown by the red line in Figure <ref>. While for the fit with the slope n treated as a free variable, we deduced values for the parameters a and n as a=1.583 ± 0.064, n=1.257 ± 0.051 and R^2= 0.87. This fit is represented by the blue line in Figure <ref>. So that equation<ref> becomes now
log P_ cyc≈ (1.583 ± 0.064)+(1.257 ± 0.051) log P_ rot.
We note that our value of n=1.257 ± 0.051 with the extended dataset is
closer to <cit.>'s n=1.25 than <cit.>'s n= 1.324 ± 0.067.
Table 1. List of star IDs with their parameters, used in previous studies.
Columns: HD/KIC | T_ eff [K] | B-V | τ_ c [d] | P_ rot [d] | Ref. | P_ cyc^S [yr] | Ref.
Sun 5777 0.642 33.94 25.4±1 1 10.3 15
HD 3651 5211 0.850 61.18 44 1 11.7 15
HD 4628 5120 0.890 65.19 38.5±2.1 1 9.9 15
HD 10476 5244 0.836 59.83 35.2±1.6 1 9.2 15
HD 10780 5321 0.804 56.87 22.14±0.55 2 5.6 15
HD 16160 5060 0.918 68.16 48±4.7 1 12.4 15
HD 16673 6183 0.524 18.02 5.7 3 0.9 15
HD 17051 6045 0.561 21.98 8.5±0.1 1 1.4 15
HD 22049 5140 0.881 64.27 11.1±0.1 1 2.6 15
HD 26965 5282 0.820 58.33 43 1 11.5 15
HD 30495 5804 0.632 32.16 11.4±0.2 1 1.6 15
HD 32147 4801 1.049 83.93 48 1 11.7 15
HD 43587 5876 0.610 28.58 22.6±1.9 4 10.4 15
HD 75332 6089 0.549 20.60 4.8 5 0.5 15
HD 75732 5167 0.869 63.05 37.4±0.5 6 9.7 15
HD 76151 5714 0.661 37.58 15 1 2.4 15
HD 100180 6013 0.570 23.06 14 1 3.4 15
HD 103095 5449 0.754 52.52 31 1 9.6 15
HD 120136 6245 0.508 16.54 3.05±0.01 7 0.3 15
HD 128621 5098 0.900 66.24 36.2±1.4 1 9.2 15
HD 140538 5645 0.684 42.51 20.71±0.32 8 4.5 15
HD 146233 5741 0.652 35.81 22.7±0.5 1 7.2 15
HD 149661 5265 0.827 58.98 21.1±1.4 1 5.3 15
HD 160346 4975 0.959 72.75 36.4±1.2 1 9 15
HD 165341 A 5188 0.860 62.16 19.9 1 4.9 15
HD 166620 5151 0.876 63.76 42.4±3.7 1 11.1 15
HD 185144 5366 0.786 55.26 27.7±0.77 2 7.3 15
HD 190406 5910 0.600 27.09 13.9±1.5 1 2.6 15
HD 201091 4764 1.069 86.64 35.4±9.2 1 8.3 15
HD 219834 B 5055 0.920 68.38 43 1 11 15
KIC 8006161 5234 0.840 60.21 29.8±3.1 1 7.7 15
KIC 10644253 5943 0.590 25.67 10.9±0.9 1 1.8 15
HD 16673 6183 0.524 18.02 7.4±0.07 5 0.85 5
HD 49933 3.45 5 0.58 5
HD 75332 6089 0.549 20.60 4.8 5 0.49 5
HD 100563 7.73 5 0.61 5
τ Boo 0.480 14.23 3.5 5 0.33 5
Kepler 87 12.59±0.03 9 3.5 16
KIC 10644253 6030 0.590 25.67 10.91±0.87 10 1.5 17
solar analog HD 30495 5826 0.632 32.16 11.36±0.17 11 1.67±0.35 11
solar analog HD 45184 5871 0.620 30.16 19.98±0.02 12 5.14 12
61 Cyg A HD 201091 4545 1.069 86.64 35.7±1.9 13 7.2±1.3 13
102712791 0.277 4.79 0.96±0.03 14 0.09±0.008 14
102720703 0.514 17.08 10.2±0.6 14 0.512±0.055 14
102721955 0.431 10.94 2.17±0.06 14 1.118±0.071 14
102723038 1.404 147.52 8.6±0.5 14 1.682±0.151 14
102726103 0.767 53.62 3.7±0.1 14 0.321±0.022 14
102738457 0.592 25.95 12.9±0.6 14 1.781±0.356 14
102749950 0.657 36.78 5.4±0.2 14 0.655±0.06 14
102750723 1.143 97.45 1.44±0.02 14 0.277±0.022 14
102754736 0.480 14.23 6.9±0.3 14 0.29±0.019 14
102758108 0.641 33.75 6.1±0.2 14 0.301±0.022 14
102770332 2.055 415.00 4.2±0.1 14 1.162±0.112 14
102770893 0.874 63.56 4.3±0.2 14 0.759±0.058 14
102777006 1.177 102.86 1.33±0.02 14 1.17±0.123 14
102778595 1.157 99.64 11.8±0.7 14 0.575±0.019 14
102780281 1.304 125.85 3±0.1 14 0.551±0.041 14
Sun 5778 0.660 37.38 25.4±1 1 11±2 1
HD 3651 5128 0.840 60.21 44 1 13.8±0.4 1
HD 4628 5035 0.890 65.19 38.5±2.1 1 8.6±0.1 1
HD 10476 5188 0.840 60.21 35.2±1.6 1 9.6±0.1 1
HD 16160 4819 0.980 75.21 48±4.7 1 13.2±0.2 1
HD 17051 6053 0.570 23.06 8.5±0.1 1 1.6 1
HD 22049 5152 0.880 64.17 11.1±0.1 1 2.9±0.1 1
HD 26965 5284 0.820 58.33 43 1 10.1±0.1 1
HD 30495 5780 0.630 31.82 11.4±0.2 1 1.7±0.3 1
HD 32147 4745 1.060 85.41 48 1 11.1±0.2 1
HD 76151 5675 0.670 39.44 15 1 2.5±0.1 1
HD 78366 5915 0.630 31.82 9.7±0.6 1 5.9±0.1 1
HD 81809 5623 0.800 56.51 40.2±3 1 8.2±0.1 1
HD 100180 5942 0.570 23.06 14 1 3.6±0.1 1
HD 103095 5035 0.750 52.19 31 1 7.3±0.1 1
HD 114710 5970 0.580 24.33 12.3±1.1 1 9.6±0.3 1
HD 128620 5809 0.710 48.98 22.5±5.9 1 19.2±0.7 1
HD 128621 5230 0.880 64.17 36.2±1.4 1 8.1±0.2 1
HD 146233 5767 0.650 35.42 22.7±0.5 1 7.1 1
HD 149661 5199 0.800 56.51 21.1±1.4 1 4±0.1 1
HD 160346 4797 0.960 72.86 36.4±1.2 1 7±0.1 1
HD 166620 5000 0.900 66.24 42.4±3.7 1 15.8±0.3 1
HD 190406 5847 0.610 28.58 13.9±1.5 1 2.6±0.1 1
HD 201091 4400 1.180 103.35 35.4±9.2 1 7.3±0.1 1
HD 201092 4040 1.370 139.77 37.8±7.4 1 11.7±0.4 1
KIC 8006161 5488 0.840 60.21 29.8±3.1 1 7.4±1.2 1
KIC 10644253 6045 0.590 25.67 10.9±0.9 1 1.5±0.1 1
HD 165341 A 5023 0.780 54.74 19.9 1 5.1±0.1 1
HD 219834 A 5461 0.800 56.51 42 1 21±1 1
HD 219834 B 5136 0.910 67.30 43 1 10±0.2 1
HD 10780 5321 0.804 56.87 22.14±0.55 2 7.53±0.16 2
HD 16673 6183 0.524 18.02 5.7 3 0.847±0.006 5
HD 43587 5876 0.610 28.58 22.6±1.9 4 10.44±3.03 4
HD 75732 5167 0.869 63.05 37.4±0.5 6 10.9 18
HD 185144 5366 0.786 55.26 27.7±0.77 2 6.66±0.05 2
HD 120136 6245 0.508 16.54 3.05±0.01 7 0.333±0.002 7
HD 140538 5645 0.684 42.51 20.71±0.32 8 3.88±0.02 8
Notes: The table lists the star IDs with their corresponding B-V values, effective temperature T_ eff, the convective turnover time τ_ c calculated from the relation in <cit.>, the rotation period P_ rot with its reference number, and the short-branch cycle period P_ cyc^S with its reference number.
References: (1) <cit.>, (2) <cit.>, (3) <cit.>, (4) <cit.>, (5) <cit.>, (6) <cit.>, (7) <cit.>, (8) <cit.>, (9) <cit.>,
(10) <cit.>, (11) <cit.>, (12) <cit.>, (13) <cit.>, (14) <cit.>, (15) <cit.>, (16) <cit.>, (17) <cit.>, (18) <cit.>.
§.§ Data Samples
One of the main challenges in studying the relation between cycle length and rotation period is the small number of well-known and accurately measured activity cycles. This limitation introduces uncertainties in the derived empirical relations <cit.>. To overcome these challenges, it is crucial to obtain more reliable cycle periods, particularly for long-period cycles. Achieving this requires long-term time series observations of stars to gather comprehensive and accurate data on their activity cycles <cit.>. Therefore, when looking for activity cycles, it is more efficient to monitor fast-rotating objects, as cycles can be discovered within a few years of observation, as opposed to stars with longer rotation periods <cit.>. For this reason, we chose our sample for this study to include fast-rotating main-sequence stars of type F and G from Kepler data with well-known rotation periods of less than one day. First, we collected all Kepler IDs with well-known rotation periods. We then selected targets with rotation periods of less than a day. Using Gaia Data Release 2 (Gaia-DR2), we identified F- and G-type main-sequence stars by their effective temperatures and radii based on the Harvard spectral classification. The ranges of effective temperature are 6000-7500 K and 5200-6000 K for F and G types, respectively. We thus obtained a total of 811 Kepler IDs of F- and G-type stars with rotation periods of less than one day. After applying the main-sequence radius restriction of 1.15-1.4 R_⊙ and 0.96-1.15 R_⊙ for F and G types, respectively, the final data sample was reduced to 138 Kepler targets, comprising 83 F-type and 55 G-type main-sequence stars.
§ RESULTS
Using a data set of 138 Kepler IDs with P_ rot ranging from 0.202 d to 0.997 d, we provide a
prediction for the corresponding value of their P_ cyc^S, by applying the empirical relation between P_ cyc and P_ rot with the derived parameters in Equation <ref>. Hence we
obtained the predicted values of P_ cyc from
P_ cyc≈ 10^[(1.583 ± 0.064)+(1.257 ± 0.051) log P_ rot].
From equation <ref>, we calculated 138 P_ cyc for 83 F-type and 55 G-type main-sequence stars whose rotation period is less than a day. The shortest P_ cyc is equal to 5.13 d while the longest P_ cyc is equal to 38.14 d. All the 138 predicted P_ cyc are listed in Table <ref>
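For reference, the predictions in Table 2 follow directly from the relation above with the derived parameters; a minimal sketch (all periods in days) is given below, and the two extreme rotation periods of the sample reproduce the quoted range.

```python
import numpy as np

def predict_p_cyc_days(p_rot_days, a=1.583, n=1.257):
    # Predicted short-branch cycle period from log10(P_cyc) = a + n * log10(P_rot).
    return 10.0 ** (a + n * np.log10(np.asarray(p_rot_days, dtype=float)))

print(predict_p_cyc_days([0.202, 0.997]))   # approximately [5.13, 38.14] days
```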
Table 2. List of the 138 Kepler IDs with their parameters and predicted P_ cyc.
Columns (two stars per row): KIC | T_ eff [K] | R [R_⊙] | P_ rot [d] | Ref. | P_ cyc [d]
757099 5521 1.05 0.36 1 10.60 6877871 6508 1.40 0.54 2 17.73
1028018 5544 1.14 0.62 2 21.03 6948098 6095 1.29 0.57 3 18.76
1721795 6534 1.31 0.89 2 32.93 6961285 5802 0.98 0.45 2 13.99
1872192 5316 0.98 0.67 2 23.31 6962901 5601 0.97 0.98 2 37.37
2557335 5568 1.01 0.24 2 6.20 7199002 6381 1.24 0.57 2 18.89
2558273 6673 1.35 0.99 2 37.85 7199013 5286 0.96 0.57 2 18.89
2715228 6374 1.30 0.99 1 37.80 7199037 6024 1.36 0.57 2 18.89
2715410 5997 1.11 0.90 1 33.53 7354297 5481 1.05 0.95 2 35.99
2849645 5424 1.06 1.00 2 38.14 7461022 6168 1.28 0.59 2 19.76
2985825 6783 1.23 0.94 3 35.18 7678509 6644 1.22 0.96 2 36.51
3124412 6302 1.21 0.93 1 34.94 7707736 5644 1.09 0.76 2 27.11
3241517 6283 1.34 0.78 3 28.19 7816211 6050 1.32 0.29 2 8.08
3352959 6476 1.37 0.76 2 27.07 7909399 6574 1.40 0.82 2 30.01
3356577 6746 1.39 0.63 4 21.58 7915824 6231 1.39 0.74 2 26.22
3448722 5872 1.13 0.41 2 12.60 7973882 5512 1.06 0.35 2 10.27
3448817 6792 1.33 0.95 4 35.78 8016369 6734 1.34 0.77 1 27.56
3459311 5789 1.05 0.98 2 37.37 8043256 6680 1.27 0.93 2 34.71
3550386 6006 1.30 0.32 2 9.10 8144578 6639 1.32 0.59 2 19.85
3836772 6210 1.32 0.69 2 23.88 8197275 5604 1.14 0.44 2 13.52
3869099 5607 1.01 0.29 2 7.94 8264155 6738 1.33 0.91 4 34.08
4175618 5369 1.05 0.41 2 12.60 8264659 5417 1.12 0.97 1 36.84
4283120 6202 1.25 0.52 2 16.71 8285970 5639 1.14 0.57 2 18.72
4374659 5824 1.03 0.23 2 5.87 8313378 6624 1.31 0.54 2 17.73
4386947 5681 1.14 0.65 2 22.10 8382253 5695 1.01 0.63 3 21.37
4464528 6392 1.38 0.22 2 5.81 8393626 5893 1.15 0.43 2 13.06
4464530 6545 1.30 0.22 2 5.77 8420730 5770 1.08 0.25 2 6.53
4570231 5661 0.99 0.54 1 17.64 8651921 6473 1.29 0.95 2 35.65
4660562 5677 0.96 0.77 1 27.56 8687209 5650 1.00 0.77 1 27.56
4762130 6202 1.35 0.80 2 28.78 8804962 6586 1.23 0.90 2 33.53
4774370 6546 1.36 0.93 2 34.85 8892124 5263 1.01 0.72 2 25.38
4816098 6239 1.29 0.95 1 35.89 8916436 6566 1.35 0.87 1 32.13
4850965 5503 1.04 0.61 2 20.40 9146690 5387 1.11 0.72 2 25.20
4949214 6511 1.36 0.92 2 34.52 9206726 6876 1.31 0.46 4 14.61
4949350 6587 1.40 0.88 2 32.37 9306290 5571 1.04 0.82 2 29.97
4949766 6587 1.39 0.81 2 29.19 9393015 5877 1.01 0.24 2 6.40
5038288 5785 0.99 0.88 2 32.51 9456932 5875 0.97 0.53 2 17.24
5107198 6077 1.36 0.36 2 10.67 9474101 5945 1.10 0.21 2 5.32
5273178 6774 1.32 0.88 2 32.65 9594038 6694 1.31 0.94 4 35.56
5397765 6251 1.34 0.94 2 35.47 9640204 6620 1.33 0.53 2 17.32
5426665 6323 1.38 0.39 2 11.80 9640472 6076 1.34 0.34 2 9.68
5444276 6475 1.31 0.71 2 24.71 9710612 5867 1.08 0.39 2 11.80
5450307 6398 1.24 0.99 3 37.85 9730249 6479 1.34 0.91 2 33.77
5480545 6535 1.31 0.93 2 35.09 9896552 6279 1.26 0.87 1 32.13
5514866 5487 0.97 0.28 2 7.66 9897710 5840 1.08 0.43 2 13.21
5514871 5220 1.06 0.28 2 7.66 9965888 5589 1.13 0.31 2 8.82
5543840 6518 1.20 0.82 2 29.69 9970838 6429 1.25 0.96 2 36.42
5623538 6729 1.32 0.99 1 37.80 10023062 6469 1.38 0.89 2 33.11
5623852 5886 1.10 0.57 2 18.89 10134084 5926 1.00 0.55 5 18.06
5629449 6897 1.31 0.71 1 24.89 10490282 5504 1.05 0.79 2 28.42
5646176 6302 1.20 0.99 1 37.80 10614890 5283 1.06 1.00 2 38.14
5795235 6517 1.36 0.91 2 34.00 10809099 6051 1.31 0.91 2 33.91
5898014 6697 1.35 0.83 2 30.20 11017401 5648 1.09 0.80 2 28.96
5988566 6299 1.20 0.44 2 13.52 11018874 6454 1.30 0.99 2 37.99
6114118 6234 1.24 0.94 2 35.32 11247377 6184 1.38 0.40 2 12.02
6114140 6384 1.16 0.93 3 35.13 11349677 6076 1.23 0.84 1 30.75
6145032 6315 1.28 0.81 1 29.37 11400413 6781 1.34 0.76 4 27.27
6149358 6660 1.28 0.89 2 32.93 11498689 5464 1.10 0.31 2 8.78
6219870 5663 1.05 0.81 1 29.37 11653059 6160 1.26 0.29 2 8.08
6224148 6230 1.18 0.20 2 5.13 11924842 5494 1.13 0.84 5 30.75
6385867 5306 1.06 0.58 1 19.30 11969131 6444 1.23 0.63 1 21.42
6386598 6658 1.37 0.76 2 27.20 12067121 6211 1.33 0.43 5 13.25
6391602 5782 0.99 0.42 2 12.83 12108612 5695 1.09 0.71 2 24.76
6421219 6191 1.36 0.79 2 28.51 12119534 5296 0.98 0.64 2 21.97
6449077 6366 1.31 0.94 2 35.51 12121738 6134 1.31 0.73 2 25.73
6529902 6604 1.38 0.29 2 8.08 12157161 6513 1.26 0.78 2 27.79
6693864 6846 1.35 0.86 1 31.67 12157799 6117 1.17 0.89 5 33.07
6836589 5628 1.15 0.73 2 25.91 12354328 5251 0.97 0.81 2 29.33
6846595 6718 1.26 0.99 1 37.80 12356839 5605 1.14 0.35 2 10.05
6854461 6547 1.39 0.95 3 36.03 12418959 6427 1.36 0.78 2 28.10
Notes: Each row lists the Kepler ID, effective temperature T_ eff (K), radius R (in R_⊙), rotation period P_ rot (d), the reference, and the predicted activity cycle P_ cyc (d); two such entries are given per line. The effective temperature T_ eff and radius were taken from Gaia DR2.
References: (1) <cit.>, (2) <cit.>, (3) <cit.>, (4) <cit.>, (5) <cit.>.
Having predicted the values of the activity cycles for our data sample, which is extended compared to that of <cit.>, we next wish to examine the theoretical prediction given by Equation 2 for short cycles, P_ cyc < 1 yr.
This is because the latter equation is a theoretical prediction based on first physical principles,
as opposed to an empirical fit, which lacks any theoretical or conceptual justification.
We therefore focused on the activity cycles derived from previous studies, as presented in Table 1. We chose 20 stars whose P_ cyc is less than a year and fitted the relation between P_ rot and P_ cyc, shown in Figure <ref>, using a simple linear regression without an intercept, given by
P_ cyc [ yr]= n P_ rot [ d].
We obtained the slope n= 0.081 ± 0.009 with an R^2 value of 0.997, which indicates a good fit, despite the large scatter.
Note that P_ cyc here is in years, as in Figure 14 from <cit.>.
Therefore, for the lower and upper bounds of our
138 Kepler IDs, with P_ rot ranging from 0.202 d to 0.997 d,
this simple theoretically justified relation predicts
P_ cyc=0.081×0.202×365.25=5.98 d and 0.081×0.997×365.25=29.50 d,
which are not very different from the values of
5.13 d and 38.14 d obtained with the more accurate power-law fit of equation <ref>, respectively.
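For reference, these predictions can be reproduced with a few lines of code. The following minimal Python sketch (illustrative only, not the code used in this work) evaluates both the power-law fit of equation <ref> and the linear, intercept-free relation, using the central values of the coefficients quoted above:

import numpy as np

A, N = 1.583, 1.257        # power-law fit: log10(P_cyc [d]) = A + N * log10(P_rot [d])
SLOPE_YR_PER_DAY = 0.081   # linear fit without intercept, P_cyc in years

def p_cyc_powerlaw(p_rot_days):
    # predicted activity cycle in days from the power-law fit
    return 10.0 ** (A + N * np.log10(p_rot_days))

def p_cyc_linear(p_rot_days):
    # predicted activity cycle in days from the linear, intercept-free fit
    return SLOPE_YR_PER_DAY * p_rot_days * 365.25

for p_rot in (0.202, 0.997):
    print(p_rot, round(p_cyc_powerlaw(p_rot), 2), round(p_cyc_linear(p_rot), 2))

For P_ rot = 0.202 d and 0.997 d this returns approximately 5.13 d and 38.14 d from the power-law fit, and 5.98 d and 29.50 d from the linear relation, matching the values quoted above.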
Finally, we examine the convective turnover time, τ_c, as a function of the
B-V colour index, in analogy with Figure 3 from <cit.>.
In general, direct measurements of convective turnover time are not possible. However, its estimation is possible by analysing stars' rotation and activity data.
As pointed out by <cit.>, scaling the rotation periods with a colour- or mass-dependent τ_c can reduce the scatter in the relation between rotation and activity, leading to a broken power-law fit between activity and the Rossby number, as e.g. in <cit.>.
<cit.> present a comprehensive study of the convective turnover time, τ_c, and its dependence on the metallicity and age of main-sequence stars with masses between 0.6 and 1.6 M_⊙. They also
remark that there is substantial variation between the different models:
e.g. <cit.>, using chromospheric and coronal data, obtained a significantly flatter curve for B-V > 0.8 than the widely used <cit.> (see figure 4 from <cit.>).
We plot convective turnover time, τ_c, vs.
B-V colour index in figure <ref>.
Figure <ref> uses the following expressions
for the dependence of the convective turnover time τ_c on the B-V colour index, as derived from <cit.>:
logτ_c = (1.06±0.07) + (2.33±0.37) ((B-V) - 0.44)
for 0.44 ≤ B - V ≤ 0.71, while for B - V > 0.71
logτ_c = (1.69±0.12) + (0.69±0.13) ((B-V) - 0.71).
As can be seen in Figure <ref>, our range of B-V colour is larger compared to the data from <cit.>.
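For quick reference, the piecewise relation above can be evaluated with a short Python sketch (illustrative only; the coefficients are the central values quoted above, and the first branch is meant for 0.44 ≤ B-V ≤ 0.71):

import numpy as np

def log_tau_c(b_minus_v):
    # log10 of the convective turnover time tau_c as a function of B-V,
    # using the central values of the piecewise fit quoted above
    b_minus_v = np.asarray(b_minus_v, dtype=float)
    return np.where(
        b_minus_v <= 0.71,
        1.06 + 2.33 * (b_minus_v - 0.44),   # 0.44 <= B-V <= 0.71
        1.69 + 0.69 * (b_minus_v - 0.71),   # B-V > 0.71
    )

print(10 ** log_tau_c([0.5, 0.71, 1.0]))    # tau_c for a few sample colours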
§ CONCLUSIONS
In this work, we studied the empirical relation between
the stellar activity cycle and the rotation period.
First, we reproduced the fit between P_ rot and P_ cyc using <cit.> data
and obtained the following fit parameters
log P_ cyc≈ (1.458 ± 0.074) + (1.348 ± 0.054) log P_ rot,
which are slightly different from the values
a= 1.488 ± 0.092 and n= 1.324 ± 0.067 of <cit.>, for
reasons unknown to us.
Then, using a larger data set made up of 94 P_ rot values and their 94 associated P_ cyc values taken from prior studies, we re-examined the fit between P_ rot and P_ cyc and obtained the following fit parameters
log P_ cyc≈ (1.583 ± 0.064)+(1.257 ± 0.051) log P_ rot.
Using these new parameters, we applied this relation to a sample of 83 F-type and 55 G-type main-sequence stars with rotation periods of less than one day, in order to provide tabular predictions for cases with very short activity cycles,
so that future observations can determine whether or not such short activity cycles are a common occurrence in these stars.
As a result we derived 138 predicted P_ cyc ranging from 5.13 d to 38.14 d, which are listed in Table <ref>.
Measuring short stellar activity cycles
faces two main general difficulties:
(i) If a monitoring program of stellar activity (e.g. the activity-related chromospheric emission S-index or similar) is used,
as in references such as <cit.> or <cit.>, then the cadence of the observations is too long:
e.g., according to table 2 from the latter reference, the cadence could be 87 observations per year, i.e. 365/87 ≈ 4 days. Resolving activity cycles with 5.13≤ P_ cyc≤ 38.14 d with such a cadence would be nearly impossible.
(ii) If Kepler light curves are used, e.g. for plotting the number of flares per day versus time, then a large number of flare detections is necessary to obtain reliable statistics. The problem, however, is the long cadence, 30 minutes, of the mainstream Kepler data. The photometer used by Kepler is sensitive to wavelengths ranging from 400 to 865 nm, covering the entire visible spectrum and a fraction of the infrared. The accuracy of the Kepler photometer is approximately 0.01%, or 0.1 mmag, when 30-minute integration times are used for stars with a magnitude of 12. Kepler's 30-minute integrations detected flare amplitudes of less than 0.1% of the stellar value and energies of 2×10^33 ergs. The duration of the flares ranged from one to three hours, with a rapid increase followed by a slow, exponential decline <cit.>. When Kepler data are taken at a higher cadence or sampling rate of one minute, the accuracy of the measurements decreases. However, this higher cadence enables Kepler to detect flares that are too brief to be detected reliably using the main 30-minute integrations. With the one-minute cadence, Kepler can detect flares with energies as low as 10^32 ergs <cit.>.
It is worth noting that earlier studies exist, based on different observations, where the energy involved in the observed transient brightening is estimated to range from 10^25 to 10^29 erg <cit.>. Also, as far as the Sun is concerned, studies exist <cit.> which consider the flare frequency as a function of flare energy in the range 10^27 to 10^31 erg, but this is applicable to the Sun only.
In order to have good statistics for the Kepler IDs considered, we need to detect flares with energies of 10^27-10^32 ergs, so as to see the variation in the number of flares per day on a time scale of 5.13≤ P_ cyc≤ 38.14 d.
To achieve this goal, a new space mission is necessary, with a short cadence (< 1 minute) and a photometric accuracy < 0.01%.
A typical example of such proposed sample data from this kind of space mission is shown in figure <ref>.
An alternative option could be a ground-based S-index monitoring program of stellar activity with a shorter cadence of ≈ 1 d or less. However, it is unclear
whether this is technically feasible.
In any case, the present study provides predictions in the range 5.13≤ P_ cyc≤ 38.14 d, and
we hope that future space- or ground-based observational missions will put our predictions to the test.
Until such time, the jury is still out.
§ ACKNOWLEDGEMENTS
Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts.
The authors would like to thank Deborah Kenny of STScI for her kind assistance in obtaining the data, and Cozmin Timis and Alex Owen of Queen Mary University of London for their assistance with data handling at the Astronomy Unit.
A. K. Althukair wishes to thank Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia and
the Royal Embassy of Saudi Arabia Cultural Bureau in London, UK for the financial support of her PhD scholarship, held at Queen Mary University of London.
§ DATA AVAILABILITY
Some of the data underlying this article were accessed from Mikulski Archive for Space Telescopes (MAST) <https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html>. This paper also has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. The derived data generated in this research will be shared on reasonable request to the corresponding author.
http://arxiv.org/abs/2307.06147v1 | 20230712130538 | The dynamics of higher-order novelties | Gabriele Di Bona, Alessandro Bellina, Giordano De Marzo, Angelo Petralia, Iacopo Iacopini, Vito Latora | physics.soc-ph | physics.soc-ph, cs.SI
§ INTRODUCTION
As humans, we experience novelties as part of our daily life. By the term novelty we generally indicate two apparently different things <cit.>.
On the one hand, we can think of a novelty as the first time we visit a neighborhood, enter a newly launched pub, or listen to a song from an artist we previously did not know. In this case, the novelty represents a discovery, for a single individual, of a place, an artist or, more in general, an item. On the other hand, there are discoveries that are new to the entire population, such as a technological advancement or the development of a new drug. However, these two cases are not entirely distinct, as the second set of novelties, those new to everyone, is just a subset of the first one.
Analysing how novelties emerge, both at the individual level and at the level of the entire population, is key to understanding human creativity and the neural and social mechanisms that can lead to new discoveries and innovation.
The increasing availability of data on human behavior and consumption habits has made it possible to study how humans explore the world, how novelties emerge in different contexts, and how they are distributed in time <cit.>. Empirical investigations cover a broad range of different areas <cit.>, ranging from science <cit.> and language <cit.>, to gastronomy <cit.>, goods and products <cit.>, network science <cit.>, information <cit.>, and cinema <cit.>.
No matter the topic, one can always represent data coming from real-world exploration processes as sequences of items that are sequentially adopted or consumed <cit.>.
In this way, the activity of a user of, for example, an online digital music platform is turned into a sequence of listened songs, and a novelty is defined as the first time a song, or an artist, appears in the sequence <cit.>.
Analogously, articles published in a scientific journal can be turned into a time-ordered sequence of concepts or keywords discovered by the community, and a novelty can be defined, again, as the first-time appearance of a keyword <cit.>.
Under this framework, evidence shows that, independently of the system they belong to, novelties
seem to obey the same statistical patterns in the way they are distributed and correlated in time <cit.>. In particular, most empirical sequences follow Heaps' <cit.>, Zipf's <cit.>, and Taylor's laws <cit.>.
Along with data-driven investigations, a relevant scientific problem is that of finding plausible mechanisms to reproduce and explain the empirical observations. What are the rules controlling the appearance of new items in a sequence? How do humans explore the seemingly infinite space of possibilities in search of novelties? Interestingly, an insightful answer comes from biology, when, in 1996, Stuart Kauffman introduced the concept of the adjacent possible <cit.> (AP)—“all those molecular species that are not members of the actual, but are one reaction step away from the actual". Inspired by previous works by Packard and Langton <cit.>, the AP provides a fresh view on the problem, for which discoveries (the possible) can only be found among those items which are close (the adjacent) to what is already known (the actual). New discoveries would then generate an expanding space of opportunities that are only available to us in the moment we “unlock” what is adjacent to them.
Kauffman's AP has seen many interesting applications ranging from biology <cit.> and economics <cit.> to models of discovery and innovation processes. Among these, of particular interest is the recently proposed Urn Model with Triggering (UMT) <cit.>. Building upon the work of Pólya <cit.>, the UMT adds to the traditional reinforcement mechanism of the Pólya urn's scheme a triggering mechanism that expands the space of possible discoveries upon the extraction of each novelty. Being able to reproduce the empirical laws,
the UMT has been used to study the rise and fall of popularity in technological and artistic productions <cit.>, the emergence and evolution of social networks <cit.>, and the evolution of the cryptocurrency ecosystem <cit.>.
The AP accounts for the emergence of the new starting from the “edge of what is known”. In this view, one could also picture ideas, concepts, or items as the linked elements of an abstract network. Within this framework,
the way we explore the world based on the association of different concepts can be modelled as a random walk over this network.
Approaches based on random walks have been used to investigate the cognitive growth of knowledge in scientific disciplines <cit.>, and further extended to account for multi-agent systems, where the individual exploration of the agent is enriched by social interactions <cit.>.
The idea of the adjacent possible, which can be modelled either in terms of extractions from urns or random walks over a network, is of great importance to understand the processes leading to innovation.
There is, however, another important mechanism of creation of the new which is neglected by the frameworks discussed above: novelties can arise from the combination of already-known elements.
For instance, a meaningless sequence of words, if ordered in a different way, may generate elegant poetry. Novel combinations of existing hashtags may lead to new social-media trends. Different orderings of the same musical notes may in principle generate an endless number of songs.
The mechanics of combination and association of “pre-existing” items has been studied in various fields, e.g. in biology, where combinations are the keys to produce new entities and organisms. For instance, it has been shown that the immune system recombines existing segments of genes to produce new receptors <cit.>.
Publications and collaborations in science <cit.> are typically combinations of research ideas <cit.> and different types of expertise <cit.>.
Similarly, in innovation economics, as originally discussed by Schumpeter <cit.> and confirmed by recent works on the generation of technologies <cit.>, new associations of existing factors may give rise to innovations, which rule out of the market obsolete products and services <cit.>, thus increasing the probability of reaching further novelties and innovation (the so-called “creative destruction”).
The aim of this paper is to explore a more general notion of novelty, defined as a
novel combination of existing elements.
We thus investigate the dynamics of “higher-order” novelties, i.e., novel combinations of pairs, triplets, etc., of consecutive items in a sequence.
In particular, we focus on the Heaps' law, which describes the growth in the number of novelties as a power-law, whose exponent is a proxy for the rate of discovery <cit.> in a system.
Namely, we introduce higher-order Heaps' laws to characterize the rate at which novel combinations of two or more elements appear in a sequence.
We then analyse various types of empirical sequences ranging from music listening records, to words in texts, and concepts in scientific articles, finding that Heaps' laws also holds at higher orders.
We discover that individual processes with the same rate of discovery of single items can instead display different rates of discovery at higher orders, and can hence be differentiated in this way.
We therefore propose a new model which is capable of reproducing all these empirically observed features of higher-order Heaps' laws.
In our model the process of exploration is described as an edge-reinforced random walk with triggering (ERRWT). In this framework,
the novelties at different orders
(nodes and links visited for the first time by the walker)
shape the explored network by reinforcing traversed links while, at the same time, triggering the expansion of the adjacent possible. This expansion can happen whenever a node is visited for the first time, making other nodes accessible to the explorer, but also whenever a link is used for the first time. In this case, the newly established connection will trigger novel combinations between previously explored nodes. By fitting the contributions of the two mechanisms of reinforcement and triggering, the ERRWT is able to reproduce well the variety of scaling exponents found in real systems for the Heaps' laws at different orders.
§ RESULTS
§.§ Higher-order Heaps' laws
An exploration process can be represented as an ordered set of T symbols 𝒮 = {a_1, a_2, …, a_T }.
Such a set describes the sequence of “events” or “items” produced along the journey, e.g. the songs listened by a given individual over time, the list of hashtags posted on an online social network, the list of words in a text, or any other ordered list of items or ideas generated by single individuals or social groups <cit.>.
Similarly, in the context of some recent modelling schemes of discovery, 𝒮 can represent the balls extracted from an urn <cit.>, or the nodes visited over time by a random walker moving over a network <cit.>.
Although real-world events have an associated time, here, for simplicity, we focus only on their sequence, i.e. the relative temporal order of the events, neglecting the precise time at which they happen.
For instance, if a person listens to song a_1 at time t_1, song a_2 at time t_2, song a_i at time t_i, and so on, with t_1< t_2< … < t_i < …, we neglect these times and only retain the order of the songs in the sequence
{a_1, a_2, …, a_T }.
In other words, we assume that a_1 is associated to the discrete time t=1, a_2 is associated to time t=2, and so forth.
Among the different ways to characterize the discovery rate of a given process, the Heaps' law, D(t) ∼ t^β, describes the power-law growth of the number of novelties as a function of time, i.e., how the number D(t) of novel elements in the sequence 𝒮 scales with the sequence length t <cit.>. The so-called (standard) Heaps' exponent β, which from now on we indicate as the 1st-order Heaps' exponent β_1, is thus a measure of the pace of discovery of the process that generated the considered sequence.
Given that the number of different elements D(t) is smaller than (or equal to) the total length t of the sequence, the value of β_1 is always bounded in the interval [0,1], with the extreme case β_1=1 reached by a process that generates new elements at a linear rate.
Here, we propose to go one step beyond and look at novelties as novel pairs, triples, and higher-order combinations of consecutive symbols in a sequence <cit.>.
For instance, when exploring a network, a novel pair is represented by the first visit of a link.
In order to measure the pace of discovery of these higher-order compounds starting from a sequence of events 𝒮, we first create
the surrogate sequence of overlapping pairs 𝒮_2 = {(a_1, a_2), (a_2, a_3), …, (a_T-1,a_T) }.
Considering for example the sentence “One ring to rule them all”, from the sequence of events 𝒮 = {one, ring, to, rule, them, all } we obtain the sequence of overlapping pairs 𝒮_2 = {(one, ring), (ring, to), (to, rule), (rule, them), (them, all)}. From 𝒮_2 we can then compute the number D_2(t) of different pairs among the first t ones, with t ≤ T-1.
Notice that, in this manuscript, we consider the pairs (one, ring) and (ring, one) as two different pairs, i.e., order matters.
By construction, we always have D_1(t) ≤ D_2(t) ≤ t, since, on the one hand, for each new element added to 𝒮 there is a new pair in 𝒮_2, and, on the other hand, there cannot be more than t different pairs among t items.
From the power-law scaling D_2(t) ∼ t^β_2, we can then extract the value of β_2, which we refer to as the 2nd-order Heaps' exponent.
This definition can be naturally extended to any order n, considering the sequence 𝒮_n of consecutive overlapping n-tuples present in 𝒮.
Notice that, if |𝒮| = T, then |𝒮_n| = T - n + 1. We can hence compute the number D_n(t) of different tuples among the first t tuples in 𝒮_n, and extract the nth-order Heaps' exponent β_n ∈ [0,1] from D_n(t) ∼ t^β_n.
Notice also that the nth-order Heaps' exponent can be also interpreted as the first order Heaps' exponent of a sequence whose events are the overlapping n-tuples of the original sequence.
Finally, it is worth remarking that such an approach is close to the analysis of Zipf's law in linguistic data for n-grams or sentences <cit.>. In this context, studies showed that, as one moves from graphemes to words, sentences, and n-grams, the Zipf's exponent (the reciprocal of the Heaps' exponent for infinitely long sequences <cit.>) gradually diminishes. This implies that n-grams or sentences are characterized by a larger novelty rate than words, a behavior analogous to what we have discussed above.
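For concreteness, the curves D_n(t) can be computed directly from a sequence. The following minimal Python sketch (illustrative, not the code used for the analysis) counts the distinct overlapping n-tuples along a sequence:

def higher_order_novelties(sequence, n=1):
    # Return the list D_n(1), ..., D_n(T-n+1), where D_n(t) is the number of
    # distinct overlapping n-tuples among the first t tuples; n=1 gives D(t).
    seen = set()
    counts = []
    for i in range(len(sequence) - n + 1):
        seen.add(tuple(sequence[i:i + n]))   # overlapping n-tuple starting at position i
        counts.append(len(seen))
    return counts

sentence = ["one", "ring", "to", "rule", "them", "all"]
print(higher_order_novelties(sentence, n=1))   # [1, 2, 3, 4, 5, 6]
print(higher_order_novelties(sentence, n=2))   # [1, 2, 3, 4, 5]

For this example all words and all consecutive pairs are distinct, so both curves grow linearly; in long empirical sequences the growth is sublinear, and its slope in log-log scale gives β_n.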
§.§ Analysis of real-world data sequences
We start investigating the emergence of novelties of different orders in empirical exploration processes associated to three different data sets.
These data sets are substantially different in nature, since they refer, respectively, to songs listened by users of Last.fm, words in books collected in the Project Gutenberg, and words of titles of scientific journals from Semantic Scholar (more details on the data can be found in Materials and Methods).
In Fig. <ref>(a-c) we plot the average temporal evolution of the number D_n(t) of novelties of order n, with n=1, 2, 3, in the three datasets (from left to right, respectively, Last.fm, Project Gutenberg, Semantic Scholar).
In order to avoid spurious effects due to different lengths of the sequences, we restrict the averages to the sequences of length T greater than the median length T̃ in the corresponding data set (see Fig. <ref> in the Supplementary Information (SI) for their distribution). Each continuous curve, plotted up to time T̃, is obtained by averaging D_n(t) over all such sequences, while the shaded area represents one standard deviation above and below the mean.
We also perform power-law fits (see Materials and Methods for details on the procedure), and plot the resulting curves as dashed lines.
Focusing first on the broadly-studied (1st-order) Heaps' law, notice how the power-law fit is only accurate in the last part of the sequence.
This highlights that the Heaps' law sets in after a transient phase, where most of the events are new for the individual, as also reported in Ref. <cit.> and, similarly, in other contexts <cit.>.
Secondly, notice how the nth-order Heaps' law, with n=2,3, is valid across the data sets, but with different values of the fitted exponents, especially for n=2.
Finally, as expected from their definition, the fitted Heaps' exponents of order n+1, i.e., β_n+1, are higher than the lower-order ones, that is, β_n+1≥β_n.
To explore the gain in information brought by the higher-order Heaps' exponents with respect to the 1st-order Heaps', we now look directly at individual sequences. Figure <ref>(d-i) shows the scatter plots of β_2 (d-f) and β_3 (g-i) against β_1, where each point refers to a single sequence from Last.fm (d,g), Project Gutenberg (e,h) or Semantic Scholar (f,i), with colors representing the density of points (see color bar at the bottom of the figure).
Here, we have only considered sequences whose fitted exponent has a standard error below the 0.05 threshold (see Table <ref> in SI for more details).
This filtering removes 30 (3.37%), 8 (0.04%), and 5 (0.03%) sequences in the three datasets, respectively.
This shows that, in almost all cases, we can consider the Heaps' law assumption to be valid.
Looking at the plots, we notice that some cases have a higher density of points compared to others.
For example, in (d), we see how users of Last.fm sharing the same value of β_1 can have very different values of β_2.
Conversely, the other two data sets present stronger correlation between β_2 and β_1.
To quantitatively characterize this, we fit a linear model with an ordinary least squares method, displayed in each plot as a red dotted line. In the legend we also report the value of the related coefficient of determination R^2, which represents the percentage of variance of the dependent variable explained by the linear fit with the independent variable.
For users of Last.fm, at both orders n=2 and 3, we quantitatively confirm that points are much more spread around the linear fit, since the values of R^2 are very low, between 0.11 and 0.16.
In the other two data sets there is instead a higher correlation between β_1 and both β_2 (R^2 around 0.70) and β_3 (R^2 around 0.35).
Moreover, the values of the parameters of the linear fit greatly change across datasets and orders.
In particular, in (d) there is a much lower slope and higher intercept compared to the other data sets for the same order in (e-f).
Furthermore, we notice how, for each data set, the higher the order, the lower the fitted slope and the higher the intercept of the linear model.
Finally, on an aggregate level, we observe that at all orders the distributions of the Heaps' exponents are very different across data sets (see Fig. <ref> in SI for a comparative figure; further statistical information on the distribution of the Heaps' exponents can be found in Table <ref> in SI).
The exponents are more spread in Last.fm, which also shows a higher average of β_1 and β_2, but a lower one for β_3 compared to the other data sets.
Distributions for Project Gutenberg and Semantic Scholar, which are both related to linguistic data, are more peaked —at higher values for the latter dataset.
This could be the result of how titles of scientific papers are written with respect to books or poems, that is, concentrating the whole message of a scientific work in a few significant words, avoiding stop-words and repetition. In addition, scientific advancements tend to favor the combinations of previously existing scientific concepts to form new ones, while the same does not apply to non-scientific literature in general, where instead similar constructions tend to be repeated across the piece.
Finally, similar results are obtained also for more coarse-grained sequences generated using artists and stemmed words instead of songs and words (see Fig <ref> in SI).
§.§ Analysis of existing models
After studying higher-order Heaps' laws in real data, we check whether the observed patterns can be also reproduced by the available models for discovery processes.
We start from the Urn Model with Triggering (UMT), where a sequence of events is generated by draws of coloured balls from an urn <cit.>, different colours corresponding to different events/items being discovered/adopted and so on. In the UMT, for each extracted ball, the corresponding color is reinforced by adding ρ additional balls, of the same color, to the urn. At the same time, whenever a novel color is drawn, the discovery triggers the addition of ν+1 balls of new different colors to the urn (see detailed model definition in Materials and Methods).
Previous studies have shown that the 1st-order Heaps' law is verified in sequences obtained with the UMT <cit.>. In particular, the number of novelties in the model grows asymptotically as D_1(t)∼ t^ν/ρ when ν<ρ, while a linear behaviour is found in the other cases. We hence focus on the most interesting case, that is for ν≤ρ, studying how variations of the two parameters ρ and ν, respectively representing the reinforcement and the increase in size of the adjacent possible, affect the Heaps' law at various orders.
Since the pace of discovery effectively depends only on the fraction ν/ρ, we fix ρ = 20 and numerically simulate the UMT with ν = 1, 2, 3,…, 20 for T=10^5 time-steps.
For each set of parameters we run 100 simulations, generating a total of 2 × 10^3 synthetic sequences.
Then, for each generated sequence, we compute the temporal evolution of the number of novelties D_n(t), and estimate a power-law fit, extracting the related nth-order Heaps' exponent β_n.
In Fig. <ref>(a), we show how the extracted values of β_2 change with respect to β_1 across simulations.
The color represents the value of the parameter ν, as shown in the color bar.
We observe that, although the exponents are distributed all across the interval (0,1), the points (β_1,β_2) are just above the bisector (gray dashed line).
Moreover, for a certain value of β_1, the model produces very similar values of β_2 that do not vary much.
We can derive an analytical approximation of the higher-order Heaps' exponents for this model.
As we show in Sec. <ref> of the SI, for the UMT the number of unique pairs grows as
D_2(t) ≈ a t^β_2, with β_2 = β_1 + c/(d+log(t)) ,
where a, c, d>0 depend on the parameters ρ and ν, and β_1 = ν/ρ.
Although the predicted 2nd-order exponent is slightly higher than the 1st-order one, their difference just depends on the sequence length, and vanishes at larger times.
In other words, the increased value of the higher-order Heaps' exponent is only due to a finite-time effect, and the UMT struggles to reproduce the empirical patterns discussed in Fig. <ref>.
We repeat the analysis for the Urn Model with Semantic Triggering (UMST) <cit.> and the Edge-Reinforced Random Walk (ERRW) <cit.>, which have also been proved to generate discovery sequences obeying to the Heaps' law.
These models share the same foundations of the UMT, but with some crucial differences. The UMST builds on top of the UMT introducing also semantic groups for colors (topic common to different items). This addition effectively diminishes the probability to draw colors outside of the semantic group of the last extracted color by a factor η.
The ERRW is formulated as a network exploration rather than a process of extractions from an urn. Instead of a sequence of extracted balls, the ERRW features a set of nodes sequentially visited by a random walker over a weighted network, where the weights of visited edges are reinforced at each time step by ρ.
We simulate the UMST with parameters η=0.1, ρ=4, ν = 1, 2, …, 20, while the ERRW runs over a small-world network (with average degree ⟨ k ⟩ = 4 and rewiring probability p = 0.1), with edge-reinforcement ρ ranging from 0.1 to 10.
Similarly to the exploration of the UMT, we perform 100 simulations for each set of parameters and report the results in Fig. <ref>(b-c).
For both UMST and ERRW, we find that the values of β_2 do not differ much from their corresponding value of β_1—as shown by the great proximity of the points (β_1, β_2) to the bisector.
This means that also these models fail to reproduce the empirical variability of higher-order Heaps' exponents with respect to the 1st-order one.
Moreover, we notice in (b) that for the UMST we only obtain exponents with either very low (up to 0.4) or very high (close to 1) values.
It thus seems that there is an abrupt transition between the two cases, with the model unable to cover the values in-between. This is instead a crucial range when we are confronted with the empirical values reported in Fig. <ref> (see also the relation with the analytical results in Fig. <ref> in SI).
Overall, with the analyses above we have just shown that while the existing models for discovery and innovation dynamics are able to reproduce the empirically observed pace of discovery of new items —as singletons—, they systematically fail when it comes to capturing the distributions of the Heaps' exponents of higher order and their correlations.
§.§ A model for higher-order Heaps' laws
We now introduce a model that generates synthetic sequences displaying different Heaps' exponents at various orders.
As for the previously discussed ERRW, our novel model is formulated using a network framework in which: (i) the items to be explored correspond to the nodes of the network; (ii) links between nodes represent semantic associations between items that one can use to move from one to another; (iii) the exploration process is modelled as a random walk over the network, and the exploration sequence is given by the list of visited nodes.
Under these assumptions, the first visit of a node corresponds to a 1st-order novelty, while a 2nd-order novelty refers to the first exploration of a link. This definition can be trivially extended to higher orders, but in this manuscript, for simplicity, we limit our attention to the first two orders.
The ERRW proposed in Ref. <cit.> consists of a walker exploring a static network with a fixed topology, whose movements modify only the weights of the links. By contrast, in our model the network structure (not just the weights) co-evolves over time together with the exploration process such that new links can be triggered. Thus, blending together the ERRW and the UMT <cit.>, we call the model Edge-Reinforced Random Walk with
Triggering (ERRWT). More specifically, the model is based on two different triggering mechanisms that add new edges and new nodes every time a novelty appears.
As per the UMT and the ERRW, exploring a node for the first time triggers the expansion of the adjacent possible, as new nodes become now accessible. For example, the invention of the transistor made it possible to create mobile phones, among other things.
Concerning the triggering of new edges, the idea is that whenever two elements are associated for the first time, new possible combinations involving one of these elements are then triggered. For instance, once a camera and a mobile phone were firstly combined, this made clear that many more functions could be added to the latter, e.g., a music player, a game console, a GPS, etc.
The basic mechanisms of the ERRWT model are illustrated in Fig. <ref>. Suppose that, at a given time t, the walker is at node i of a network composed of some already visited nodes and links (filled nodes and continuous lines), and some others that belong to the adjacent possible (unfilled nodes and dashed lines). This is the starting point of Fig. <ref>(a). In Fig. <ref>(b), the walker crosses an already explored link, and its weight gets reinforced by a term ρ, meaning that the association of the two nodes becomes more likely. This is the same reinforcement process of the ERRW in Ref. <cit.>.
If the edge is instead traversed for the first time, along with the edge reinforcement the process also triggers the creation of new edges. In particular, as displayed in Fig. <ref>(c), ν_2+1 new edges connecting the second node of the traversed link to other already-visited nodes are created. Finally, analogously to the triggering mechanism of the UMT, whenever a node is visited for the first time, it triggers the expansion of the node's adjacent possible, with ν_1+1 new nodes added to the network and connected to the node itself (Fig. <ref>(d)).
Note that this also triggers the creation of other ν_2+1 new links to already known elements, since whenever a node is explored for the first time, also the link leading to it is explored for the first time.
More details about the ERRWT can be found in Materials and Methods.
Balancing edge reinforcement and the node and edge triggering through the parameters ρ, ν_1 and ν_2, it is possible to control the pace of discovery of new nodes and edges, and consequently the exponents of the 1st-order and the 2nd-order Heaps' law associated to the sequences produced by the model.
To systematically explore this, we simulate the ERRWT model with parameters ρ=10, ν_1 = 0, 1, …, 20, and ν_2 = 0, 1, …, 2ν_1, running 100 simulations for each set of parameters.
Higher values of ν_2 have not been considered since they produce the same exponents as those for ν_2=2ν_1.
Fig. <ref>(a) reports the increase in the number of
1st-order and 2nd-order novelties (continuous lines). The power-law fits (dashed lines) highlight that the Heaps' law is verified at higher-orders too, leading to an increase of the exponents values (from β_1=0.56 to β_2=0.87) as we increase the order. The relationship between the different orders is explored in Fig. <ref>(b), where we show the scatter plot between the 1st- and 2nd-order Heaps' exponent. Each point refers to a different simulation, and we use the color to indicate the value of the used parameter ν_1 (see color bar).
We notice that the ERRWT produces a wide range of exponents at both orders, which are no more trivially correlated as for previous models.
This is even more clear when we look at Fig. <ref>(c), where Heaps' exponents are averaged across simulations for each set of parameters: each trajectory relates to a different value of ν_1, with ν_1 increasing from 1 to 20 from bottom left to top right of the panel. The color represents instead the variation of the parameter ν_2 from 0 to 2ν_1.
For reference, we also flag using a red dot the pair of exponents related to the parameters used in Fig. <ref>(a).
We can immediately notice how the 1st- and 2nd-order Heaps' exponents increase as ν_1 becomes larger. More interestingly, we can investigate the interplay with ν_2: given a single trajectory, by increasing ν_2 the difference between β_1 and β_2 becomes larger, and the point (β_1, β_2) moves away from the bisector —in a way that depends on the specific value of ν_1. In particular, for low values of ν_1, the trajectories are almost vertical, with only β_2 increasing. Instead, for higher values of ν_1, especially when ν_1 ≥ρ, an increase of ν_2 produces a decrease of β_1, while the value of β_2, which is close to its upper bound value 1, does not change.
It is also possible to perform an analytical investigation of a simplified version of the ERRWT model, which leads to results which are in agreement (see Sec. <ref> in SI). In particular, for such a model, we can prove that the values of the asymptotic Heaps' exponents β_1 and β_2 depend on the two ratios ν_1/ρ and ν_2/ρ. Moreover, we find that, for ν_1/ρ > 1, the 2nd-order Heaps' exponent is asymptotically equal to 1, while the 1st-order one depends on ν_1/ν_2, in agreement with our numerical results.
Finally, the exponents are asymptotically bounded by β_1 ≤β_2 ≤ 2β_1, as also shown in the simulations in Fig. <ref>(c).
This also explains why the exponents do not change when we increase ν_2 above 2ν_1.
§.§ Comparison between ERRWT and real-world data
To show that the ERRWT model is able to reproduce the properties observed in real-world processes, we now fit it to the three data sets analyzed (Last.fm, Project Gutenberg and Semantic Scholar).
Given an empirical sequence and its pair of 1st- and 2nd-order Heaps exponents (β_1, β_2),
we compute the Euclidean distance between the pair (β_1, β_2) and each of the pairs of exponents (β_1', β_2') obtained by simulating the ERRWT model using the sets of parameters considered in the previous section. We then select the best model parameters by minimizing the average distance over 100 simulations for each set, and repeat the procedure for all the sequences of the three data sets.
Figure <ref>(a) shows the probability density distribution of the distances between the empirical sequences and the simulations of the best-performing ERRWT model.
Notice how these distances are almost all below 0.1, that is, below the uncertainty we expect on the values of the parameters. In fact, since ν_1 and ν_2 are integers and ρ = 10, the maximum precision we can gain on the estimate of the best parameters is 1/ρ = 0.1.
The percentage of sequences with higher distance than this threshold is 7.67%, 0.73%, and 0.05% for Last.fm, Project Gutenberg, and Semantic Scholar, respectively.
The scatter plots of the best-fitted parameters ν_1 and ν_2 for the three data sets are shown in Fig. <ref>(b-d). The colors here indicate the number of empirical sequences which are best represented by each pair of parameters.
We notice that most of the sequences of Last.fm are characterized by relatively large values of ν_1. Since ν_1 is related to the triggering of new nodes, this result indicates that the discovery of a new song exposes the user to a large variety of related songs, which were previously not accessible and can now be discovered.
Conversely, the parameter ν_2, which refers to the triggering of new edges between already existing items, takes values in a larger range, predominantly skewed towards the lower end.
This suggests that, once a new association of two songs is established by a user, there is a high probability that the same association will be repeated over and over.
Consequently, the user will preferably listen to songs in a similar order, instead of creating new associations.
In the case of Project Gutenberg, most sequences have ν_2 > ν_1.
This implies that writers tend to frequently generate new word associations, highlighting the incredible variety of expressions we can make combining a limited set of words.
Finally, Semantic Scholar exhibits values of ν_1 and ν_2 similar to Project Gutenberg.
However, some sequences of Semantic Scholar have a relatively high value of ν_1 with respect to ν_2.
This is an indication that, when choosing words for titles, authors tend to use more original words, while the pace of creation of new word associations remains similar.
§ DISCUSSION
The ubiquitous appearance of the Heaps' law in various contexts has recently allowed the measurement of the pace at which discoveries occur <cit.>.
However, there is more and more evidence that discoveries are often made from the combination of different elements together <cit.>.
In this manuscript we have hence introduced the higher-order Heaps' exponents as a measure for the pace of new combinations realised in a system.
In particular, we regard a novelty not only as the discovery of new items, but also as the first appearance of a new combination of different items.
Notice how this measure differs from other measures for the pace of discovery that have been developed in the last years. For example, in Refs. <cit.>, the authors have used the number of all possible valid combinations that can be created using the elements so far acquired as a proxy of the level of innovation reached by the system. However, this does not take into account the actual number of novelties realised in the system, but rather their potential.
As we have seen in empirical data, higher-order Heaps' exponents can be used to distinguish users listening to music in Last.fm who feature a similar discovery rate of new songs and artists. The higher-order Heaps' exponent can indeed tell apart different ways to explore the same set of songs in terms of number of different consecutive pairs or higher-order structures explored.
Analogously, we notice different patterns in texts of various nature by studying their Heaps' exponents at various orders: titles of peer-reviewed papers published in scientific journals show more creative juxtaposition of words with respect to the text of narrative books, encountering many more new n-grams, even if the total set of words used is similar in length.
Overall, our analysis shows that the space of possibilities grows in more complex ways, which does not depend solely on the balance between old items to exploit and new ones to explore, but also on the structure of their associations.
Moreover, we have studied the emergence of higher-order Heaps' exponents from synthetic sequences generated through existing models of discovery, from the urn model with triggering <cit.> to the edge-reinforced random walk <cit.>.
On the one hand, these models are able to reproduce different behaviors in terms of 1st-order Heaps' exponents. On the other hand, however, we find that they are not able to reproduce higher-order ones.
This analysis manifests the need for a new generation of exploit-vs-explore models based on the co-evolution of the network structure with the dynamical process of exploration.
We have hence proposed a new modelling framework, the Edge-Reinforced Random Walk with Triggering, which takes into account not only the exploration rate of new items, but also the predisposition to explore the same content in a more creative way. Based on the reinforcement of exploited links of a complex network and the triggering of new nodes and links whenever new parts of the adjacent possible space are explored, these mechanisms give a new intuition of how the space of possibilities grows over time, shedding light on how novel elements and combinations emerge along the process.
We acknowledge there are multiple venues of improvement of the modelling scheme we have proposed.
For example, future work should investigate the interplay between initial knowledge, either of the individual or of the society, and the pace of discovery at various orders during the exploration process.
In our model, we have supposed that links start with unitary weight, but this can be an unrealistic assumption in certain contexts.
Moreover, we have assumed to trigger new links uniformly at random. It would be interesting to study cases in which the space has some preferential pathways, for example represented by an underlying structure that can be discovered. This could be implemented in our model by limiting the addition of new links to only those permitted by the underlying network, or adding more complex ways to trigger edges, e.g., using preferential attachment <cit.>.
Finally, in this manuscript we have not considered the presence of semantic correlations in the temporal sequence of visited items, which can be a consequence of the interplay between the network topology and a predisposition to move to items semantically close to the recent ones, reinforcing a clustered structure. It would indeed be interesting to use higher-order Heaps' exponents and the ERRWT to study phenomena related to waves of novelties <cit.> and popularity <cit.>. Moreover, the ERRWT could be extended to a multi-agent model to study how different agents would cooperate and diffuse knowledge <cit.>, also taking into account the presence of a limited attention capacity and memory that could influence the rise and fall of popular items <cit.>.
We believe that our model can be directly used to answer these questions and, more in general, to better understand the fundamental mechanism behind innovation and creativity.
§ MATERIALS AND METHODS
§.§ Data
In this work we consider three different data sets on music listening records (Last.fm), books (Project Gutenberg), and scientific articles (Semantic Scholar).
Last.fm is a digital platform for music born in 2002, famous for logging all listening activities of its users, providing both personal recommendations and a space to interact with other users interested in music <cit.>. In this manuscript, we use a data set presented in Ref. <cit.> and available at Ref. <cit.>. It contains all listening records of about 1000 users. In order to have sequences long enough for statistically relevant fits, only users with more than 1000 logs have been retained. The final data set contains 890 users having a median number of listened records of 13 985.
Each record contains the timestamp at which a user listened to a given song.
In the database, each song is associated to a title, the artist's name and a unique MusicBrainz Identifier (MBID), which can be used to obtain additional metadata <cit.>.
Using this information, we are able to create, for each user, a temporally ordered sequence of songs together with the associated sequence of artists.
Project Gutenberg is an open access text corpus containing more than 50 000 books of a diverse nature. Here, we make use of the Standardized Project Gutenberg Corpus <cit.>, which allows one to download and process an updated version of the corpus. Using Google's Compact Language Detector 3 (available as a Python package), we filter out all non-English texts. We then discard all texts with fewer than 1000 words, retaining a total of 19 637 books with a median number of 50 726 words.
A sequence of events for each book is hence created with the lemmatized words, disregarding punctuation and putting all characters in lower case.
We also extract stems from each word using the English Snowball stemmer <cit.>—a more accurate extension of the Porter stemmer <cit.>—, which is not as aggressive as the Lancaster stemmer <cit.>.
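As an illustration of this preprocessing step, a minimal Python sketch, here using the nltk implementations of a WordNet-based lemmatizer and of the English Snowball stemmer for illustration (the specific libraries used for the analysis may differ), could read:

import re
from nltk.stem import WordNetLemmatizer, SnowballStemmer   # requires the nltk 'wordnet' data

lemmatizer = WordNetLemmatizer()
stemmer = SnowballStemmer("english")

def preprocess(text):
    # lower-case, drop punctuation, and return the lemmatized and stemmed word sequences
    words = re.findall(r"[a-z]+", text.lower())
    return [lemmatizer.lemmatize(w) for w in words], [stemmer.stem(w) for w in words]

lemmas, stems = preprocess("One ring to rule them all.")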
Semantic Scholar is a recent project with the scope of facilitating scientific analysis of academic publications. It provides monthly snapshots of research papers published in all fields, publicly accessible through the Semantic Scholar Academic Graph (S2AG, pronounced “stag") <cit.>.
This database (1st Jan. 2022 snapshot) contains about 203.6M papers, 76.4M authors, and 2B citations.
It also classifies each paper into one or more fields of study <cit.>, for a total of 19 different fields.
For simplicity, we associate each paper to its first (and most relevant) field of study.
To create the sequences to analyze, for each field we consider the first 1000 journals in terms of number of English papers. Then, for each journal, we order the published papers based on the respective year of publication, volume, issue, and first page. When some of this information is not available, the Semantic Scholar unique ID of the paper is also used in the ordering process. Thus, for each paper, we extract and lemmatize their title, similarly to what done for the Project Gutenberg.
Finally, a sequence of events is created for each selected journal, concatenating the lemmatized words in the titles of each paper in their temporal order, for a total of 19 000 sequences with median length of 9 114.5.
Associated to this sequence, we also consider the sequence of stemmed words for further analysis.
§.§ Power-law fit
Fundamental for the estimation of the higher-order Heaps' exponent of a sequence is the power-law fitting procedure for the number of novel n-tuples D_n(t) as a function of the sequence length t, with n ≥ 1.
The sequences analyzed in this work come from very different contexts, from empirical data sets to model simulations. We thus need to take into consideration all those cases that show a transient regime—whose length might also depend on the system structure <cit.>—in which the pace of discovery can fluctuate before reaching its stationary value.
We fit each sequence according to the following procedure. To reduce computational times, we first logarithmically sample 1000 points from each sequence in the range [1, T].
Considering their integer part and discarding all duplicates, we obtain a set of k integer times {t_i}_i=1,…,k between 1 and T.
If T ≥ 1000, that is the case of all sequences used in this manuscript, then this process results in k ≥ 424 points.
Taking into account that the associated sequence of n-tuples has length T-n+1, we thus consider the points {(t_i-n+1, D_n(t_i-n+1))}_i=1,…,k in logarithmic scale, i.e.,
(x_i, y_i) = (log_10(t_i-n+1), log_10(D_n(t_i-n+1))) ,
with i=1,…,k.
In order to neglect the initial transient regime, but still have enough points for a sufficiently significant fit, we select only the last 100 of such points.
We hence look for the best fit of {(x_i, y_i)}_i=k-100+1,…,k by optimizing the linear function y = a + b x, with a > 0, using the tool of the Python package <cit.>.
If a and b are the best parameters, then the power-law fit of the Heaps' law is D_n(t) ≈ 10^a t^b, that is, the nth-order Heaps' exponent is approximated by the slope b of the fit.
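For illustration, the procedure above can be sketched in a few lines of Python (here a plain least-squares line in log-log space stands in for the specific fitting package used in the analysis):

import numpy as np

def heaps_exponent(d_n, n=1, n_samples=1000, n_fit=100):
    # d_n[j] is the number of distinct n-tuples among the first j+1 tuples,
    # so len(d_n) = T - n + 1 for an original sequence of length T.
    T = len(d_n) + n - 1
    t = np.unique(np.logspace(0, np.log10(T), n_samples).astype(int))   # log-sampled times
    t = t[(t >= n) & (t <= T)]                  # keep times with a defined n-tuple count
    x = np.log10(t - n + 1)                     # log number of tuples considered so far
    y = np.log10(np.asarray(d_n)[t - n])        # log number of distinct tuples
    slope, intercept = np.polyfit(x[-n_fit:], y[-n_fit:], 1)   # fit only the last points
    return slope, 10.0 ** intercept             # Heaps' exponent beta_n and prefactor a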
§.§ Urn Model with (Semantic) Triggering
The Urn Model with Triggering (UMT) is a generative model of a discovery process, producing a sequence of extractions of balls of various colors, representing different events, from an urn.
First introduced in Ref. <cit.>, it successfully reproduces the main features of empirical discovery and innovation processes <cit.>.
The UMT can be thought as an extension of Pólya Urn processes <cit.> that includes the concept of adjacent possible <cit.> in the way a novelty can trigger further ones <cit.>.
Differently from the classic urn of Pólya in which only balls of existing colors can be added to the urn, the UMT features a growing number of colors, that is, the set of possible events expands together with the exploration process. It is hence the process itself that shapes the content of the urn by reinforcing elements already discovered and adding new possibilities.
Supposing that the urn initially contains N_0 balls of different colors, the UMT works as follows. At each discrete time-step t, a ball is randomly drawn from the urn with uniform probability, and its color is recorded in a temporally-ordered sequence of events 𝒮 at position t. The extracted ball is then put back into the urn together with ρ additional copies of the same color, in a rich-get-richer manner <cit.>. This mechanism ensures that frequently adopted items, visited places, or exploited concepts will be more and more likely to be adopted, visited, or exploited in the future.
Furthermore, if the color of the extracted ball has never appeared before in 𝒮, this event is considered to be a novelty. As a consequence it triggers new possibilities, represented by the addition of ν+1 balls—each of a new different color—into the urn. This triggering mechanism thus ensures the expansion of the space of possibilities.
In a different version of the model, the Urn Model with Semantic Triggering (USMT), the sequences produced contain semantic correlations between consecutive extractions, as seen in the data <cit.>.
The UMST works similarly to the UMT, but with the introduction of semantic groups. In particular, at each triggering event, supposing that the triggering color belongs to the group A, the new ν+1 colors are assigned to a common new group B, semantically related to the triggering color.
Therefore, a color i of label A is semantically related to all other colors of label A (siblings), the color that triggered the addition of A in the urn (parent), as well as all colors of label B that have been triggered by i (children).
Taking this into consideration, at each extraction, the probability to extract each color changes depending on a fixed parameter η∈ [0, 1]. A ball has weight 1 if its color is semantically related to the one extracted on the previous time-step, otherwise it has weight η.
Notice that we can recover the original UMT by simply considering η = 1.
As shown in Ref. <cit.>, the effect of N_0 is negligible at large times.
For simplicity, we thus consider N_0 = 1 in our simulations of both UMT and UMST.
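A minimal Python sketch of the UMT dynamics described above (illustrative only; colors are labelled by integers and N_0 = 1 as in our simulations) is:

import random

def simulate_umt(rho, nu, T, n0=1, seed=None):
    # Simulate the Urn Model with Triggering; returns the sequence of drawn colors.
    rng = random.Random(seed)
    urn = list(range(n0))            # balls initially in the urn, one per color
    next_color = n0                  # label of the next brand-new color
    seen, sequence = set(), []
    for _ in range(T):
        color = rng.choice(urn)                   # uniform draw
        sequence.append(color)
        urn.extend([color] * rho)                 # reinforcement: rho extra copies
        if color not in seen:                     # novelty: expand the adjacent possible
            seen.add(color)
            urn.extend(range(next_color, next_color + nu + 1))
            next_color += nu + 1
    return sequence

The UMST can be obtained from the same scheme by additionally storing the semantic group of each color and weighting the draw by η for balls outside the group of the previously extracted color.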
§.§ Edge-Reinforced Random Walk
Given a weighted connected graph G=(𝒱, ℰ) with N=|𝒱| nodes and M=|ℰ| links, the Edge-Reinforced Random Walk (ERRW) is a dynamical process that reinforces the weights of the visited edges in ℰ, leading to Heaps' laws <cit.>.
The weights of the links in the network quantify the strength of the relationship among nodes, and are encoded in a time-varying adjacency matrix W^t ≡{w_ij^t}. This matrix has a non-zero entry w^t_ij when, at time t, there is a link connecting node i and node j. Let us assume that at time t=0 each link (i,j) ∈ℰ has weight w_ij^0 = 1, while all other weights are set to zero.
At each time step, a walker at node i walks to a neighboring node j with a probability that is proportional to the weight of the corresponding link, i.e., ℙ(i → j) = w_ij^t/∑_l w_il^t.
After moving to the chosen node j, a reinforcement ρ is added to the weight of the traversed edge (i,j), i.e., w_ij^t+1 = w_ij^t + ρ.
Given an underlying structure, the ERRW can generate sequences of visited nodes associated to a different pace of discovery by tuning the reinforcement parameter <cit.>. The interplay between structure and dynamics means that different structures might require different values of the reinforcement parameter to reach the same pace of discovery (Heaps' law).
For example, higher values of ρ must be chosen for a graph with a higher average degree. This is similar to what happens in the UMT, in which we need higher values of the reinforcement parameter ρ to obtain the same pace of discovery as we increase the triggering parameter ν.
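A possible Python sketch of one realization of the ERRW on a given weighted graph follows (the dictionary-of-dictionaries representation and the function name are our own choices, not part of the original implementation):

```python
import random

def errw(adj, start, T, rho, seed=None):
    """Edge-Reinforced Random Walk on an undirected weighted graph.

    adj: dict mapping each node to a dict {neighbor: weight};
    both directions of every edge must be present with equal weight."""
    rng = random.Random(seed)
    w = {i: dict(nbrs) for i, nbrs in adj.items()}  # working copy of the weights
    node = start
    trajectory = [node]
    for _ in range(T):
        nbrs = list(w[node])
        nxt = rng.choices(nbrs, weights=[w[node][j] for j in nbrs])[0]
        w[node][nxt] += rho   # reinforce the traversed edge ...
        w[nxt][node] += rho   # ... keeping the weight matrix symmetric
        node = nxt
        trajectory.append(node)
    return trajectory
```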
§.§ Edge-Reinforced Random Walk with Triggering
In this manuscript we propose a generative model of a discovery process based on the exploration of a growing network, i.e., the Edge-Reinforced Random Walk with Triggering (ERRWT), which can be considered as a UMT-inspired extension of the ERRW.
For this model, any initial connected network G^0 = (𝒱^0, ℰ^0) with N^0 = |𝒱^0| ≥ 1 nodes and M^0=|ℰ^0| links can be used. Let us suppose that the nodes of the graph are indexed, that is, 𝒱^0 = {1, 2, …, N_0}.
Similarly to the ERRW, we assume that all initial links (i,j) ∈ℰ^0 have weight w_ij^0 = 1.
The initial node to start the exploration process is randomly selected from 𝒱^0.
We let the graph evolve during the process, adding new nodes and links. Let G^t = (𝒱^t, ℰ^t) be the graph at time t.
The structure of the growing network is encoded in the time-varying weighted adjacency matrix W^t ≡{w_ij^t}, where w_ij^t represents the weight of the link (i, j) at time t. We assume here that G^t is an undirected graph, so the matrix W^t is symmetric, and any variation of w_ij^t affects w_ji^t too.
Supposing that at time t the ERRWT is positioned on node i of G^t, the model obeys the following rules.
* Choice of next node. The ERRWT randomly moves to a neighbouring node j of the current node i. The probability to move to node j depends on the weight of the outgoing links of i, i.e.,
ℙ(i → j) = w_ij^t/∑_l w_il^t.
* Edge reinforcement. The weight of the chosen edge (i,j) is reinforced by ρ, that is,
w_ij^t+1 = w_ij^t + ρ.
* Edge triggering. If the walker has never traversed the chosen edge (i,j) before this time, i.e., it is a new link, then ν_2+1 new possible links are added to the network. These links are connections of unitary weight between j and previously visited nodes l=l_1, …, l_ν_2+1 in 𝒱^t, for which the link (j, l) has never been traversed by the walker. If one of these edges already exists in the space of possibilities, its weight is reinforced by one more unit; otherwise, it is added to ℰ^t+1.
In other words, we have
w_jl^t+1 = w_jl^t + 1, l = l_1, …, l_ν_2+1 | l old, (j,l) new.
* Node triggering. If the walker never visited the chosen node j before this time, i.e., it is a new node, then ν_1 + 1 new nodes are added to the network; these are connected to node j with unitary weights.
Mathematically, we have
𝒱^t+1 = 𝒱^t ∪{l}_l=|𝒱^t| + 1, …, |𝒱^t| + ν_1+1
w_jl^t+1 = 1, l=|𝒱^t| + 1, …, |𝒱^t| + ν_1+1.
Notice that if the chosen node j is new, then also the traversed edge (i,j) is necessarily new as well. Therefore, in this case there is also a triggering of ν_2+1 edges from j to other previously visited nodes, as described before.
Finally, in this manuscript, we let G_0 be a small graph that emulates the triggering mechanism introduced, shown in Fig. <ref> in SI.
This is a regular tree with branching parameter ν_1 + 1 and 2 levels, where the leaves are considered new, while all other nodes have already triggered.
In other words, a root node has triggered ν_1 + 1 nodes connected to it, and again these nodes have also triggered each ν_1 + 1 other nodes.
Therefore, we initially suppose that the triggered nodes, which are ν_1 + 2 in number, are all known to the walker at the start of the simulation, and do not trigger again when later explored.
Moreover, we assume that all links are new to the walker and have unitary weight.
This initialization makes sure that in the initial stages of the simulation there are enough possible links between already known nodes.
As we show in Sec. <ref> in SI where we test different initial graphs, the initialization procedure only affects thermalization times, and becomes irrelevant asymptotically.
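A minimal Python sketch of a single ERRWT step is given below. Function names are ours, the update is the undirected (symmetric) one, and we assume that the ν_2+1 triggered links point to eligible old nodes chosen uniformly at random, since the selection rule is not constrained by the definition above; the caller is assumed to maintain the sets of visited nodes and traversed edges, initialized according to the starting graph described above.

```python
import random

def errwt_step(w, node, visited, traversed, next_node, rho, nu1, nu2, rng=random):
    """One step of the ERRWT on the weight matrix w (dict of dicts, symmetric)."""
    nbrs = list(w[node])
    j = rng.choices(nbrs, weights=[w[node][k] for k in nbrs])[0]
    edge = frozenset((node, j))
    new_edge, new_node = edge not in traversed, j not in visited
    # rule 2: edge reinforcement (undirected update)
    w[node][j] += rho
    w[j][node] += rho
    traversed.add(edge)
    # rule 3: edge triggering, nu2+1 links from j towards already visited nodes
    if new_edge:
        old = [l for l in visited if l != j and frozenset((j, l)) not in traversed]
        for l in rng.sample(old, min(nu2 + 1, len(old))):
            w[j][l] = w[j].get(l, 0) + 1
            w[l][j] = w[l].get(j, 0) + 1
    # rule 4: node triggering, nu1+1 brand-new nodes attached to j with unit weight
    if new_node:
        visited.add(j)
        for _ in range(nu1 + 1):
            w[j][next_node] = 1
            w[next_node] = {j: 1}
            next_node += 1
    return j, next_node
```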
§.§ Data and code availability
The data used in this manuscript is publicly available at Refs. <cit.>.
All the code used to download, process and analyse the data and the models can be found at Ref. <cit.>.
§.§ Acknowledgements
We wish to thank Francesca Tria for the insightful comments and feedback about the manuscript.
A.P. and V.L. acknowledge support from the PNRR GRInS Project.
I.I. acknowledges support from the James S. McDonnell Foundation 21^st Century Science Initiative Understanding Dynamic and Multi-scale Systems - Postdoctoral Fellowship Award.
All computations have been performed via the High Performance Computing (HPC) cluster provided by Queen Mary University of London <cit.>.
§.§ Author contributions
G.D.B, I.I., A.P., and V.L. designed the study. A.P. performed a preliminary investigation. G.D.B. carried out the data collection and performed the numerical simulations. G.D.B., A.B., and G.D.M. carried out the analytical calculations. G.D.B, A.B., I.I., and V.L. wrote the manuscript.
All authors contributed to analyze the data, discuss the results, define the proposed model, and revise the manuscript.
§.§ Competing interests
The authors declare no competing interests.
§.§ Additional information
Supplementary Information is attached to this manuscript.
Supplementary Information for
“The dynamics of higher-order novelties"
§ HIGHER-ORDER HEAPS' EXPONENTS IN THE DATA SETS
§ HEAPS' EXPONENTS IN SIMULATIONS OF THE URN MODEL WITH SEMANTIC TRIGGERING
The Urn Model with Triggering (UMT) features a triggering mechanism for the growth of the adjacent possible <cit.>. In particular, whenever a new color is drawn for the first time, ν+1 new colors are triggered and added into the urn.
Together with the reinforcement mechanism introduced in Polya's urn <cit.>, the UMT manages to reproduce various features of innovation processes, including the Heaps' law.
In particular, varying the parameters, the UMT produces different rates of discovery, which can be measured by the power-law exponent β_1 of the Heaps' law. According to analytical results on the asymptotic Heaps' exponent, we have that β_1 →ν/ρ. We check if this relation holds true at finite times in Fig. <ref>(a), where we show the scatter plots between ν/ρ and the fitted value of β_1 for simulations of the UMT with ρ = 20 and ν = 1, …,20, run for T=10^5 time steps. Each point refers to a different simulation, and we analyze 100 simulations for each set of parameters.
We notice how the relationship holds true in most cases, although the fitted values are less than the theoretical ones, especially for high values of ν/ρ.
We repeat this check also for higher-order Heaps' exponents in Fig. <ref>(b-c), finding that also in this case there is not so much difference between the theoretical value ν/ρ and the fitted β_2 and β_3, if only that the points in the plot are slightly higher than the bisector.
We repeat the same analysis for the Urn Model with Semantic Triggering (UMST), which introduces semantic triggering between the colors in the urn. In particular, two colors are considered semantically related if they have been triggered by the same color (siblings) or if one has triggered the other (parent and child).
Then, whenever a new color needs to be extracted, a ball of a certain color in the UMST has a different weight depending on the semantic relationship with the previous color. If the two colors are related, then the ball has weight 1, otherwise it gets weight η≤ 1.
Analytical results on the Heaps' law from the SI in Ref. <cit.> show that the asymptotic Heaps' exponent is found between ην / ρ and min(1, ν/ρ). We test this in Fig. <ref>(d), where we show the scatter plots between ην/ρ and the fitted value of β_1 for simulations of the UMST with ρ = 4 and ν = 1, …,20.
We see that for low values of the ην/ρ the value of β_1 corresponds to the theoretical lower bound.
However, starting from about ην/ρ = 0.2 there start to be simulations in which the value of β_1 goes abruptly up to 1. Notice that for these values, we have that ν/ρ = 2.
Interestingly, up to ην/ρ = 0.3 and sometimes up to ην/ρ = 0.4, there are both simulations with Heaps' exponent β_1 ≈ην/ρ and others with β_1 ≈ 1, but almost none in between. After that, there remain only simulations with linear Heaps' law.
We repeat the analysis for higher-order Heaps' exponents, finding the same behavior.
In Table <ref> we also report the number of simulations with either of the two behaviors.
Notice how the number of simulations with β_1 ≈ 1 increases with higher values of ν.
This analysis shows the inadequacy of the UMST to reproduce the whole spectrum of paces of discovery.
In fact, we are not able to obtain Heaps' exponents between 0.4 and 0.9 with η = 0.1.
Moreover, if we knew that the Heaps' exponent lies in between these two bounds, simulations actually only produce exponents very close to these two bounds.
The higher the theoretical value ην/ρ, the higher the chance of having a Heaps' exponent close to 1.
A possible explanation of why this could happen lies on the way semantic triggering happens.
In the UMST, indeed, when a color is drawn for the first time, ν+1 balls of new colors are added to the urn, and they become semantically connected to the triggering color. Then, the probability to draw a ball of a color semantically close to the previous one is 1/η = 10 times higher with respect to balls of other colors.
This brings about two possible scenarios.
On the one hand, if a small cluster of colors is highly reinforced in the beginning of the simulation, after one of them is drawn it is very likely that another of these colors is extracted in the next time step.
On the other hand, if a new color is drawn, since it is highly probable to move to a semantic close color and almost all of them are new, if ν is high enough the next extracted color is also almost surely new.
Then, once inside one of the two scenarios, it is very unlikely to break the loop, producing the two groups of Heaps' exponent we observe.
This also explains why the likelihood of being in the linear case increases with ν, even though the two behaviors can coexist in the same set of parameters.
Finally, this is also confirmed by simulations with higher number steps—we tested with 10^7 steps—, which show the same results, indicating that the behavior has already reached a stationary state.
§ ANALYTIC RESULTS FOR HIGHER-ORDER HEAPS' EXPONENTS IN UMT SIMULATIONS
In this section we provide a complete analytical analysis of the higher-order Heaps' laws for the Urn Model with Triggering (UMT).
Let us consider an urn with parameters ρ and ν and initially composed by N_0 ≥ 1 balls of different colors.
§.§ First-order Heaps' law
The evolution of the number D_1(t) of different colors that have appeared in the first t positions of the sequence 𝒮 is ruled by the following master equation:
D_1(t+1) = D_1(t) + ℙ(ℕ^(t+1))
= D_1(t) + (N_0 + ν D_1(t)) / (N_0 + ρ t + (ν+1) D_1(t)),
where ℕ^(t+1) is the event of drawing at time (t+1)
a ball of a color that has not been observed before.
Its probability ℙ(ℕ^(t+1)) can be expressed as the number of colors in the urn yet to be discovered, N_0 + (ν + 1) D_1(t) - D_1(t), divided by the total number of balls available at time t in the urn.
In the long time limit,
Eq. (<ref>)
can be approximated by a differential equation, which leads to an analytical expression for D_1(t) (see Refs. <cit.> for the analytical calculations):
dD_1(t)/dt = (N_0 + ν D_1(t)) / (N_0 + ρ t + (ν+1) D_1(t)) ,   D_1(0) = 0 ,
D_1(t) t→∞≈ b t^β_1 if ν < ρ ,   b t/log t if ν = ρ ,   b t if ν > ρ ,
where β_1 = ν / ρ and b is a constant depending on ν and ρ.
In other words, in the sublinear case ν < ρ, the Heaps' law is analytically verified, with asymptotic exponent β_1 = ν/ρ <cit.>.
§.§ Second-order Heaps' law
In order to write down an equation similar to Eq. (<ref>) for the number D_2(t) of different pairs that have appeared in the sequence 𝒮_2 of length t, i.e.,
dD_2(t)/dt = ℙ(“The t-th pair is new”) ,   D_2(0) = 0,
we need to calculate the probability to observe a new pair.
However, differently from Eq. (<ref>), such a probability depends not only on the total number of balls and on the number of extracted colors, but also on the number of balls of each extracted color.
Notice that the t-th pair (x_1,x_2)
of 𝒮_2 is composed by the color x_1 drawn at time t in 𝒮 and the color x_2 drawn in the next time step.
Hence, there are three separate events
in which the t-th pair (x_1,x_2) is a novelty in 𝒮_2:
the event 𝔸 in which x_1 is a novelty, i.e. it appears for the first time in the sequence 𝒮 at time t;
the event 𝔹 in which x_1 is not a novelty but x_2 is a novelty;
the event ℂ in which both colors x_1 and x_2 are not novel, but the combination (x_1,x_2) appears for the first time.
Consequently, the probability that the t-th pair is new is equal to the sum of the probabilities of such events. Using Eq. (<ref>),
for large values of t
the probability of event 𝔸 can be written as
ℙ(𝔸) = ℙ(ℕ^(t)) = dD_1(t)/dt t→∞≈ b β_1 t^β_1-1.
Similarly, denoting with ℕ̄^(t) the complementary event of ℕ^(t), the probability of event 𝔹 reads
ℙ(𝔹) ≈ℙ(ℕ̄^(t)) ℙ(ℕ^(t+1))
≈ (1 - dD_1(t)/dt) · dD_1(t+1)/dt
≈ b β_1 t^β_1-1 = ℙ(𝔸),
where we have disregarded infinitesimals of lower order.
Thirdly, we can compute the probability of the event ℂ by calculating the probability that each possible pair of old colors is a novelty in this time step.
Since the number of old colors up to time t is D_1(t), indicating with ℂ_i,j^(t) the event in which i and j are two already extracted colors and their pair (i,j) is a novelty at time t in 𝒮_2, we can write:
ℙ(ℂ) ≈ℙ(ℕ^(t))
ℙ(ℕ^(t+1))
ℙ( ⋃_i,j = 1^D_1(t)ℂ_i,j^(t))
≈( 1- bβ_1 t^β_1-1)^2 ℙ(⋃_i,j = 1^bt^β_1ℂ_i,j^(t))
≈∑_i,j = 1^bt^β_1ℙ(ℂ_i,j^(t)).
The last equality in Eq. (<ref>) holds true because for any (i_1,j_1)≠(i_2,j_2) we have ℂ_i_1,j_1^(t)∩ℂ_i_2,j_2^(t) = Ø, since only one pair can be extracted at each time step, and we have disregarded lower infinitesimals.
Let us now concentrate on computing the probability of ℂ_i,j(t).
Defining the event 𝔼_ij^(τ) = “pair (i,j) appears (not necessarily for the first time) in the sequence at time τ", we can rewrite ℂ_i,j^(t) as
ℂ_i,j^(t) = 𝔼̄_ij^(1)∩𝔼̄_ij^(2)∩⋯∩𝔼̄_ij^(t-1)∩𝔼_ij^(t),
where we denote with 𝔼̄_ij^(τ) the complementary event of 𝔼_ij^(τ).
We can hence compute its probability as
ℙ(ℂ_i,j^(t))
=
ℙ(
𝔼_ij^(1)∩𝔼_ij^(2)∩⋯∩𝔼_ij^(t-1)∩𝔼_ij^(t))
= ℙ(
𝔼_ij^(1))
ℙ(𝔼_ij^(2) | 𝔼_ij^(1))
⋯ℙ(𝔼_ij^(t-1) | 𝔼_ij^(1)∩⋯∩𝔼_ij^(t-2))
ℙ(𝔼_ij^(t) | 𝔼_ij^(1)∩⋯∩𝔼_ij^(t-1)).
First, we notice that we can simplify the expressions in Eq. (<ref>), since
ℙ(𝔼_ij^(τ) | 𝔼_ij^(1)∩⋯∩𝔼_ij^(τ-1))
=
ℙ(𝔼_ij^(τ) | 𝔼_ij^(τ-1)).
This equality in Eq. (<ref>) holds true because, the probability of extracting the pair (i,j) at time τ can only be influenced by what has happened at time (τ-1), disregarding all previous times.
Without loss of generality, let us index the colors in the urn in the same order they first appeared in the sequence, i.e., let us suppose that the i-th color has appeared at time t_i, with t_i+1 > t_i, for i = 1, 2,…, D_1(t).
Let us also suppose that the rate at which a new color appears is given exactly by the approximated solution given by Eq. (<ref>). Then, it would be
i = D_1(t_i) ≈ b t_i^β_1   ⟹   t_i ≈ (i/b)^1/β_1.
With Eq. (<ref>) we are assuming that the behaviour of D_1(t) at finite times can be approximated with the asymptotic one, and that colors appear deterministically at these expected moments. Even though strong, this assumption makes sense if we consider that, as it has been observed before, there is a good correspondence between this analytical solution and simulations at finite times.
Moreover, we will confirm a posteriori the suitability of this assumption since, as we will see, there is correspondence between the analytical solution of D_2(t) we obtain here and and the results of model simulations.
Let us now define n_i(t) as the number of times the color i has appeared before time t, supposing it has first appeared at time t_i ≤ t.
If 𝔼_i^(t) = “i appears at time t" (not necessarily for the first time), then we have that
dn_idt = ℙ(𝔼_i^(t)). Thus, we can write:
dn_i(t)dt =
ρ n_i(t) + 1N_0 + a D(t) +ρ tt→∞≈n_it,
n_i(t_i) = 1,
n_i(t) t→∞≈tt_i if t≥ t_i ,
n_i(t) = 0 if t < t_i ,
dn_i(t)dtt→∞≈1t_i = (bi)^1/β_1 if t≥ t_i ,
dn_i(t)dt = 0 if t < t_i .
Let us observe that under these assumptions dn_i/dt is actually constant in time, depending just on t_i.
Then, supposing that the number of balls n_i(τ), n_j(τ) of the two colors in the urn follows exactly Eq. (<ref>), we can calculate the probability of 𝔼_ij^(τ) as
ℙ(𝔼_ij^(τ)) = ℙ(𝔼_i^(τ)) ℙ(𝔼_j^(τ+1)) = dn_i(τ)/dτ · dn_j(τ+1)/dτ t→∞≈ 1/(t_i t_j) if τ ≥ max(t_i, t_j-1) ,   and 0 if τ < max(t_i, t_j-1).
Furthermore, if τ≥max(t_i, t_j-1), we can write
ℙ(𝔼_ij^(τ)∩𝔼_ij^(τ-1))
=
ℙ([( 𝔼_ij^(τ+1)∩𝔼_ij^(τ)) ∩𝔼_j^(τ)] ∪[(𝔼_ij^(τ+1)∩𝔼_ij^(τ)) ∩𝔼_j^(τ)])
=
ℙ([𝔼_i^(τ)∩𝔼_j^(τ+1)∩𝔼_i^(τ-1)∩𝔼_j^(τ)] ∪[𝔼_i^(τ)∩𝔼_j^(τ+1)∩𝔼_j^(τ)])
=
ℙ(𝔼_i^(τ)∩𝔼_j^(τ)) ℙ( 𝔼_j^(τ+1)) ℙ( 𝔼_i^(τ-1)) +
ℙ(𝔼_i^(τ)∩𝔼_j^(τ)) ℙ(𝔼_j^(τ+1))
=
δ(i,j) (1/(t_i t_j))(1 - 1/t_i) + (1-δ(i,j)) (1/(t_i t_j)).
Therefore, we get
ℙ(𝔼_ij^(τ) | 𝔼̄_ij^(τ-1)) = ℙ(𝔼_ij^(τ)∩𝔼̄_ij^(τ-1)) / ℙ(𝔼̄_ij^(τ-1))
= [ δ(i,j) (1/(t_i t_j))(1 - 1/t_i) + (1-δ(i,j)) (1/(t_i t_j)) ] / (1 - 1/(t_i t_j))
= [ δ(i,j) (1 - 1/t_i) + (1-δ(i,j)) ] / (t_i t_j - 1)
= δ(i,j) (1 - 1/t_i)/(t_i t_j - 1) + (1-δ(i,j)) · 1/(t_i t_j - 1)
= δ(i,j) · 1/(t_i^2 + t_i) + (1-δ(i,j)) · 1/(t_i t_j - 1) .
In the following of this discussion, we make the following approximation:
δ(i,j) · 1/(t_i^2 + t_i) + (1-δ(i,j)) · 1/(t_i t_j - 1) ≈ 1/(t_i t_j).
Because of Eq. (<ref>), the approximation in Eq. (<ref>) implies in Eq. (<ref>) that
ℙ(𝔼_ij^(τ) | 𝔼̄_ij^(τ-1)) ≈ 1/(t_i t_j) if τ ≥ max(t_i, t_j-1) ,   and 0 if τ < max(t_i, t_j-1) ,
that is, ℙ(𝔼_ij^(τ) | 𝔼̄_ij^(τ-1)) ≈ ℙ(𝔼_ij^(τ)),
which is equivalent to assume that 𝔼_ij^(τ) and 𝔼_ij^(τ-1) are statistically independent, i.e. that the extraction of a certain pair (i,j) at time τ is independent of its extraction at the previous time (τ-1).
Therefore,
using Eq. (<ref>), Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>),
the probability of the event ℂ_ij^(t) that the pair (i,j) is extracted at time t for the first time can be approximated to
ℙ(ℂ_ij^(t)) = ℙ(𝔼̄_ij^(1)) ℙ(𝔼̄_ij^(2)) ⋯ ℙ(𝔼̄_ij^(t-1)) ℙ(𝔼_ij^(t))
= ∏_τ = 1^t-1 ℙ(𝔼̄_ij^(τ)) · ℙ(𝔼_ij^(t))
= ∏_τ = max(t_i,t_j-1)^t-1 (1-ℙ(𝔼_ij^(τ))) · ℙ(𝔼_ij^(t))
= (1 - 1/(t_i t_j))^(t-max(t_i, t_j-1)) · 1/(t_i t_j),
which can be used in Eq. (<ref>) to obtain an approximated expression for the probability of event ℂ, i.e.,
ℙ(ℂ) ≈ ∑_i,j = 1^bt^β_1 ℙ(ℂ_ij^(t)) ≈ ∑_i,j = 1^bt^β_1 (1 - 1/(t_i t_j))^(t-max(t_i, t_j-1)) · 1/(t_i t_j).
Summing up, by inserting Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>) into Eq. (<ref>), we get the following differential equation for the number of new pairs in time in the UMT:
dD_2/dt t→∞≈ 2 b β_1 t^β_1-1 + ∑_i,j = 1^bt^β_1 (1 - 1/(t_i t_j))^(t-max(t_i, t_j-1)) · 1/(t_i t_j) ,
where we denote the last sum by 𝒞(t).
In order to have an estimate of 𝒞(t), let us approximate the sum with the related integral:
𝒞(t) t→∞≈ ∫_1^bt^β_1 ∫_1^bt^β_1 (1 - 1/(t_x t_y))^(t-max(t_x, t_y-1)) (1/(t_x t_y)) dx dy.
This way, using the change of variables u = t_x = (x/b)^1/β_1, v = t_y = (y/b)^1/β_1, we get
𝒞(t) t→∞≈ (bβ_1)^2 ∫_1/(ρ-ν)^t ∫_1/(ρ-ν)^t (1 - 1/(u v))^(t-max(u, v-1)) du dv / (u v)^(2-β_1),
where we have substituted the lower limit (1/b)^1/β_1 = 1/(ρ - ν), since b = (ρ-ν)^β_1 when ν < ρ, as is the case here <cit.>.
Moreover, considering that u and v represent time variables, with τ∈ (1,t), since between t=0 and t=1 there are no colors extracted yet, we can change the lower integral border to 1,
i.e.,
𝒞(t) t→∞≈ (bβ_1)^2 ∫_1^t ∫_1^t (1 - 1/(u v))^(t-max(u, v)) du dv / (u v)^(2-β_1),
where we have also simplified the exponent in the integrand, so that we can more easily calculate it as
𝒞(t) t→∞≈ 2 (bβ_1)^2 ∫_1^t ∫_1^u (1 - 1/(u v))^(t-u) du dv / (u v)^(2-β_1).
We numerically solve the integral in Eq. (<ref>) at specified points t_i using a numerical-integration command of Mathematica <cit.>. The points t_i have been chosen on a fine logarithmically spaced grid of N=1601 points 1 = t_0 < t_1 < ⋯ < t_N = 10^16.
By plugging the numerical approximation 𝒞(t_i) into Eq. (<ref>), we also obtain a numerical approximation of dD_2/dt at these points.
We then obtain an analytical approximation of dD_2/dt by fitting a function of the type a t^(b+c/(d+log_2(t))) with a curve-fitting routine in Python, where the minimization of the error is performed in logarithmic scale.
Finally, integrating Eq. (<ref>) over t, we obtain a solution for D_2(t).
Again, we are not able to solve this integral analytically, so we solve it numerically, using the analytical fit of dD_2/dt and a numerical integration routine in Python.
We find that the numerical integration for D_2(t) can also be fitted by a function of the type a t^(β_1+c/(d+log_2(t))).
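The numerical evaluation of 𝒞(t) can be reproduced, for instance, with SciPy. The sketch below assumes the sublinear regime ν < ρ, so that b = (ρ − ν)^β_1; function names are our own:

```python
import numpy as np
from scipy.integrate import dblquad

def C_of_t(t, rho, nu):
    """Numerical estimate of C(t) for the UMT in the sublinear regime (nu < rho)."""
    beta1 = nu / rho
    b = (rho - nu) ** beta1
    integrand = lambda v, u: (1.0 - 1.0 / (u * v)) ** (t - u) / (u * v) ** (2.0 - beta1)
    # integrate u over [1, t] and, for each u, v over [1, u]
    val, _ = dblquad(integrand, 1.0, t, lambda u: 1.0, lambda u: u)
    return 2.0 * (b * beta1) ** 2 * val

def dD2_dt(t, rho, nu):
    """Right-hand side of the differential equation for D_2(t)."""
    beta1 = nu / rho
    b = (rho - nu) ** beta1
    return 2.0 * b * beta1 * t ** (beta1 - 1.0) + C_of_t(t, rho, nu)
```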
To sum up, we have derived a solution of Eq. (<ref>) of the type
D_2(t) ≈ a t^β_2, with β_2 = β_1 + c/(d+log(t)) ,
where a, c, and d depend on the parameters ρ and ν.
Fig. <ref> shows that the analytical expression of β_2 we have found is in good agreement with the numerical simulations.
From left to right, we consider parameters ρ = 4 and ν = 1, 2, 3, and we run simulations until T = 10^7.
In each plot, continuous lines represent the 2nd-order Heaps' exponents of the power-law fits as a function of time t.
The continuous blue line is obtained by fitting the best parameters a, c and d that minimize the error between the points D_2(t) of the simulations with a function of the type a t^β_1 + c/d+log(t).
The continuous orange line instead represents the result of our analytical approach in Eq. (<ref>).
The expected value of β_1 = ν / ρ is represented as a horizontal dashed gray line.
Our results further confirm that 2nd-order Heaps' exponents differ from the 1st-order ones at finite times.
However, they also highlight that in the UMT the difference between β_2 and β_1 slowly decays in time.
§.§ Higher-order Heaps' law
Finally, we point out that an analytical solution for higher-order Heaps' exponents can also be obtained by induction, with assumptions similar to those used for the 2nd-order one.
For example, for the 3rd-order, we can repeat the same process as in Sec. <ref> to compute the probabilities of obtaining a new triplet.
In particular, supposing that
dD_1(t)/dt ≈ a_1 t^β_1-1 ,   dD_2(t)/dt ≈ a_2 t^(β_1 - 1 + c_2/(d_2+log_2(t))) ,
we can obtain a new triplet in the three following distinct cases.
(𝔸): when at time (t-1) a new pair is drawn, which happens with probability
ℙ(𝔸) = dD_2(t-1)/dt ≈ a_2 t^(β_1 - 1 + c_2/(d_2+log_2(t))).
(𝔹): when at time (t-1) an old pair is drawn, and at time t a new color is drawn, which happens with probability
ℙ(𝔹) = (1 - dD_2(t-1)/dt) · dD_1(t)/dt ≈ (1 - a_2 t^(β_1 - 1 + c_2/(d_2+log_2(t)))) a_1 t^β_1-1 ≈ a_1 t^β_1-1 .
(ℂ): when at both times (t-1) and t an old pair and an old color are extracted, but the corresponding triplet has never appeared in the sequence before.
Following the same steps of the 2nd-order case, we get the probability
ℙ(ℂ) ≈ ∑_i,j,k=1^bt^β_1 (1 - 1/(t_i t_j t_k))^(t-max(t_i, t_j, t_k)) · 1/(t_i t_j t_k)
≈ (a_1)^3 ∫_1^t ∫_1^t ∫_1^t (1 - 1/(u v w))^(t-max(u, v-1, w-2)) du dv dw / (u v w)^(2-β_1)
≈ 3! (a_1)^3 ∫_1^t ∫_1^u ∫_1^v (1 - 1/(u v w))^(t-u) du dv dw / (u v w)^(2-β_1),
where 3! = 3·2·1.
Then, summing up Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>), the probability to have a new triplet can be approximated as
dD_3(t)/dt ≈ a_2 t^(β_1 - 1 + c_2/(d_2+log_2(t))) + a_1 t^β_1-1 + 3! (a_1)^3 ∫_1^t ∫_1^u ∫_1^v (1 - 1/(u v w))^(t-u) du dv dw / (u v w)^(2-β_1).
In general, for the nth-order Heaps' law,
let us suppose by induction that all lower orders are known, i.e., for all orders k = 1,…,n-1, with n≥2, we have
dD_k(t)/dt ≈ a_k t^(β_1 - 1 + c_k/(d_k+log_2(t))) ,
D_k(t) = ã_k t^(β_1 + c_k/(d_k+log_2(t))) ,
with a,c,d > 0. Then, following the same procedure used for the 3rd-order Heaps' law, the probability of extracting a new n-tuple is given by:
dD_n(t)/dt ≈ dD_n-1(t)/dt + dD_1(t)/dt + n! (a_1)^n ∫_1^t ∫_1^u_1 ⋯ ∫_1^u_n-1 (1 - 1/(u_1 ⋯ u_n))^(t-u_1) du_1⋯du_n / (u_1 ⋯ u_n)^(2-β_1),
which approximately gives
dD_n(t)/dt ≈ a_n t^(β_1 - 1 + c_n/(d_n+log_2(t))) ,
D_n(t) = ã_n t^(β_1 + c_n/(d_n+log_2(t))) .
§ ANALYTICAL DETAILS OF ERRWT MODEL
In this section we provide an analytical insight of the model proposed in this manuscript, the Edge-Reinforced Random Walk with Triggering, or ERRWT.
In particular, we refer to the definition of ERRWT given in Materials and Methods in the main manuscript, and try to build differential equations for the evolution of D_1(t) and D_2(t).
From now on, we omit the explicit time dependence of the variables, e.g., D_1 ≡ D_1(t), so that the mathematical expressions are easier to read.
In the following analysis we make an important simplification, that is, we do not consider an undirected update as defined the main text. Undirected update means that at any time a new link (i,j) is reinforced or triggered, the link (j,i) is updated as well; here we consider the directed version of the model (only the visited link (i,j) is updated).
We start from considering variables referring to single nodes: D_1i^in, D_1i^out,D_2i^in,D_2i^out represent respectively the number of times a new node is discovered arriving (in) in node i, and leaving (out) from node i, and the same for the number of times a new link is discovered arriving or leaving from node i. Notice that D_1i^in becomes 1 as soon as node i is visited for the first time.
These micro variables can be aggregated to obtain the macro variables D_1 and D_2, considering either in or out variables and summing over all the nodes:
D_1 = ∑_i D_1i^in = ∑_i D_1i^out D_2 = ∑_i D_2i^in = ∑_i D_2i^out
Let us now build differential equations for the evolution of the micro variables, which will be aggregated to obtain self-consistent equations for D_1 and D_2.
Let us consider the probability of exploring a new node starting from node i, i.e., the probability that the variable D_1i^out increases by 1.
On the one hand, the total weight of the links outgoing from node i is equal to
M_0i+ρ n_i + (ν_1+1) D_1i^in + (ν_2 + 1) D_2i^in,
where M_0i is the initial number of links connected with node i at time t=0, and n_i ≡ n_i(t) is the number of times node i has been visited up to time t. The other two terms refer to the new links triggered when arriving in node i. Indeed, when i is visited for the first time, (ν_1+1) links outgoing from i to new nodes are triggered. Moreover, whenever a link ending in i is traversed for the first time, (ν_2 + 1) new links from i to other explored nodes are triggered.
On the other hand the total weight of links connecting i and never explored nodes is equal to
M_0i + (ν_1+1) D_1i^in - D_1i^out,
i.e., the initial number of nodes connected to i yet to be discovered, plus the number of nodes triggered when discovering node i, minus the number of nodes already discovered starting from i.
These considerations make possible to write that
dD_1i^out/dt = p(i,t) · (M_0i + (ν_1+1) D_1i^in - D_1i^out) / (M_0i + ρ n_i + (ν_1+1) D_1i^in + (ν_2 + 1) D_2i^in),
where p(i,t) is the probability of being on node i at time t, which is a needed condition for D_1i^out to evolve.
Using the same argument, we can also write an equation for the evolution of D_2i^out:
dD_2i^out/dt = p(i,t) · (M_0i + (ν_1+1) D_1i^in + (ν_2 + 1) D_2i^in - D_2i^out) / (M_0i + ρ n_i + (ν_1+1) D_1i^in + (ν_2 + 1) D_2i^in)
Notice that Eq. (<ref>) and Eq. (<ref>) cannot be obtained so easily in the undirected case. In fact, here we have implicitly assumed that any link in the adjacent possible that has never been visited before has weight 1. However, if the update is undirected, we may reinforce some link (j,i) never traversed before only because the walker might have visited the edge (i,j), making impossible to know the actual weight of never traversed links.
At this point we make another assumption in order to make the equations solvable. In particular, we assume a precise expression for the variable n_i.
In fact, as we have seen in Sec. <ref>, in the UMT, at least in the sublinear regime, we have n_i(t) ∼ t/t_i, where t_i is the first time item (node) i has been visited <cit.>. Exploiting the analogy between the UMT and the ERRWT model we assume that n_i(t) has the same behaviour.
We also checked numerically the validity of this assumption.
We have indeed measured the evolution of n_i(t) in simulations of the ERRWT, showing that the assumption is reasonable for very different values of the parameters ν_1 and ν_2, as shown in Fig. <ref>.
Further notice that p(i,t) = dn_i(t-1)/dt ≈ 1/t_i, since the probability of being on i at time t is equal to the probability to move to node i in the previous time step. With all these elements we can rewrite Eq. (<ref>) and Eq. (<ref>) as
dD_1i^out/dt ≈ (1/t_i) · (M_0i + (ν_1+1) D_1i^in - D_1i^out) / (M_0i + ρ t/t_i + (ν_1+1) D_1i^in + (ν_2 + 1) D_2i^in)
and
dD_2i^out/dt ≈ (1/t_i) · (M_0i + (ν_1+1) D_1i^in + (ν_2 + 1) D_2i^in - D_2i^out) / (M_0i + ρ t/t_i + (ν_1+1) D_1i^in + (ν_2 + 1) D_2i^in).
The last step before aggregating the equations is to further simplify the denominator. First notice that D_1i^in is a variable which can only take values 0 or 1, since an arriving node can result to be new only one time (this is not true for D_1i^out, which can be larger than 1).
So we can neglect it with respect to the term with D_2i^in, because also this can be larger than 1 and can go to infinity with time with a pace dependent on the parameters as we will see later. Finally, we assume D_2i^in≈ D_2 / t_i; this is a reasonable assumption given the fact that n_i(t) ≈ t/t_i. Indeed, if a node i is visited with a frequency depending on the inverse of t_i, it is reasonable to assume that also the number of new links traversed arriving in node i occurs with the same frequency as well.
We can finally aggregate the equations summing over all nodes i obtaining
a self consistent equation for the evolution of D_1:
dD_1/dt = ∑_i=1^D_1 dD_1i^out/dt ≈ ∑_i (1/t_i) (M_0i + (ν_1+1) D_1i^in - D_1i^out) / (ρ t/t_i + (ν_2 + 1) D_2/t_i) ≈ ∑_i (M_0i + (ν_1+1) D_1i^in - D_1i^out) / (ρ t + (ν_2 + 1) D_2)
= (M_0 + (ν_1+1) D_1 - D_1) / (ρ t + (ν_2 + 1) D_2) ≈ ν_1 D_1 / (ρ t + (ν_2 + 1) D_2),
where in the last approximation we have disregarded M_0 in the numerator since D_1(t)→∞ is the leading term in the numerator. Similarly for the 2nd-order Heaps' law we can write
dD_2/dt = ∑_i=1^D_1 dD_2i^out/dt ≈ ∑_i=1^D_1 (1/t_i) (M_0i + (ν_1+1) D_1i^in + (ν_2 + 1) D_2i^in - D_2i^out) / (ρ t/t_i + (ν_2 + 1) D_2/t_i)
= (M_0 + (ν_1 + 1) D_1 + (ν_2 + 1) D_2 - D_2) / (ρ t + (ν_2 + 1) D_2) ≈ ((ν_1 + 1) D_1 + ν_2 D_2) / (ρ t + (ν_2 + 1) D_2).
Notice that the initial structure of the network only enters in the equations through the constant M_0 ≡∑_i M_0i, and as we already said this term can be safely neglected with respect to the other variables. This means that the asymptotic behaviour of D_1 and D_2, and so the exponents β_1 and β_2, should not depend on the initial structure of the network. We checked this fact running simulations with different initial conditions and measuring the exponents β_1 and β_2, checking that we obtain similar result in all cases. The results of this analysis is shown in Fig. <ref>.
In particular, we consider regular trees with different number of levels, but same branching size.
The idea is that we start from a node and trigger nodes, adding new levels of the tree.
Therefore, the first initial network we consider is made by a root, considered triggered, connected to ν_1+1 new nodes.
The second one adds another level to the first one. Therefore, it is a regular tree with branching size (ν_1+1) and 2 levels. Here, the root and the first level are considered triggered, while the leaves are still new.
This structure is the same used in the simulations of the main text (see also Fig. <ref>).
Finally, the third one adds one more level, thus being much bigger than the previous ones.
In the panels we show only two sets of parameters, but we find comparable results for other choices of the parameters.
For both sets of parameters, we find that by increasing the number of levels (and hence the number of nodes and links) in the initial network, the higher-order Heaps' exponents slightly increase.
Moreover, the bigger the network, the longer we see a transient time in which there is a much higher Heaps' exponent. For example, see the blue line in Fig. <ref>(a), where we can clearly find the initial higher slope. Nevertheless, notice that after this period, the pace of discovery, i.e., the exponent, seems to be similar across different initial conditions, thus showing that the initial structure of the network is not relevant for the asymptotic behaviour of the ERRWT.
Now, using Eq. (<ref>) and Eq. (<ref>), we are able to work out an analytical expression for the two exponents β_1 and β_2.
Let us consider various cases.
First, assuming a sublinear regime for D_2 (so that it can be neglected with respect to t), in the large time limit we can further simplify the equations and get
dD_1/dt ≈ ν_1 D_1 / (ρ t) ,
dD_2/dt ≈ ((ν_1+1) D_1 + ν_2 D_2) / (ρ t) .
Solving both of these equations, we obtain an explicit expression for the two exponents β_1 and β_2; in the sublinear case we hence have asymptotically
β_1 = ν_1/ρ ,   β_2 = max(ν_1/ρ , ν_2/ρ)
From the expression of β_1 and β_2 in Eq. (<ref>), we get that the sublinear regime holds only if ν_1 < ρ and ν_2<ρ.
Before moving on to the other regimes, let us notice that β_2 is constrained to be at most equal to 2 β_1. This is because if the number of nodes available to explore in the network is O(N), then the number of available edges is O(N^2). This means that D_2 can at most grow as the square of D_1 in the large time limit, imposing a constraint on the related exponents.
Let us now consider the case in which D_2 grows linearly in time, but not D_1.
Notice that this can happen only provided that ν_1 > ρ/2; in fact, since β_2 is constrained to be smaller or equal than 2 β_1, then it would not be possible for β_2 to be equal to 1 if β_1 = ν_1 /ρ < 1/2.
This regime can be obtained substituting a linear expression for D_2 ∼ a t into Eq. (<ref>).
In this case, if we assume a sublinear behaviour for D_1, we can neglect the second term in the numerator, obtaining
dD_2/dt ≈ ν_2 a t / (ρ t + (ν_2 + 1) a t) = ν_2 a / (ρ + (ν_2 + 1) a) = a   ⟹   a = (ν_2-ρ)/(ν_2 + 1),
thus showing that the condition for this regime to exist is ν_2 > ρ, otherwise the coefficient a would be negative.
Then, we can substitute D_2 = (ν_2-ρ)/(ν_2 + 1) · t into Eq. (<ref>), to get the actual value of β_1:
dD_1/dt ≈ ν_1 D_1 / (ρ t + (ν_2-ρ) t) = ν_1 D_1 / (ν_2 t)   ⟹   β_1 = ν_1/ν_2.
Therefore, D_1 keeps growing sublinearly provided that ν_1 < ν_2.
Notice that in this case there are no conditions on the value of ν_1, which can also be larger than ρ.
Reminding also the network constraint β_2 ≤ 2 β_1, we have that this regime holds provided that β_1 > 1/2, which means 2 ν_1 > ν_2.
Finally, there is one last regime, in which both D_1 and D_2 are linear, i.e., with exponents β_1=β_2=1.
Substituting the two linear expressions D_2 ∼ at and D_1 ∼ bt in Eq. (<ref>) and Eq. (<ref>), we obtain the following system of equations
dD_1/dt ≈ ν_1 b / (ρ + (ν_2 + 1) a) = b
dD_2/dt ≈ ((ν_1 + 1) b + ν_2 a) / (ρ + (ν_2 + 1) a) = a
from which we can work out the values of the two coefficients:
a = (ν_1-ρ)/(ν_2 + 1) ,   b = (ν_1-ρ)(ν_1-ν_2) / ((ν_2 + 1)(ν_1+1)) ,
which give the conditions ν_1 > ρ and ν_1 > ν_2 for this regime to hold.
This comes out from the fact that, as we have seen before, whenever ν_2>ν_1 we have a sublinear regime for D_1.
Summarizing the predicted exponents for the directed version of the model for any choice of the parameter ν_1, ν_2 and ρ, we have:
ν_2 < ρ, ν_1 < ρ :   β_1 = ν_1/ρ ,   β_2 = min(max(ν_1/ρ, ν_2/ρ), 2ν_1/ρ)
ν_2 ≥ ρ, ν_1 ≤ ρ/2 :   β_1 = ν_1/ρ ,   β_2 = 2ν_1/ρ
ν_2 ≥ ρ, ρ/2 < ν_1 < ν_2 :   β_1 = ν_1/ν_2 ,   β_2 = 1
ν_1 ≥ ρ, ν_1 ≥ ν_2 :   β_1 = β_2 = 1
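These case distinctions can be collected in a small helper function (a sketch; the behavior exactly at the regime boundaries is not fixed by the derivation above):

```python
def errwt_exponents(nu1, nu2, rho):
    """Asymptotic Heaps' exponents (beta1, beta2) predicted for the directed ERRWT."""
    if nu1 >= rho and nu1 >= nu2:
        return 1.0, 1.0
    if nu2 >= rho:
        if nu1 <= rho / 2:
            return nu1 / rho, 2 * nu1 / rho
        return nu1 / nu2, 1.0          # here necessarily rho/2 < nu1 < nu2
    # sublinear regime: nu1 < rho and nu2 < rho
    return nu1 / rho, min(max(nu1 / rho, nu2 / rho), 2 * nu1 / rho)
```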
The results above give us an analytical overview of a simplified version of the model, which can still provide the phenomenology we are interested in. In fact, with this analysis we still obtain a different behaviour for D_1 and D_2 with two different Heaps' exponents β_1 and β_2, which are controlled by the parameters ν_1 and ν_2, given ρ.
| http://arxiv.org/abs/2307.04242v1 | 20230709184209 | Reconstructing Air Shower Parameters with MGMR3D | ["P. Mitra", "O. Scholten", "T. N. G. Trinh", "S. Buitink", "J. Bhavani", "A. Corstanje", "M. Desmet", "H. Falcke", "B. M. Hare", "J. R. Hörandel", "T. Huege", "N. Karastathis", "G. K. Krampah", "K. Mulrey", "A. Nelles", "H. Pandya", "S. Thoudam", "K. D. de Vries", "S. ter Veen"] | astro-ph.HE | ["astro-ph.HE", "astro-ph.IM"] |
Measuring the radio emission from cosmic ray particle cascades has proven to be a very efficient method to determine their properties such as the mass composition. Efficient modeling of the radio emission from air showers is crucial in order to extract the cosmic ray physics parameters from the measured radio emission.
MGMR3D is a fast semi-analytic code that calculates the complete radio footprint, i.e. intensity, polarization, and pulse shapes, for a parametrized shower-current density and can be used in a chi-square optimization to fit a given radio data. It is many orders of magnitude faster than its Monte Carlo counterparts. We provide a detailed comparative study of MGMR3D to Monte Carlo simulations, where, with improved parametrizations, the shower maximum is found to have very strong agreement with a small dependency on the incoming zenith angle of the shower. Another interesting feature we observe with MGMR3D is sensitivity to the shape of the longitudinal profile in addition to . This is achieved by probing the distinguishable radio footprint produced by a shower having a different longitudinal profile than usual.
Furthermore, for the first time, we show the results of reconstructing shower parameters for LOFAR data using MGMR3D, and
obtaining a resolution of 22 g/cm^2 and energy resolution of 19%.
Reconstructing Air Shower Parameters with MGMR3D
S. ter Veen
August 12, 2023
================================================
§ INTRODUCTION
When a high-energy cosmic particle impinges on the atmosphere of Earth, it creates an extensive air shower (EAS). The electrons and positrons in the plasma cloud at the shower front drift in opposite directions due to the Lorentz force caused by the geomagnetic field. Due to this acceleration by an Earth’s magnetic field and deceleration in interactions with air molecules a time varying transverse current is created.
This varying current emits radio waves <cit.> where the intensity pattern on the ground, the intensity footprint, depends on the variation of the current with height. There is another subdominant contribution to the radiation from the excess of negative charge accumulated at the shower front, known as the 'Askaryan effect' <cit.>. The penetration depth where the particle number reaches its maximum, , strongly depends on the specifics of the first interaction, which strongly correlates with the mass of cosmic ray primary. Different values of result in differences in the longitudinal variation of the currents which is reflected in the intensity of the radio footprint. Thus can be reconstructed on the basis of the footprint which allows for a determination of the mass composition of cosmic rays <cit.>.
The modeling of radio emission from EAS is generally performed with either microscopic or macroscopic formalisms.
In a microscopic formalism the emission is calculated for each particle as obtained from a Monte Carlo simulation of the EAS. The coherence of the signals emerges naturally in this approach.
ZHAires<cit.> and CoREAS<cit.> are the two most commonly used microscopic codes.
MGMR <cit.>, EVA <cit.> and their latest successor MGMR3D <cit.> are examples of macroscopic codes.
In this framework, the radiation field is derived from the Liénard-Wiechert potential <cit.> where the four-current is parametrized. The amplitude of the four-current is explicitly split into the charge component driving the charge excess emission and the transverse drift current generating the geomagnetic emission.
One advantage of MGMR3D is that it is computationally inexpensive and
produces radio profiles about four orders of magnitude faster than the Monte Carlo simulations. Another advantage is that it is fully deterministic in the sense that one can have control over the outputs by choosing exact shower parameters like the shower maximum and shape parameters of the longitudinal profile, contrary to the inherent randomness in Monte Carlo simulations. For these reasons, MGMR3D can be used to fit a reference radio footprint and obtain the corresponding longitudinal shower parameters that best reproduce the given profile through minimization techniques. There are other, more phenomenological approaches emerging like template synthesis<cit.>, radio morphing<cit.>, that also allow a fast calculation of the radio footprint.
In MGMR3D the charge-current cloud of the air shower is parametrized which necessarily approximates its full complexity.
In particular, the dependence on the energy of the particles forming this cloud is ignored, however, as the important particles in this cloud are relativistic, this is thought to be a reasonable approximation. In a prior publication <cit.>, the parametrization and the foundation of the MGMR3D framework were introduced.
In this follow-up work, we further investigate the performance of MGMR3D on ensembles of air showers and have refined the parametrization in an extensive comparative study with CoREAS. Most significantly, we have used MGMR3D to re-analyze measured data obtained with LOFAR. The MGMR3D-based analysis reproduces, within statistical significance, the results of an earlier analysis based on microscopic CoREAS calculations. MGMR3D offers thus a very CPU-efficient alternative to existing approaches for extracting shower parameters like from the radio footprint, and thus composition, of the original cosmic rays.
Notably, MGMR3D is also a strong tool to map atmospheric electric fields under thunderstorms. In a separate publication <cit.> a detailed study is presented of using MGMR3D for reconstructing atmospheric electric fields during thunderstorm conditions from the radio footprint of air showers.
This article is structured as follows: in Modeling we describe the improved modeling of the radiation profile. In <ref> and <ref>, comparisons between CoREAS and MGMR3D shower profiles are demonstrated, together with the details of the results of fitting X_max. We also present a correction formula to obtain the correct zenith-angle dependency of X_max as compared to CoREAS calculations. Such a correction is necessary since the penetration depth for which the coherent transverse current is maximal generally differs from the penetration depth for which the number of charged particles is maximal, X_max.
We also report a study suggesting a strong correlation between showers with nonstandard shapes of the longitudinal profile and the fit quality of MGMR3D. This indicates a novel future prospect of extracting shower parameters describing the shape of the longitudinal profile, in addition to X_max, with the radio technique using MGMR3D. This will in the end help gather a better understanding of the mass composition and hadronic models. In Section lofardata we show the results of reconstructing X_max with MGMR3D on measured LOFAR cosmic-ray data and compare them to X_max reconstructed with the existing LOFAR analysis method, as well as the reconstruction of the shower core and energy.
§ MODELING RADIO EMISSION FROM EAS
Modeling
The charge and current distributions that drive the radio emission from an EAS is expressed as a four-current j^μ(t,x,y,h) where μ=0 denotes the time (charge) components, and μ=x,y,z denote the space (current) components.
The retarded Liénard-Wiechert potential for an observer at (t_o,x_o,y_o,z_o) in the shower plane with the retarded time t_r is
A^μ(t_o,x⃗_⃗o⃗)=∫ d^3 x⃗' j^μ(t_r,x⃗')/ D ,
where the retarded distance is
D= n√((-β t_o +h)^2 + (1-β^2 n^2)d^2) ,
where the distance between the observer and the point of impact of the core of the air shower is denoted by d, and the index of refraction is denoted by n.
Since for a cosmic-ray air shower the particles are concentrated in a relatively flat pancake-like structure moving with relativistic speeds, the four current is parametrized as
j^μ(t,x,y,h)=w(r)/r f(h,r) J^μ(t) .
DefCloud
The term w(r) / r in (<ref>) is the radial description of the plasma cloud, the second term f(h,r) is the current density of the shower front. These two are normalised such that J^μ(t) is the charge and current for a fixed time integrated over the complete plasma of the EAS. The radial dependence of the transverse
current is parametrized as
w(r)= N_w ζ(ζ + 1)^-2.5,
with ζ=r/R_0.
The function w(r) × r corresponds to the NKG function <cit.> for a fixed shower age s=2<cit.>. These parametrizations were studied and optimized by comparing to the results of CONEX-MC<cit.>. The definition of R_0 is similar to the Molière radius, but not the same as in this context it is a scaling parameter that describes the radial current profile and thus is referred to as radiation radius.
In the original formulation of MGMR3D the radiation radius was taken to be a constant. While fitting R_0 for different showers, we observed that the optimum value for R_0 depends on the distance from X_max to the shower core (D_X_max). We find that for distances smaller than 5 km R_0 is proportional to the distance and reaches saturation at R_0=50 m for larger distances, independent of zenith angle. This is shown in mol_radius. This linear dependency at smaller D_X_max is now included in MGMR3D as R_0 = 10 D_X_max (with D_X_max in km and R_0 in m, consistent with the saturation value of 50 m at 5 km).
The current density at a distance h behind the shower front is parametrized as
f(h,r) = N_f η / (e^√(η) + 1).
where N_f is a normalisation constant. The parameter λ, folded in as η=h/ λ, accounts for the pancake thickness scaling and has a radial dependence. The radial dependence of the pancake thickness is described in a way that it is constant near the shower axis and increases linearly at distances away from the shower axis where particles tend to have less energy and thus lag behind. The parametrizations for the radial and pancake function were also studied and optimized with comparison to the results of CONEX-MC <cit.>.
The functions w and f, which depend on the distance to the shower axis, are normalized according to ∫_0^∞ w(r) dr=1 and ∫_0^∞ f(h,r) dh=1 ∀ r.
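For orientation, the parametrized profiles can be evaluated as in the following Python sketch. Function names are ours, the saturation at 50 m is read off from the text above, and the grouping of the pancake function follows the expression given above and should be taken as an assumption:

```python
import numpy as np

def R0_radiation_radius(d_xmax_km):
    """Radiation radius in metres: grows linearly with the distance to Xmax
    (in km) and saturates at 50 m beyond roughly 5 km."""
    return np.minimum(10.0 * np.asarray(d_xmax_km, dtype=float), 50.0)

def w_radial(r, R0, N_w=1.0):
    """Radial profile w(r) = N_w * zeta * (zeta + 1)**(-2.5), with zeta = r / R0."""
    zeta = r / R0
    return N_w * zeta * (zeta + 1.0) ** (-2.5)

def f_pancake(h, lam, N_f=1.0):
    """Pancake profile, assuming the grouping f = N_f * eta / (exp(sqrt(eta)) + 1)."""
    eta = h / lam
    return N_f * eta / (np.exp(np.sqrt(eta)) + 1.0)
```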
§.§ Parametrization of the currents
ParamCurr
The original parametrization of the charge cloud in MGMR3D, as described in <cit.>, was based on CONEX-MC simulations <cit.>. There were however important inconsistencies in the extracted shower parameters as well as in the observed radiation profile, when compared to CoREAS results. A reason behind these differences is that in the parametrization the energy distribution of the particles in the shower is not taken into account. Parameters like the drift velocity and charge excess are strongly dependent on the energy range of particles used to predict these averaged quantities. To mitigate these issues we revisit the parametrizations in this work by comparing the results of MGMR3D and CoREAS calculations for an ensemble of air showers. This leads to improved parametrizations, in particular for modeling the drift velocity (cf. ParamCurr) and for the longitudinal profile of the current, J^μ in (<ref>). The details of the comparison between MGMR3D and CoREAS are presented in Compare.
§.§.§ Transverse current
The transverse current is given by,
J⃗_⊥(t_s) = N_c(X_z) u⃗_⊥(X_z),
where the transverse drift velocity is denoted as u⃗_⊥(X_z), and N_c is the number of charged particles at depth X_z.
It should be noted that X_max, the penetration depth of the maximum number of charged particles, is only indirectly related to the penetration depth of maximum transverse current, since the factor between the two, the drift velocity, depends on air density as well as the mean energy of the particles in the shower.
The drift velocity increases with increasing forces
acting on the charges. This becomes particularly important for large electric fields in thunderstorm clouds, and special treatment is required so that the
particles do not exceed the speed of light <cit.>. The transverse drift u⃗_⊥() is therefore expressed as u⃗_⊥()=cv⃗/√(1+v^2/v_0^2),
where the parameter v_0 is adjusted to the value 0.2, and v is taken proportional to the Lorentz force.
In the original parametrization used in <cit.> no dependence on air density was assumed in the parametrization of the drift velocity. We noted that a √(ρ) scaling was necessary to obtain agreement with the results of the CoREAS calculation.
We thus updated the formula for the drift velocity to read
v⃗(X)= c F⃗_⊥/F_t×a_t+1/-X_t/-X_t + a_t×√(ρ()/ρ())
Def-v
with X_t=50 g/cm^2, F_β=250 keV/c, and a_t=3.
F⃗_⊥ is the total transverse force acting on the particles, and for the air showers when no thunderstorm is present it only consists of the Lorentz force, F⃗_⃗⊥⃗=ev⃗_s ×B⃗, where v⃗_⃗s⃗ is the velocity of the shower front, e is the elementary charge, and B⃗ is Earth's magnetic filed. The second factor in (<ref>) takes into account the fact that the drift velocity depends on the penetration depth in the atmosphere, accounting for the changing mean energy of the shower particles.
It is good to mention that this parametrization becomes less accurate for the highest zenith angles, where an additional dependence on emission height is seen. This correction is not yet included in the code, which should therefore be used with caution when studying highly inclined showers above 60 degrees zenith angle. For the study reported in this article both simulated and recorded showers are well below this limit.
The physical interpretation of the √(ρ) scaling is not trivial. Interestingly, the drift velocity has the same form as the terminal velocity due to the macroscopic drag force acting opposite to the relative motion of any object moving in a fluid. The drag force of air is proportional to the square of the speed of the object. For a falling object in air the terminal velocity can be reached when the force due to gravity balances the drag force
mg=F_D= 1/2ρ C A v^2 ,
with C,A,v being the drag coefficient, area of the object and terminal velocity respectively. Solving for v results
v=√((2mg/ρ CA)).
The result can be generalized to situations where the object is accelerated by other forces. In the case of the electron drift velocity that would be the Lorentz force.
The equivalent of the drag force is actually due to the many elastic collisions of the relativistic electron in the shower front with neutral air molecules. A relativistic electron in the shower lives roughly a microsecond (300 meters)
before being stopped in a hard inelastic collision. Within
that microsecond, the electron actually undergoes more
than a million elastic collisions with particles in the air.
While this provides an intuitive understanding of the ρ^-1/2 scaling, the assumption that an electron plasma experiences the same drag as a macroscopic object is of course not easily justified. It is worth mentioning that in <cit.> a similar density dependence on the electric field amplitude of radio pulse was reported in a study for radio morphing method.
§.§.§ Charge excess
The charge excess in the shower is given as, J_Q(z)= e N_c() ρ_c() where e is the charge of the electron and the proportionality factor is ρ_c() defined in the most recent form,
ρ_c() = J^0_Q 3 - -X_c/ + -X_c
× (1 - e^--X_c/2(-X_c)) ρ()/ρ_c√(ρ()/ρ_c)Def-chxcurr
where J^0_Q is a normalisation constant, ρ_c=0.06 g/cm^3, and X_c=50 g/cm^2. The first two factors in (<ref>) are inspired by comparing to the results of CONEX-MC simulations including simulations for highly inclined showers with zenith 65 degrees.
The last term including the square root dependency on density is inspired from the treatment of transverse current in (<ref>).
§.§ Parametrization of the longitudinal profile
ParamLP
There are two common ways to parametrize longitudinal profile, the number of charged particles at a depth , one is the Gaisser-Hillas formula <cit.>, the other is the R,L formula in <cit.>,
N^R-L_c(X_z) = N_max × (1 - (R/L)(X_max - X_z))^(R^-2) e^((X_max - X_z)/(R L))
N^G-H_c(X_z) = N_max × ((X_z - X_0)/(X_max - X_0))^((X_max - X_0)/Λ) e^((X_max - X_z)/Λ)
where the number of particles at the shower maximum, N_max is taken proportional to the energy of the cosmic ray,
N_max= N_E E_cr . norm
The constant N_E is used as a norm factor when fitting the results of MGMR3D to data.
The main difference between the two parametrizations is that the parameters in (<ref>) are related to the physics of the shower such as the depth of first interaction while R and L in (<ref>) relate more directly to the rise and fall of the distribution <cit.>. These more general parametrizations provide the option to study effects of the longitudinal shape parameters other than on the radio footprint <cit.>.
In principle, either of these parametrizations can be used to describe the longitudinal profiles in MGMR3D. We have used (<ref>) throughout this analysis.
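A short Python sketch of the two parametrizations is given below. Since the corresponding expressions above were reconstructed from the extracted text, the exact sign conventions should be taken as assumptions; function names are ours:

```python
import numpy as np

def N_RL(X, N_max, X_max, R, L):
    """R-L parametrization of the longitudinal profile (as reconstructed above)."""
    base = 1.0 - (R / L) * (X_max - X)
    # only meaningful where the bracket is positive
    return N_max * base ** (R ** -2) * np.exp((X_max - X) / (R * L))

def N_GH(X, N_max, X_max, X0, Lam):
    """Gaisser-Hillas parametrization of the longitudinal profile."""
    frac = (X - X0) / (X_max - X0)
    return N_max * frac ** ((X_max - X0) / Lam) * np.exp((X_max - X) / Lam)
```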
The intensity of the radio pulse depends on the energy of the cosmic ray which is treated as a normalisation factor, a proxy for the air shower energy, in MGMR3D
when a χ^2 fit to data is performed.
This normalisation factor was introduced in (<ref>).
Thus, when fitting the radio footprint as generated by CoREAS simulations for showers with a fixed energy, the normalisation factor should be constant, barring shower-to-shower fluctuations. In normvsdensity, we indeed show this is approximately constant, for showers at various zenith angles. These values also have a global normalisation which is constant for all showers.
§ STOKES PARAMETERS AS OBSERVABLES
stokes_obs
We investigate the radio footprint of an air shower using Stokes parameters since these capture the complete polarization structure of the radio pulse. Because the objective of the present work is to develop a scheme for data interpretation, we construct the Stokes parameters specific for the LOFAR frequency band, between 30 – 80 MHz band.
The Stokes parameters can be expressed in terms of the
complex observable E_i=E_i + iÊ_i, where E_i is the
electric field component in ê_ and ê_ directions which are by construction perpendicular to the propagation direction of the shower, and Ê_i is its Hilbert transformation <cit.> (in arbitrary units), as
I = 1/ N∑_0^n-1( | E|^2_i, + | E|^2_i,)
Q = 1/ N∑_0^n-1( | E|^2_i, - | E|^2_i,)
U +iV = 2/ N∑_0^n-1( E_i, E_i,^* ) .
We sum over the entire signal trace while calculating the values from CoREAS simulations.
The linear-polarization angle with the -axis, ψ, can be calculated directly from the Stokes parameters as ψ=1/2tan^-1 (U/Q). The relative amount of circular polarization is given by V/I and can be attributed to a time lag between the peaks of
the charge-excess and transverse-current pulses <cit.>.
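For concreteness, the Stokes parameters defined above can be computed from the two filtered electric-field traces with a few lines of Python; this is only a sketch of the definitions (using the analytic signal from scipy to obtain the Hilbert transform), not the LOFAR analysis code, and the trace arrays are assumed to be given.

import numpy as np
from scipy.signal import hilbert

def stokes_parameters(E_vxB, E_vxvxB):
    # Stokes I, Q, U, V from the two E-field traces in the shower plane
    # (components along e_vxB and e_vxvxB), following the definitions above.
    n = len(E_vxB)
    c1 = hilbert(E_vxB)      # analytic signal E + i*Hilbert(E), vxB component
    c2 = hilbert(E_vxvxB)    # same for the vxvxB component
    I = (np.abs(c1)**2 + np.abs(c2)**2).sum() / n
    Q = (np.abs(c1)**2 - np.abs(c2)**2).sum() / n
    UV = 2.0 * (c1 * np.conj(c2)).sum() / n
    return I, Q, UV.real, UV.imag

# linear-polarization angle (quadrant-safe variant of 0.5*arctan(U/Q)) and
# relative circular polarization:  psi = 0.5*np.arctan2(U, Q);  circ = V/I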
§.§ Noise-error estimate on Stokes parameters
MGMR3D performs a fit of the input radio profile through a Levenberg-Marquardt
minimization procedure <cit.>, that is based on a steepest descent method. The reduced χ^2 of
the fit is defined as
χ^2=1/N_ndf ∑_a,f(f_c^a-f_m^a)^2/σ_f^a^2errdef
where f_c^a, and f_m^a are the different Stokes parameters calculated with CoREAS and MGMR3D respectively for antenna a, N_ndf
is the number of degrees of freedom, and σ_f^a is the error on the Stokes parameter.
It is important to note that, since we are performing a model-to-model comparison here, the numerator in (<ref>)
does not have a noise contribution and the χ^2 can be ≪ 1. For the sake of clarity we refer to this quantity as χ̃^2 throughout this paper, to distinguish it from the standard χ^2.
In the present calculations, we calculated σ_f for the comparison with CoREAS as
σ_I^2 = Δ t/2( c ϵ2/Nσ_n I + 2/Nσ_n^2) = Δ_t/2(2 σ_n/N(c ϵ I_0+σ_n))
σ_Q^2 = Δ t/2( c ϵ 2/Nσ_n I + 2/Nσ_n^2 )
σ_U^2 = σ_V^2= Δ t/2(c ϵ 2/Nσ_n I + 2/Nσ_n^2) ,
errormodel
where N is the length of the trace and σ_n is the noise fluence per sample, c, ϵ are the natural constants - velocity of light and permittivity of air in S.I. units, and Δ t is the width of the time bins.
For measured cosmic ray data the value of the noise level σ_n is obtained from a time window of the recorded signal trace where no significant signal is present. In the case of MGMR3D, the value is chosen such that it is a close representation of the measurement. The value is shown in mytable1.
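The error model of (<ref>) translates directly into code. The sketch below is illustrative only: it approximates the permittivity of air by the vacuum value, and the noise fluence per sample σ_n, the trace length N and the time-bin width Δt are placeholders to be taken from the measurement (cf. mytable1).

import numpy as np

c_light = 299792458.0      # speed of light [m/s]
eps0 = 8.854187817e-12     # permittivity, vacuum value used as approximation for air

def stokes_errors(I, sigma_n, N, dt):
    # noise errors on the Stokes parameters, following Eq. (errormodel)
    var_I = 0.5 * dt * (c_light * eps0 * 2.0 / N * sigma_n * I + 2.0 / N * sigma_n**2)
    # Q, U and V have the same expression in Eq. (errormodel)
    return np.sqrt([var_I, var_I, var_I, var_I])

def reduced_chi2(f_coreas, f_mgmr3d, sigma_f, n_dof):
    # model-to-model comparison, Eq. (errdef); can be << 1 (chi-tilde^2)
    return np.sum(((f_coreas - f_mgmr3d) / sigma_f)**2) / n_dof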
§ COMPARISON TO COREAS SIMULATIONS
Compare
With the improved parametrizations of the current profile as given in ParamCurr,
we validate the performance of MGMR3D by fitting the radio footprint of showers simulated with CoREAS to that of MGMR3D. There is a range of parameters available in the framework of MGMR3D that can be tuned to achieve a good fit. We follow the approach where generic shower parameters, common to all showers, are kept fixed, such as those given in mytable1, while others, in particular those describing the longitudinal profile of the shower (, the shower maximum, and E, the shower energy), are fitted for each shower.
CoREAS simulations are performed on a star-shaped layout of antennas with the center on the shower axis and 8 arms. Each arm contains 20 antennas, with a spacing of 25 m in the shower plane. The radio pulses are filtered between 30 – 80 MHz.
The result of each CoREAS simulation for the intensity I, for all antennas of the grid, is fitted with MGMR3D using a steepest-descent algorithm treating and E as free parameters. In these calculations, the core position is kept fixed to the center of the grid. In later applications to LOFAR data (lofardata) the core position is also treated as a free parameter.
§.§ Single shower comparisons
The different panels in Stokes_lowzen and Stokes_highzen show the Stokes parameters for two showers coming in at a 26^∘ and 46^∘ zenith respectively. The top panels show the Stokes parameter as a function of antenna position for both MGMR3D and CoREAS and the bottom panels show the relative difference between the two models defined as ΔI= (I_c-I_m)/σ_I. The realistic error model described in (<ref>) is used.
All the plots show a common feature that the magnitude of ΔI varies with antenna positions and has zero crossings.
The magnitudes of the Stokes parameters depend on the azimuthal orientations of the antennas with respect to the core.
For example, along the v×B direction there is full linear polarization, resulting in Q/I=1. It deviates from unity for other directions,
due to a small contribution from the charge-excess emission.
Similarly, the circular polarization,
expressed by V/I, is small and azimuth angle dependent.
The Stokes parameters U and V for the two calculations are shown to agree well within 250 meters, while the differences increase at larger distances. These differences seem to point to an underestimate of the difference in emission heights between charge excess and transverse current radiation in MGMR3D.
Stokes_highxmax shows an example of a shower with a very large ≈ 950 g/cm^2, which results in a poor agreement between CoREAS and MGMR3D; such cases can be expected when the shower develops closer to the ground. Further details for such cases are discussed in simu_fit.
In the rest of this work, we concentrate on reconstructing the shower maximum using Stokes I. We restrict ourselves to I as it is the Stokes parameter that can most accurately be measured experimentally, and we have also noted that adding other Stokes parameters does not lead to any significant improvement in the reconstruction of air shower parameters.
§.§ Fitting the shower maximum
simu_fit
In this section, we report the results of reconstructing with MGMR3D by fitting an ensemble of CoREAS showers. This CoREAS library was produced for each detected shower in LOFAR, where at least 25 proton and 10 iron showers are simulated with the same energy and arrival direction obtained from a preliminary reconstruction for this shower <cit.>.
We have excluded showers with exceeding 750 g/cm^2 because for these showers the footprint becomes extremely small and MGMR3D does not provide a good agreement with CoREAS radio profiles.
In practice the fitted distance range is regulated by the OBS_DIST parameter; an optimum value was chosen that works for most of the showers, while very deep showers may require further tuning and occasionally fail to converge. Such showers are, however, small in number, and an example of a poor fit has been shown in Stokes_highxmax.
The radio footprints with MGMR3D are fitted to CoREAS with as a free parameter for each shower with arrival direction and energy same as CoREAS. As mentioned earlier, for CoREAS simulations the shower core positions are known, hence we do not fit the core positions. But for real data core positions become important fit parameters while obtaining the radio profile that best describes the data. This is discussed in detail in the next section.
We refer to the values obtained from CORSIKA as ^true and the reconstructed values as ^fit.
The results are shown in xmaxfit_coreas. This considers mixed primaries with proton and iron for various showers. The error calculated for the realistic noise model given in (<ref>) is used. We have applied a quality cut based
on the distance to from the ground. Details of this cut are explained in the following paragraphs. The black crosses are the points that are excluded by the cut.
A straight line is fit through the selected points, shown by the blue points.
It is evident that there is a very strong correlation between the reconstructed and the CoREAS truth values. The slope and intercept of the fit are 0.98 and 19, respectively. The distribution of the deviation of ^fit from the fitted line, denoted by Δ X', is shown in the inset histogram of xmaxfit_coreas. This corresponds to a resolution of 9.76 g/cm^2.
It is also worth mentioning that we have studied the fits on proton and iron showers separately and found no bias with respect to the primary particle type. The fit results are found to be almost identical; we have thus used the combined set of showers for the rest of the analysis.
The shift in from the true value is defined as
Δ= ^fit - ^true. For the majority of the showers, Δ is independent of to the first order, as suggested by the near unity slope.
However, we have found a dependence on the shower zenith angle, as shown in zenith_correction, which includes the same showers as in xmaxfit_coreas. We see from the plot that there are a handful of outliers, a few in the positive direction of Δ and more in the negative direction. The positive ones will be discussed in the next section. The negative outliers appear to be from showers that are developed closer to the ground.
In order to obtain a clean parametrization to capture the relationship between Δ and zenith, we have used a cut on the outliers.
These outliers are excluded based on a cut on the distance from the core of the shower on the ground to . We have chosen a conservative cut, accepting showers with a distance to greater than 3 km, in the fit that captures the trend of Δ with zenith angle, shown by the red curve. The excluded points are shown as black crosses. For showers that develop closer to the observer there are systematic differences between MGMR3D and CoREAS (see also the radio LDF example in Stokes_highxmax), which can be attributed to the fact that for such showers more detailed parameters, like the dependence of the thickness and shape of the shower front on the distance to the shower axis, start to become important for the radio footprint, leaving room for more fine-tuning of specific showers with MGMR3D. Another important point is that the coherence and far-field assumptions underlying the general emission mechanism in MGMR3D become less accurate when the emission is generated close to the antennas. However, for the majority of the showers the generic approximations hold and the results of MGMR3D are in good agreement with CoREAS.
The coefficients of the fit from zenith_correction are given in mytable2. This parametrization can be used as a correction factor to estimate the expected value from ^fit in general and is used while fitting LOFAR data to MGMR3D in section lofardata.
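For orientation, the distance from the core to used in the quality cut can be estimated with a simple flat-Earth, isothermal-atmosphere approximation as sketched below; the scale height and the total vertical column depth are rough placeholder values, and this is not the atmosphere treatment used in MGMR3D or CORSIKA.

import numpy as np

X_ATM = 1036.0    # total vertical atmospheric depth at ground [g/cm^2], approximate
H_SCALE = 8.4e3   # isothermal scale height [m], approximate

def distance_to_xmax(xmax, zenith_deg):
    # approximate distance [m] along the shower axis from a ground-level core
    # to the point where the slant depth equals xmax (flat-Earth approximation,
    # only reasonable for moderate zenith angles)
    cos_z = np.cos(np.radians(zenith_deg))
    x_vert = xmax * cos_z                      # vertical depth of the shower maximum
    if x_vert >= X_ATM:
        return 0.0                             # shower maximum at (or below) the ground
    h_max = -H_SCALE * np.log(x_vert / X_ATM)  # height of the maximum above ground
    return h_max / cos_z                       # distance along the shower axis

# e.g. a 650 g/cm^2 shower at 45 deg zenith is roughly 10 km away, while a very
# deep (950 g/cm^2) vertical shower falls well below the 3 km cut used in the text.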
§.§ Sensitivity to shower shape parameters-R and L
It appears from zenith_correction that there are a few showers, where the fit from MGMR3D is overestimated significantly from their CoREAS truth values that are not affected by the distance to cut described in the previous section. In this section, we take a closer look at some of these cases.
It is found that these outliers have significantly larger χ̃^2 values than the other CoREAS simulations for the same shower angle and energy. We have ruled out the possibility of non-convergence of the fit by studying the χ̃^2 surface for , which showed a clear global minimum for all cases. While probing other reasons for such differences, we have found that these showers have longitudinal profiles that differ considerably from the rest of the ensemble. These differences are observed in terms of the shape parameters R and L as described in (<ref>). It appears that for these outliers the true R and L values, obtained from fitting the CORSIKA longitudinal profiles, are quite extreme compared to their central values. A zoom of the subset of showers containing the outliers is shown in LR_extreme, with their true R and L color coded. The trend demonstrates the correlation between large Δ, high χ̃^2 (normalised between 0 and 1), and extreme R and L.
The CORSIKA longitudinal profile for the extreme case is shown in LR_property.
The shower shape is wider than usual and this could indicate the presence of an energetic secondary shower.
In MGMR3D the R and L parameters are fixed to central values (see mytable1) and we fit only . This can explain the large shift in the predicted , which arises to compensate for the difference in longitudinal profile; however, the χ̃^2 for these outliers still remains higher than for the rest of the ensemble. This example clearly shows two important results. Firstly, the radio profiles are influenced by parameters of the longitudinal profile other than only . Secondly, MGMR3D is sensitive to these parameters. Extracting all three parameters, R, L, and , from the MGMR3D calculation requires a more dedicated effort and is currently beyond the scope of this paper. However, the outliers are only a small fraction of the total number of showers, and this would have only a small effect on the zenith-based correction proposed in mytable2.
§ APPLICATION TO LOFAR DATA
lofardata
In this section we discuss various steps of applying MGMR3D to experimental data and estimate . We have used LOFAR cosmic ray data for this purpose.
Currently, LOFAR provides the highest precision for the determination of with the radio technique <cit.>. The dense core of LOFAR consists of 288 low-band dipole antennas within an area with a diameter of 320 meters, known as the Superterp.
The radio emission from air showers in the frequency range 30 – 80 MHz is recorded by the LOFAR low-band antennas <cit.>. An array of particle detectors, LORA, installed on the Superterp provides the trigger for the detection of the air showers <cit.>.
The usual reconstruction technique used at LOFAR is based on
the production of dedicated CoREAS simulation sets for each detected air
shower. The number of simulations needed to reconstruct the
shower maximum is optimized with CONEX<cit.>. A set of CORSIKA simulations with proton and iron primaries is produced for each detected cosmic ray.
The radio emission is simulated in a star-shaped pattern for antenna positions in the shower plane using CoREAS.
For each CoREAS simulation the value of as
well as the χ^2 is determined when fitting the core position to data.
for a measured shower is then
reconstructed by fitting a parabola to the χ^2 vs Monte
Carlo contour. The latest results on LOFAR cosmic ray analysis can be found in <cit.>. While such a Monte-Carlo based approach is precise, it is compute-intensive. Thus, fast alternatives such as MGMR3D are desired, where is reconstructed in a steepest descent optimization of the parametrized radio
profile to given data.
The details of applying MGMR3D to data are as follows: the quantity P_ data or P_ mgmr3d is calculated as the time-integrated voltage squared over a 55 ns window centered around the pulse maximum, and is used as
the observable. The error, σ_P, is estimated from the measurement of the noise level from data. This is the same procedure as used in <cit.>.
This implementation is different from the previous case of fitting only to simulations, where the Stokes parameters, integrated over the full trace, were used as observables.
The reduced χ^2 to be minimized in MGMR3D is defined as
χ^2 =1/N∑_antennas((P_ data - P_ mgmr3d (x_ core, y_ core, ))/σ_P)^2 ,
eq_norm
and the core positions (x_core, y_core) are the free parameters of the fit. The shower energy for the MGMR3D calculation is determined from the normalization constant, see (<ref>).
In fitting to the data we have kept the longitudinal shape parameters R, and L as well as the charge excess parameter J_0Q fixed to the values given in mytable1. Including these parameters in the fit sometimes gave rise to a poor convergence without considerably improving the fit quality.
The core reconstruction from a parametrization of the radio LDF <cit.> is used as the initial guess for the core position, as was also done in the CoREAS reconstruction method. In order to fit , it is found that starting from a small value between 300 and 400 g/cm^2 leads to faster convergence.
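Schematically, the minimization of (<ref>) amounts to a standard least-squares problem in (x_core, y_core, ). The sketch below illustrates this setup with a toy lateral distribution standing in for the actual MGMR3D prediction; the function toy_power and all numbers are placeholders and are not part of the released code.

import numpy as np
from scipy.optimize import least_squares

def toy_power(x_core, y_core, xmax, antennas):
    # stand-in for an MGMR3D call (NOT the real model): a smooth LDF whose
    # width grows with xmax, just to make the example self-contained
    r = np.hypot(antennas[:, 0] - x_core, antennas[:, 1] - y_core)
    width = 50.0 + 0.2 * xmax
    return np.exp(-(r / width)**2)

def residuals(params, antennas, P_data, sigma_P):
    x_core, y_core, xmax = params
    return (P_data - toy_power(x_core, y_core, xmax, antennas)) / sigma_P

rng = np.random.default_rng(1)
antennas = rng.uniform(-300, 300, size=(60, 2))            # antenna positions [m]
P_data = toy_power(12.0, -8.0, 680.0, antennas) + rng.normal(0, 0.01, 60)
sigma_P = np.full(60, 0.01)

p0 = np.array([0.0, 0.0, 350.0])                           # LDF core guess, shallow start
fit = least_squares(residuals, p0, args=(antennas, P_data, sigma_P))
chi2_red = np.sum(fit.fun**2) / (len(P_data) - len(p0))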
The reconstructed with MGMR3D are shown in comparison to the obtained using the LOFAR reconstruction technique in xmaxfit1. The values reconstructed with MGMR3D are corrected with the zenith correction formula described in mytable2. We have also implemented the distance to based quality cut as described in simu_fit. The red line is a linear fit to the data with a slope of 0.85 and intercept 91. The shaded area is the 1-σ error on the fit.
The black line is the prediction from simulations only, as discussed (cf. xmaxfit_coreas).
From the comparison shown in xmaxfit1 an estimate can be obtained for the accuracy for ^mgmr3d.
The combined error on is calculated from the standard deviation of the Gaussian fitted to the distribution of ^mgmr3d - ^CoREAS as shown in calc_err.
Assuming the errors due to MGMR3D and CoREAS reconstruction are uncorrelated the total error
σ_tot can be written as
σ^2_tot = σ^2_coreas + σ^2_mgmr3d,
σ_coreas is obtained from the mean of the distribution of errors on reconstructed with CoREAS for individual events, using a Monte-Carlo method <cit.>.
With σ_coreas= 14.5 g/cm^2 we obtain σ_mgmr3d= 22.4 g/cm^2. This value is used as the resolution of the reconstruction with MGMR3D from LOFAR data and is shown as the black cross in xmaxfit1. Since for CoREAS the shower is given by a microscopic CORSIKA calculation, it is possible to obtain the error on from the quality of the fit, but for MGMR3D such a procedure is not possible. The reason is that in MGMR3D calculations the parameters entering the longitudinal profile can easily vary well outside the physical regime.
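Spelling out the error propagation, the numbers quoted above imply a fitted spread of roughly σ_ tot≈ 26.7 g/cm^2 for the distribution in calc_err (a value that is not quoted explicitly here and is inferred from the other two numbers), since
σ_ mgmr3d=√(σ_ tot^2-σ_ coreas^2)=√(26.7^2-14.5^2) g/cm^2≈ 22.4 g/cm^2.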
An example of the radio profile of a reconstructed shower is shown in Appendix A for both CoREAS and MGMR3D.
§.§ Reconstruction of shower core and energy:
In deltaxmax_coreshift we show the correlation between the core positions reconstructed using MGMR3D and CoREAS reconstructed core positions.
For the majority of the showers, the core positions show good agreement between COREAS and MGMR3D reconstructions.
However, there are a few exceptions with large deviations between MGMR3D and CoREAS. This effect is not found to be correlated with either Δ or χ^2. Some of these events are hard to reconstruct because the signal-to-noise ratio is relatively low, while others have a core that is not well contained by the LOFAR stations. In both cases, small differences between CoREAS and MGMR3D can have an impact that is larger than usual.
In E_LOFAR the differences in cosmic ray energy reconstruction
between MGMR3D, using (<ref>), and CoREAS are compared.
The top panel of E_LOFAR shows that there is no clear correlation between the two. The bottom panel of the figure shows the relative difference, defined as,
2 (E_MGMR3D-E_CoREAS)/(E_MGMR3D+E_CoREAS)
rel_en
to make the differences more quantitative.
This shows that there is no average offset between the two energy reconstructions. The spread of 19% in the distribution is comparable to the LOFAR energy resolution of 14% <cit.>.
§ SUMMARY AND CONCLUSIONS
The MGMR3D code, which uses an analytic parameterization of the plasma cloud, provides a promising alternative to obtain the longitudinal structure of an air shower that best reproduces the measured radio footprint through minimization. It is computationally orders of magnitude faster than its microscopic counterparts that are customarily used for analyzing radio emission from cosmic rays.
We have reported on a detailed comparison for a large ensemble of showers simulated with CoREAS and MGMR3D. This resulted in an optimized parameterization inside MGMR3D, in particular concerning the drift velocity, the charge excess, and the radial structure. With the optimized parametrization a strong agreement with microscopic CoREAS-simulations were obtained for the lateral distribution functions for radio emission with a relative difference in intensity up to 10%.
As a follow-up step we have shown that MGMR3D can be used in a chi-square fit procedure to extract the shower maximum for a large ensemble of showers simulated by CoREAS. The results show a very good agreement with a small systematic zenith-angle dependency, which is up to 6-8 g/cm^2 for zenith angles not exceeding 50 degrees. We introduce a correction formula to compensate for this. However, MGMR3D is not yet fully optimized for highly inclined showers with zenith angles above 65 degrees. This is a prospect for a future effort and would be useful for simulation studies for experiments such as GRAND (the Giant Radio Array for Neutrino Detection), which is designed to detect highly inclined air showers.
We have also found that MGMR3D is sensitive to the effects on the radio footprint of additional parameters describing the shape of the longitudinal shower profile, namely R and L. These parameters have the potential to provide further insight into the mass composition, to constrain hadronic interaction models, and to aid the astrophysical interpretation of cosmic-ray sources, in addition to <cit.>. Probing these subtle parameter spaces requires extremely dense antenna layouts such as the Square Kilometre Array (SKA) <cit.>, and the number of required simulations also multiplies manyfold, which is prohibitive for present compute-intensive Monte Carlo frameworks. MGMR3D thus opens up the opportunity to make such multi-parameter studies feasible by producing large simulation sets with very modest computing resources. A detailed study along these lines will be carried out in follow-up work.
As a final proof of the proposed procedure we have used MGMR3D to extract from LOFAR data that have been used in earlier studies.
An average resolution of 22 g/cm^2 is found, which is competitive with the average resolution of 14.5 g/cm^2 obtained using the CoREAS-based method. This shows that the latest version of MGMR3D, for the specific geometries discussed in this paper, can be used as a fast and efficient tool to reconstruct shower parameters; for high-precision studies it can be combined with Monte Carlo simulations as a preliminary estimator to help reduce the required simulation landscape and expedite the analysis.
§ EXAMPLE FROM LOFAR DATA
A comparison of the radio profiles between CoREAS and MGMR3D for one measured shower is shown in ldf_fit1 and the corresponding reconstructed parameters are in table_reco.
In this section we have a closer look at some specific events
and study the LDF profile. First we show a couple examples with
very good agreement between MGMR3D and LOFAR reconstruction as seen
from the LDF profile in ldf_fit. The reconstructed values
are also very close, within a difference of 3-4 g/cm^2, and the reconstructed core positions are also reasonable, agreeing to within 12 meters.
To get some insight in the cause for the outliers shown in xmaxfit1 and deltaxmax_coreshift we show in ldf_fit the detailed radio footprint for some of these outlier events.
These results show that, for the cases where there is a large discrepancy between the CoREAS and MGMR3D results, the error bars on the data are large and either fit seems acceptable.
§ PROGRAMMING DETAILS
The latest version of the program can be downloaded from <cit.>. This version contains the improved parametrizations, realistic errormodel discussed in this paper, as well as the functionality to include antenna response functions, relevant for the application to measured data.
§ ACKNOWLEDGEMENT
P. Mitra acknowledges financing by the Polish National Agency for Academic Exchange within Polish Returns Program no. PPN/PPO/2020/1/00024/U/00001. This research is also funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant No. 103.01-2019.378. BMH is funded by ERC Grant agreement No. 101041097. N.Karastathis acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Projektnummer 445154105. ST acknowledges funding from the Khalifa University Startup grant, project code 8474000237-FSU-2020-13. LOFAR, the Low Frequency Array designed and constructed by ASTRON, has facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the International LOFAR Telescope foundation under a joint scientific policy.
|
http://arxiv.org/abs/2307.05883v1 | 20230712030815 | An $L^2$ Dolbeault lemma on higher direct images and its application | [
"Chen Zhao"
] | math.CV | [
"math.CV",
"math.AG"
] |
An L^2 Dolbeault lemma on higher direct images and its application
Chen Zhao
[email protected]
School of Mathematical Sciences,
University of Science and Technology of China, Hefei, 230026, China
Given a proper holomorphic surjective morphism f:X→ Y from a compact Kähler manifold to a compact Kähler manifold, and a Nakano semipositive holomorphic vector bundle E on X, we prove Kollár type vanishing theorems on cohomologies with coefficients in R^qf_∗(ω_X(E))⊗ F, where F is a k-positive vector bundle on Y.
The main inputs in the proof are the deep results on the Nakano semipositivity of the higher direct images due to Berndtsson and Mourougane-Takayama, and an L^2-Dolbeault resolution of the higher direct image sheaf R^qf_∗(ω_X(E)), which is of interest in itself.
§ INTRODUCTION
Let f:X→ Y be a proper holomorphic surjective morphism from a compact Kähler manifold X to a compact Kähler manifold Y of dimension m. Let ω_X be the canonical line bundle on X and let E be a Nakano semipositive vector bundle on X. The main purpose of this article is to show the following Kollár type vanishing theorem.
Let F be a k-positive Hermitian vector bundle on Y of rank r. Then
H^i(Y,R^qf_∗(ω_X(E))⊗ F)=0, ∀ i≥ 1, k≥min{dim Y-i+1,r}.
Here ω_X(E):=ω_X⊗ E and R^qf_∗(-) denotes the qth higher direct image sheaf.
When F is Nakano positive, it reduces to a special case of Matsumura's Kollár-Ohsawa type vanishing theorem <cit.>.
As a corollary, we can deduce the following vanishing theorems.
Let F,F_1,…,F_l be holomorphic vector bundles on Y and let L be a holomorphic line bundle on Y. Then the following hold.
* If F is ample, L is nef and rank(F)>1, then
H^i(Y,R^qf_∗(ω_X(E))⊗ S^kF ⊗( det F)^2⊗ω_Y⊗ L)=0
for any i ≥ 1 and k ≥max{m- rank(F), 0}.
* If F is ample, L is nef and rank(F)>1, then
H^i(Y,R^qf_∗(ω_X(E))⊗ F ⊗ ( det F)^k ⊗ω_Y⊗ L)=0
for any i ≥ 1 and k ≥max{m+1- rank(F), 2}.
* Let rank(F)> 1. If F is ample and L is nef, or F is nef and L is ample,
then
H^i(Y,R^qf_∗(ω_X(E))⊗ S^mF^∗⊗ ( det F)^t ⊗ L)=0
for any i≥1 and t ≥ rank(F)+m-1.
* If all F_j are ample and L is nef, or, all F_j are nef and L is ample,
then
H^i(Y,R^qf_∗(ω_X(E))⊗ S^k_1F_1⊗⋯⊗ S^k_lF_l⊗ detF_1⊗⋯⊗ det F_l⊗ L)=0
for any i≥1 and k_1 ≥ 0,…, k_l≥0.
* If F is Griffiths positive and rank(F)≥ 2, then
H^i(Y,R^qf_∗ (ω_X(E))⊗ F^∗⊗ ( detF)^k)=0
for any i≥ 1 and k≥min{m-i+1, rank(F)}.
* If 0→ S→ F→ Q→ 0 is an exact sequence of Hermitian vector bundles and F>_k 0, then
H^i(Y,R^qf_∗ (ω_X(E))⊗ S⊗ ( detQ)^k)=0
for any i≥ 1 and k≥min{m-i+1, rank(S)}.
This generalizes Kodaira-Nakano vanishing theorem Kodaira1953,Nakano1955, Kollár's vanishing theorem <cit.>, Ohsawa's vanishing theorem <cit.>, Griffiths's vanishing theorem <cit.>, Liu-Sun-Yang vanishing theorems <cit.>, some cases of Le Potier's vanishing theorem <cit.>, Demailly's vanishing theorem <cit.> and Manivel's vanishing theorem <cit.>. Further related works include LN2004,LN2005,Iwai2021,Mat2022,EV1987,Hoffman1989,Mat2016,Fujino2018,Inayama2020,LY2015.
There are two main inputs in the proof of Theorem <ref>. The first involves the significant findings of Berndtsson <cit.> and Mourougane-Takayama <cit.> regarding the Nakano semipositivity of the higher direct image R^qf_∗(ω_X/Y(E)) over the dense Zariski open subset Y^o of Y, where f is a submersion over Y^o. The positivity of higher direct image sheaves is of great importance in recent developments in complex algebraic geometry. Interested readers may refer to Berdtsson2009,BP2008,BPW2022,Schumacher2012,Naumann2021,MT2008,Horing2010,Takayama2023,Viehweg2001 and the references therein. One of the main challenges in proving Theorem <ref> is the presence of singular fibers. As a result, canonical metrics, such as the Hodge metric defined by Mourougane-Takayama <cit.>, on the torsion-free sheaf R^qf_∗(ω_X/Y(E)) have singularities along Y\ Y^o. This difficulty is overcome by establishing the L^2-Dolbeault resolution of R^qf_∗(ω_X(E)), which is the second input of the present article. The resolution is achieved by using differential forms on Y^o that have locally finite L^2-norms at the boundary Y\ Y^o. This resolution enables us to investigate R^qf_∗(ω_X(E)) by analyzing the L^2-forms on the non-degenerate loci Y^o of f. This technique builds upon the ideas developed in SZ2022,SZ2023, which trace their roots to the proof of MacPherson's conjecture on the L^2-Dolbeault resolution of the Grauert-Riemenschneider sheaf <cit.> and the L^2-Dolbeault lemma established in the context of a variation of Hodge structure by Zucker <cit.>.
The MacPherson conjecture <cit.>, which was proved by Pardon-Stern <cit.> and generalized by Ruppenthal <cit.>, builds a bridge between the cohomology of a resolution of Y and the L^2 cohomology of Y_ reg. An appropriate L^2 Dolbeault lemma, combined with the L^2 vanishing theorems on complete Kähler manifolds by Ohsawa <cit.>, leads us to the proof of the Kollár type vanishing theorems for R^qf_∗ω_X(E) when E is Nakano positive.
Let us explain the technique of the paper in more detail.
Let Y^o be the dense Zariski open subset of Y such that f^o:X^o→ Y^o is a proper holomorphic submersion, where X^o denotes f^-1(Y^o) and f^o denotes f|_X^o. Then R^qf^o_∗(ω_X^o/Y^o(E))≃ R^qf_∗(ω_X/Y(E))|_Y^o is locally free <cit.> and admits a smooth Hodge metric h in the sense of Mourougane-Takayama <cit.> whose curvature is Nakano semipositive.
Let ds_Y^2 be a Hermitian metric on Y.
Let ^m,k_Y(R^qf^o_∗(ω_X^o/Y^o(E))) denote the sheaf of measurable R^qf^o_∗(ω_X^o/Y^o(E))-valued (m,k)-forms α such that α and its distributional derivative ∂̅α are locally square integrable near every point of Y with respect to ds_Y^2 and the Hodge metric h <cit.> on R^qf^o_∗(ω_X^o/Y^o(E)).
^m,∙_Y(R^qf^o_∗(ω_X^o/Y^o(E)))=^m,0_Y(R^qf^o_∗(ω_X^o/Y^o(E)))→⋯→^m,m_Y(R^qf^o_∗(ω_X^o/Y^o(E))),
the associated L^2-Dolbeault complex.
The main technical result of the present paper is the following L^2-Dolbeault lemma.
_Y^m,∙(R^qf^o_∗( ω_X^o/Y^o(E)))
is a fine resolution of R^qf_∗(ω_X(E)) for every q.
Theorem <ref> holds for an arbitrary compact complex space Y. Readers may see 3 (especially Theorem <ref>) for details.
Notations:
* Let X be a complex space. A Zariski closed subset (=closed analytic subset) Z of X is a closed subset, that is locally defined as the zeros of a set of holomorphic functions. A subset Y of X is called Zariski open if X\ Y is Zariski closed.
* Two metrics g_1 and g_2 are said to be quasi-isometric (written g_1∼ g_2) if there exists a constant C such that C^-1g_2≤ g_1 ≤ C g_2.
§ PRELIMINARY
§.§ Hermitian vector bundle
Let (M,ds_M^2) be a complex manifold of dimension n with a Hermitian metric ds_M^2.
Let (F,h_F) be a holomorphic vector bundle of rank r on M endowed with a Hermitian
metric h_F and let (F^∗, h_F^∗) be its dual Hermitian bundle.
Let A^p,q(M,F) be the space of F-valued smooth (p, q)-forms on M
and let A_0^p,q(M,F) be its subspace with compact support.
Let ∗:A^p,q(M,F)→ A^n-q,n-p(M,F) be the Hodge star operator relative to ds_M^2 and let ♯_F:A^p,q(M,F)→ A^q,p(M,F^∗) be the anti-isomorphism induced by h_F. Denote by ⟨-,-⟩ the pointwise inner product on A^p,q(M,F). These operators are related by
⟨α,β⟩ vol_ds_M^2=α∧∗♯_Fβ.
Let
(α,β):=∫_M⟨α,β⟩ vol_ds_M^2
and ‖α‖:=√((α,α)).
Let ∇=D'+∂̅ be the Chern connection relative to h_F. Let
∂̅^∗_h_F=-∗ D'∗ and D'^∗_h_F=-∗∂̅∗ be the formal adjoints of ∂̅ and D' respectively.
Denote by Θ_h_F=∇^2 the curvature of (F,h_F).
Locally we write
Θ_h_F=√(-1)∑_i,jω_ije_i⊗ e_j^∗
where ω_ij∈ A^1,1_M, (e_1,…,e_r) is an orthogonal local frame of F and (e^∗_1,…,e^∗_r) is the dual frame.
<cit.>
* A tensor u∈ T_M⊗ F is said to be of rank m if m is the smallest ≥ 0 integer such that u can be written as
u=∑_j=1^m ξ_j⊗ s_j,ξ_j∈ T_M,s_j∈ F.
* F is called m-positive if √(-1)Θ_h_F(F)(u,u)>0 for any nonzero u∈ T_M⊗ F of rank ≤ m. In this case, we write Θ_h_F(F)>_m0 (or F>_m 0).
* F is called Griffiths positive if m=1, and Nakano positive if m≥min{n,r}.
* F is called Nakano semipositive, if the bilinear form
θ(u_1,u_2):=∑_i,jω_i,j(u_1i,u_2j), u_l=∑_iu_li⊗ e_i∈ T_M⊗ F, l=1,2
is semi-positive definite.
§.§ L^2-Dolbeault cohomology and L^2-Dolbeault complex
Let L^p,q_(2)(M,F) be the space of measurable F-valued (p,q)-forms on M which are square integrable with respect to ds^2_M and h_F. Although L^p,q_(2)(M,F) depends on the choice of ds^2_M and h_F, we will omit them in the notation when there is no confusion. Let ∂̅_ max denote the maximal extension of the ∂̅ operator, defined on the domains
D^p,q_ max(M,F):=Dom^p,q(∂̅_ max)={ϕ∈ L_(2)^p,q(M,F)|∂̅ϕ∈ L_(2)^p,q+1(M,F)},
where ∂̅ is taken in the sense of distributions.
The L^2 cohomology H_(2)^p,q(M,F) is defined as the q-th cohomology of the complex
D^p,∙_ max(M,F):=D^p,0_ max(M,F)∂̅_ max→⋯∂̅_ max→D^p,n_ max(M,F).
Let Y be an irreducible complex space of dimension m, and let Y^o be a dense Zariski open subset of its regular locus Y_ reg. Let ds_Y^2 be a Hermitian metric on Y^o and let (E,h) be a Hermitian vector bundle on Y^o.
Given an open subset U of Y, the space L_Y^p,q(E)(U) is defined as the space of measurable E-valued (p,q)-forms α on U∩ Y^o such that for every point x∈ U, there exists a neighborhood V_x of x in Y such that
∫_V_x∩ Y^o|α|^2_ds_Y^2,h vol_ds_Y^2<∞.
For each 0≤ p,q≤ m, we define the L^2-Dolbeault sheaf _Y^p,q(E) on Y as follows:
_Y^p,q(E)(U):={ϕ∈ L_Y^p,q(E)(U)|∂̅_ maxϕ∈ L_Y^p,q+1(E)(U)}, ∀ open subset U of Y.
Now the L^2-Dolbeault complex of sheaves _Y^p,∙(E) is defined as:
_Y^p,0(E)→_Y^p,1(E)→⋯→_Y^p,m(E),
where ∂̅ is taken in the sense of distributions.
The L^2 cohomology and the L^2-Dolbeault sheaf are invariants of the quasi-isometry class of ds_M^2, h_F, ds_Y^2 and h.
A Hermitian metric ds_0^2 on Y^o is called a Hermitian metric on Y if, for every x∈ Y, there exists a neighborhood U of x in Y and a holomorphic closed immersion U⊂ V into a complex manifold such that ds_0^2|_U∩ Y^o∼ ds^2_V|_U∩ Y^o for some Hermitian metric ds^2_V on V. If the (1,1)-form associated with ds_0^2 is moreover d-closed on Y^o, we then call ds_0^2 a Kähler metric on Y.
The L^2-Dolbeault sheaf with respect to a Hermitian metric ds_0^2 on Y is always fine, as shown by the following lemma.
<cit.>
Suppose that for every point x∈Y∖ Y^o there exists a neighborhood U_x of x in Y and a Hermitian metric ds^2_0 on U_x such that ds^2_0|_Y^o∩ U_x≲ ds_Y^2|_Y^o∩ U_x. Then the L^2-Dolbeault sheaf ^p,q_Y(E) with respect to ds_Y^2 and h_E is a fine sheaf for every p and q.
§.§ Harmonic theory on higher direct images
In this section, we briefly review the harmonic theory on higher direct images presented in <cit.>. This theory is a generalization of Takegoshi's work <cit.> to complex spaces and will be used in proving Theorem <ref>.
Let f:X→ Y be a proper surjetive holomorphic morphism from a compact Kähler manifold to an irreducible analytic space with X=n+m and Y=m respectively. Let Y^o be the dense Zariski open subset of the loci Y_ reg of the regular points of Y such that f^o:=f|_X^o:X^o:=f^-1(Y^o)→ Y^o is a proper holomorphic submersion. Let (E,h_E) be a Nakano semipositive holomorphic vector bundle on X. As <cit.>, we fix a Kähler metric ds^2 on X^o
such that the following conditions hold.
* For every point x∈ X there is a neighborhood U of x, a function Φ∈ C^∞(U∩ X^o) such that |Φ|+|dΦ|_ds^2<∞ and ds^2|_U∩ X^o∼√(-1)∂∂̅Φ.
* ds^2 is locally complete on X, i.e., there exists for every point x∈ X a neighborhood U of x such that (U∩ X^o,ds^2) is complete.
* ds^2 is locally bounded from below by a Hermitian metric, i.e., there exists, for every point x∈ X, a neighborhood U of x and a Hermitian metric ds^2_0 on U such that ds^2_0|_U≲ ds^2|_U.
Let ω denote the Kähler form of ds^2.
Let U⊂ Y be a Stein open subset. Let (f^-1(U)) be the nonempty set of C^∞ plurisubharmonic functions φ:f^-1(U)→(-∞,c_∗) for some c_∗∈(-∞,∞] such that
{z∈ f^-1(U)|φ(z)<c} is precompact in f^-1(U) for every c<c_∗.
For every C^∞ plurisubharmonic function φ∈(f^-1(U)), set the subspace of E-valued L^2 harmonic (n+m,q)-forms with respect to ω and h_E:
^m+n,q(f^-1(U),E,φ):={α∈^m+n,q_X(E)(f^-1(U))|α=^∗_h_Eα=0, e(φ)^∗α=0},
where e(φ)^∗ denotes the adjoint operator of the left exterior product acting on A^m+n,q(f^-1(U),E) by a form φ∈ A^0,1(f^-1(U)) with respect to the inner product induced by h_E. We would like to point out that the equalities on the right-hand side of (<ref>) are only required to hold on X^o∩ f^-1(U), not on the whole of f^-1(U).
By the regularity theorem for elliptic operators of second order, every element of ^m+n,q(f^-1(U),E,φ) is C^∞ on X^o∩ f^-1(U).
Let φ,ψ∈(f^-1(U)) be arbitrary C^∞ plurisubharmonic functions. Then
^m+n,q(f^-1(U),E,φ)=^m+n,q(f^-1(U),E,ψ)
for every q≥ 0 (<cit.>). Therefore we
use the notation ^m+n,q(f^-1(U),E) instead of ^m+n,q(f^-1(U),E,ψ) in the sequel.
According to <cit.>, the restriction map
^m+n,q(f^-1(V),E)→^m+n,q(f^-1(U),E)
is well defined for any pair of Stein open subsets U⊂ V⊂ Y. Hence the data
U↦^m+n,q(f^-1(U),E), ∀ Stein open subset U of Y
forms a presheaf on Y. We denote by ^m+n,q_f(E) its
sheafification.
<cit.>
^m+n,q_f(E) is a sheaf of _Y-modules, and there exists a natural isomorphism
τ_f:R^qf_∗(ω_X(E))→^m+n,q_f(E)
of _Y-modules for every q≥0. Moreover
^m+n,q_f(E)(U)=^m+n,q(f^-1(U),E)
for any Stein open subset U⊂ Y.
For every 0≤ p≤ m+n, we define the L^2-Dolbeault sheaf ^p,0_X(E) with respect to ω and h_E as in 2.2, and define a subsheaf Ω^p_X,(2)(E) as
Ω^p_X,(2)(E)(U)={α∈^p,0_X(E)(U)|∂̅α=0}, ∀ open subset U of X.
The Hodge star operator ∗ relative to ω yields a splitting homomorphism
δ^q : R^q f_∗(ω_X(E))τ_f≃^m+n,q_f(E) f_∗ (Ω_X,(2)^m+n-q(E))
with ^q∘δ^q= Id for the homomorphism
^q:f_∗(Ω_X,(2)^m+n-q(E))→^m+n,q_f(E)≃ R^qf_∗(ω_X(E))
induced by the q-times left exterior product by ω. Moreover, the image of δ^q|_Y^o lies in f^o_∗(Ω^n-q_X^o(E)⊗ f^∗Ω^m_Y^o).
See <cit.> and the proof of Theorem 4.1 in <cit.>.
§ L^2-DOLBEAULT RESOLUTION OF THE HIGHER DIRECT IMAGE SHEAF
Let f:X→ Y be a proper holomorphic surjective morphism from a compact Kähler manifold to an irreducible complex space, where X=n+m and Y= m. Let (E,h_E) be a Nakano semipositive vector bundle on X, and let ds_Y^2 be a Hermitian metric on Y.
Let Y^o be the dense Zariski open subset of Y_ reg such that f^o:X^o→ Y^o is a proper holomorphic submersion, where X^o denotes f^-1(Y^o) and f^o denotes f|_X^o.
According to <cit.>, R^qf^o_∗(ω_X^o/Y^o(E))≃ R^qf_∗(ω_X/Y(E))|_Y^o is locally free. Here, ds^2 is a Kähler metric on X^o as described in 2.3, with ω being its associated Kähler form.
§.§ Mourougane-Takayama's Hodge metric on R^qf^o_∗(ω_X^o/Y^o(E))
This section provides a review of Mourougane-Takayama's construction of a Hodge metric on R^qf^o_∗(ω_X^o/Y^o(E)) with Nakano semipositive curvature <cit.>. For more details, see MT2007,MT2008,MT2009.
Define the sheaf _f^n+m,q(E,h) associated with the proper submersion f:X→ Y as described in 2.3. It follows from Theorem <ref> that there exists a natural isomorphism
τ: R^qf_∗(ω_X(E))≃_f^n+m,q(E).
Denote
τ^o:=τ|_Y^o:R^qf^o_∗(ω_X^o(E))≃_f^n+m,q(E)|_Y^o=:_f^o^n+m,q(E).
Let y∈ Y^o and let W≃{t=(t_1,…,t_m)∈^m|t<1} be holomorphic coordinates centered at y.
Let X^o_W denote (f^o)^-1(W) and let dt denote dt_1∧⋯∧ dt_m.
Take a trivialization _WΩ_W^m given by 1↦ dt.
This trivialization induces an isomorphism of sheaves Ω_X^o_W/W^n≃Ω_X^o_W/W^n⊗ (f^o)^∗Ω_W^m≃ω_X^o_W via u↦ u∧ dt. Consequently, the isomorphism extends to higher direct image sheaves as follows:
α^q_W: R^qf^o_∗ (ω_X^o/Y^o(E))|_WR^qf^o_∗(ω_X^o(E))|_W.
We also have an injection Ω_X^o_W/W^n-q→Ω_X^o_W^n+m-q by σ↦σ∧ dt.
This injection induces the injection
β_W:f^o_∗(Ω_X^o/Y^o^n-q(E))|_W→ f^o_∗(Ω_X^o^n+m-q(E))|_W.
Notice that for every u∈^n+m,q(X^o_W,E), there exists σ_u∈ H^0(X^o_W,Ω_X^o_W/W^n-q(E)) such that ∗ u=σ_u∧ dt (see Proposition <ref> or <cit.>). Therefore, the map u ↦σ_u is well-defined and injective, and thus yields a homomorphism
δ^q_W:_f^o^n+m,q(E)|_W→ f^o_∗(Ω_X^o/Y^o^n-q(E))|_W,
where
∗=β_W∘δ^q_W:_f^o^n+m,q(E)|_W→ f^o_∗(Ω_X^o/Y^o^n-q(E))|_W→ f^o_∗ (Ω_X^o^n+m-q(E))|_W.
Then the composition map
S_W^q:R^qf^o_∗( ω_X^o/Y^o(E))|_W→ R^qf^o_∗(ω_X^o(E))|_W→_f^o^n+m,q(E)|_W→ f^o_∗ (Ω_X^o/Y^o^n-q(E))|_W
is injective.
For every y∈ W and every pair of vectors u_y, v_y∈ R^qf^o_∗(ω_X^o/Y^o(E))_y, Mourougane-Takayama defined
h(u_y, v_y)= c_n-q/q!∫_f^-1(y)(ω^q∧ S^q_W(u_y) ∧_h_ES^q_W(v_y))|_f^-1(y), c_n-q=√(-1)^(n-q)^2
.
The induced metric h is independent of the choice of the coordinate W and thus defines a global Hermitian metric on the bundle R^qf^o_∗(ω_X^o/Y^o(E)) (<cit.>). This Hermitian metric h is then called the Hodge metric on R^qf^o_∗(ω_X^o/Y^o(E)).
(<cit.>)
√(-1)Θ_h(R^qf^o_∗ (ω_X^o/Y^o(E))) is Nakano semipositive.
Now we define _Y^m,∙(R^qf^o_∗(ω_X^o/Y^o(E))) as the associated L^2-Dolbeault complex
with respect to ds_Y^2 and h. The main result concerning this complex is the following.
_Y^m,∙(R^qf^o_∗( ω_X^o/Y^o(E)))
is a fine resolution of R^qf_∗(ω_X(E)) for every q.
§.§ Exactness of _Y^m,∙(R^qf^o_∗(ω_X^o/Y^o(E)))
Now let us introduce the following L^2 estimate, which is essentially due to Hörmander <cit.> and Andreotti-Vesentini <cit.>. Here we use the version suitable for our purpose as stated in Demailly1982,Demailly2012,Ohsawa1980,Ohsawa1984.
(<cit.>, <cit.> and <cit.>)
Let M be a complex manifold of dimension n that admits a complete Kähler metric. Let (F,h_F) be a Hermitian holomorphic vector bundle on M such that
√(-1)Θ_h_F(F)≥ω_0⊗ Id_F
for some (not necessarily complete) Kähler form ω_0 on M. Then for every q>0 and every α∈ L^n,q_(2)(M,F;ω_0,h_F) such that α=0, there exists β∈ L^n,q-1_(2)(M,F;ω_0,h_F) such that β=α and β^2_ω_0,h_F≤ q^-1α^2_ω_0,h_F.
The above theorem works effectively locally on complex analytic singularities due to
the following lemma by Grauert <cit.> (see also <cit.>).
Let x be a point in a complex analytic space Y and let Y^o be a dense
Zariski open subset of Y_ reg. Then there exists a neighborhood U of x in Y and a complete Kähler metric on U ∩ Y^o.
The main purpose of this subsection is the following theorem.
The complex of sheaves ^m,∙_Y(R^qf^o_∗(ω_X^o/Y^o(E)))
is exact at ^m,q_Y(R^qf^o_∗(ω_X^o/Y^o(E))) for every q>0.
Since the problem is local, we may assume that Y is a germ of complex analytic space and ds_Y^2 is quasi-isometric to some Kähler form √(-1)∂∂̅Φ, where Φ is some bounded C^∞ strictly plurisubharmonic function on Y. Thus C√(-1)∂∂̅Φ≥ω_Y for some constant C>0, where ω_Y denotes the Kähler form associated with ds_Y^2.
Let h'=e^-CΦh be a modified Hermitian metric on R^qf^o_∗(ω_X^o/Y^o(E)). Theorem <ref> yields that
√(-1)Θ_h'(R^qf^o_∗(ω_X^o/Y^o(E))) = C√(-1)∂∂̅Φ⊗ Id+√(-1)Θ_h(R^qf^o_∗(ω_X^o/Y^o(E)))≥ω_Y⊗ Id.
Since Y is compact, we may assume that Y^o admits a complete Kähler metric by using Lemma <ref>. As Φ is bounded, we have h' ∼ h. By Theorem <ref>, we obtain that
H^m,q_(2)(Y^o, R^qf^o_∗(ω_X^o/Y^o(E));ω_Y,h)=H^m,q_(2)(Y^o, R^qf^o_∗(ω_X^o/Y^o(E));ω_Y,h')=0, ∀ q>0,
which proves the theorem.
§.§ Proof of Theorem <ref>
Recall that ds_Y^2 is a Hermitian metric on Y. It follows from Lemma <ref> that all _Y^m,i(R^qf^o_∗(ω_X^o/Y^o(E))) are fine sheaves.
To prove Theorem <ref> it remains to show the following theorem.
There is an isomorphism between R^qf_∗(ω_X(E)) and
ker(:_Y^m,0(R^qf^o_∗(ω_X^o/Y^o(E)))→_Y^m,1(R^qf^o_∗(ω_X^o/Y^o(E)))).
The main idea is to regard both sheaves as subsheaves of j_∗(R^qf^o_∗(ω_X^o(E))) where j:Y^o→ Y is the immersion, and show that the sections of both sheaves share the same boundary conditions. For the sake of convenience, we let = ker(:_Y^m,0(R^qf^o_∗(ω_X^o/Y^o(E)))→_Y^m,1(R^qf^o_∗(ω_X^o/Y^o(E)))).
§.§.§ Boundary condition of R^qf_∗(ω_X(E))
Since R^qf_∗(ω_X(E)) is torsion free (<cit.>), there exists a natural embedding R^qf_∗(ω_X(E))⊂ j_∗(R^qf^o_∗(ω_X^o(E))).
Let _f^m+n,q(E) be the sheaf defined in 2.3 with respect to ds^2 and h_E.
Then Theorem <ref> yields a natural isomorphism
τ_f: R^qf_∗(ω_X(E))≃_f^m+n,q(E).
Therefore for any Stein open subset U of Y and any section s∈ R^qf^o_∗(ω_X^o(E))(U∩ Y^o), s can be extended to a section of R^qf_∗(ω_X(E))(U) if and only if s satisfies the following boundary condition:
(I): τ_f(s) is an E-valued harmonic (m+n,q)-form on f^-1(U)∩ X^o which is locally L^2 at every point of f^-1(U) (with respect to ds^2 and h_E) and satisfies that e(φ)^∗τ_f(s)=0 on f^-1(U)∩ X^o for any φ∈(f^-1(U)).
§.§.§ Boundary condition of
By the classical Dolbeault Lemma one has
R^qf^o_∗ (ω_X^o(E))=|_Y^o. As a result, there is a natural embedding ⊂ j_∗(R^qf^o_∗ (ω_X^o(E))). For any Stein open subset U of Y and any section s∈ R^qf^o_∗(ω_X^o(E))(U∩ Y^o), s can be extended to a section of (U) if and only if s satisfies the following boundary condition:
(II): s is locally L^2 (with respect to ds_Y^2 and the Hodge metric h) at every point of U.
§.§.§ Comparison of the boundary conditions
Let U be a Stein open subset of Y and let s be a section of R^qf^o_∗(ω_X^o(E))(U∩ Y^o). We are going to show that s satisfies Condition (I) if and only if it satisfies Condition (II). First, we need the following lemma.
For every open subset U of Y and every s∈ R^qf^o_∗(ω_X^o(E))(U∩ Y^o), it holds that
∫_U∩ Y^o|s|_ds_Y^2,h^2 vol_ds_Y^2=c_n-q/c_n+m-q∫_f^-1(U)∩ X^o|τ_f(s)|_ds^2,h_E^2 vol_ds^2, c_d=√(-1)^d^2.
By using a partition of unity on U∩ Y^o we may assume that W=U∩ Y^o is small enough that satisfies the following conditions.
* There is a holomorphic global coordinate t=(t_1,…,t_m) on W such that (W;t) is a unit ball in ^m.
* There is a finite set of holomorphic local coordinates {(U^α; z^α, t)}_α∈ I of f^-1(W) such that f^-1(W)⊂∪_α∈ IU^α and f|_U^α is defined by (z^α,t)↦ t. Namely z^α=(z^α_1,…,z^α_n) are holomorphic local coordinates on the fiber f^-1({t=0}).
* There is a partition of unity 1=∑_α∈ Iρ_α on f^-1(W) such that supp(ρ_α)⊂ U^α for every α∈ I.
Let s=dt⊗ u where dt:=dt_1∧⋯∧ dt_m is a local frame of ω_W and u is a section of R^qf^o_∗(ω_X^o/Y^o(E))(W).
Some computations yield that
∫_W|s|_ds_Y^2,h^2 vol_ds_Y^2
=∫_W |u|_h^2dt∧ dt̅
=∫_y∈ Wdt∧ dt̅∫_f^-1{y}c_n-q/q!(ω^q∧ S^q_W(u_y)∧_hS^q_W(u_y))|_f^-1{y}
=∑_α∈ Ic_n-q/q!∫_U^αρ_αω^q∧ (∗τ_f(s))∧_h (∗τ_f(s)) (Fubini theorem)
= ∑_α∈ Ic_n-q/c_n+m-q∫_U^αρ_ατ_f(s)∧_h∗τ_f(s)
=c_n-q/c_n+m-q∫_f^-1(W)∩ X^o|τ_f(s)|_ds^2,h_E^2 vol_ds^2.
Notice that U∩ Y^o may not be Stein. It follows from Theorem <ref> that τ_f(s) is an E-valued harmonic (m+n,q)-form on f^-1(U)∩ X^o such that there exists a covering U∩ Y^o=∪ V_i of Stein open subsets V_i and φ_i∈(f^-1(V_i)) such that e(φ_i)^∗τ_f(s|_V_i)=0 for every i.
Now we assume that s satisfies Condition (I).
It follows from Lemma <ref> that s is locally L^2 at every point of U (with respect to ds^2 and h_E) if and only if τ_f(s) is locally L^2 (with respect to ds_Y^2 and h) at every point of f^-1(U). This shows that s satisfies Condition (II).
To prove the converse, we assume that s satisfies Condition (II).
Lemma <ref> shows that τ_f(s) is a harmonic form which is locally L^2 at every point of f^-1(U). Notice that ∗τ_f(s)∈Γ(X^o∩ f^-1(U),Ω^n-q_X^o∩ f^-1(U)(E)⊗ f^∗Ω^m_U∩ Y^o) by Proposition <ref>. Consequently, f^∗(φ)∧∗τ_f(s)=0,∀φ∈(f^-1(U)) for the reason of bi-degree. Thus, e(φ)^∗τ_f(s)=0, which indicates that s satisfies Condition (I).
The proof of Theorem <ref> is now complete.
§ APPLICATIONS TO KOLLÁR TYPE VANISHING THEOREMS
To establish the main vanishing theorem, it is necessary to introduce the following estimate. <cit.>
Let (M, ω̃) be a complete Kähler manifold of dimension n, ω another Kähler metric,
possibly non-complete, and E an m-semi-positive vector bundle of rank r on M. Let g∈ L_(2)^n,q(M,E) be such that D”g = 0 and
∫_M⟨ A_q^-1g,g⟩ dV<+∞
with respect to ω, where A_q=[iΘ(E),Λ] in bidegree (n, q) and q ≥1,
m ≥min{n-q + 1, r}. Then there exists f∈ L_(2)^n,q-1(M,E) such that D”f = g and
f^2≤∫_M⟨ A_q^-1g,g⟩ dV.
Let f:X→ Y be a proper holomorphic surjective morphism from a compact Kähler manifold to a compact Kähler manifold. Let E be a Nakano semipositive vector bundle on X and let (F,h_F) be a k-positive Hermitian vector bundle on Y of rank r. Then
H^i(Y,R^qf_∗(ω_X(E))⊗ F)=0, ∀ i≥ 1, k≥min{dim Y-i+1,r}.
First, we assert that
R^qf_∗ω_X(E)⊗ F→^dim Y,∙_Y(R^qf^o_∗(ω_X^o/Y^o(E⊗ f^∗ F)))
is a fine resolution for each q.
Since the problem is local, we consider an arbitrary point y∈ Y and let V be an open neighborhood around y in Y so that F|_V≃_V^⊕ r and the metric h_F is quasi-isometric to the trivial metric.
Consequently, E ⊗ f^∗ F is Nakano semipositive on V. By applying Theorem <ref>, we confirm the validity of the claim.
Thus there exists an isomorphism
H^i(Y,R^qf_∗(ω_X(E))⊗ F)≃ H^dim Y,i_(2)(Y^o,R^qf_∗(ω_X/Y(E⊗ f^∗(F)))|_Y^o;ds_Y^2,h⊗ h_F), ∀ i,
where h is the Hodge metric on R^qf^o_∗(ω_X^o/Y^o(E)) and
R^qf_∗(ω_X(E⊗ f^∗(F)))|_Y^o≃ R^q f^o_∗(ω_X^o/Y^o(E))⊗ F|_Y^o.
Since F is k-positive, the Hermitian operator [√(-1)Θ_h_F(F),Λ] is positive on
Λ^dim Y,iT^∗_Y⊗ F for each i≥ 1, k≥min{dim Y-i+1,r}
( <cit.>).
Since Y is compact, by applying Theorem <ref> we can conclude that [√(-1)Θ_h⊗ h_F(R^q f^o_∗(ω_X^o/Y^o(E))⊗ F),Λ] has a positive lower bound on
(Λ^dim Y,iT^∗_Y⊗ R^q f^o_∗(ω_X^o/Y^o(E))⊗ F) for each i≥ 1, k≥min{dim Y-i+1,r}.
Additionally, the compactness of Y implies that there is a globally defined complete Kähler metric on Y^o, as shown in <cit.>. By applying Lemma <ref>, we obtain the result that
H^dim Y,i_(2)(Y^o,R^qf_∗(ω_X/Y(E⊗ f^∗(F)))|_Y^o)=0, i>0.
Therefore, the theorem is proven.
Assume that Y is a non-singular projective m-fold and let F,F_1,…,F_l be vector bundles over Y. According to <cit.> one has the following results:
* If F is ample, L is nef and rank(F)>1, then S^kF ⊗( det F)^2⊗ω_Y⊗ L is Nakano positive for k ≥max{m- rank(F), 0} and ω_Y⊗ F ⊗ ( det F)^k ⊗ L is Nakano positive for k ≥max{m+1- rank(F), 2}.
* If F is ample and L is nef, or F is nef and L is ample, then S^mF^∗⊗ ( det F)^t ⊗ L is Nakano positive for t ≥ rank(F)+m-1.
* If all F_j are ample and L is nef, or, all F_j are nef and L is ample, then S^k_1F_1⊗⋯⊗ S^k_lF_l⊗ detF_1⊗⋯⊗ det F_l⊗ L is Nakano positive for any k_1 ≥ 0,…, k_l≥0.
According to <cit.> one also obtain the following results:
* If F is Griffiths positive of rank r≥ 2, then F^∗⊗ ( detF)^m>_m 0 for any integer m≥ 1.
* If 0→ S→ F→ Q→ 0 is an exact sequence of Hermitian vector bundles and F>_m 0, then S⊗ ( detQ)^m>_m 0.
Applying Theorem <ref>, we obtain Corollary <ref>.
plain
|
http://arxiv.org/abs/2307.04451v1 | 20230710100450 | Globally linked pairs of vertices in generic frameworks | [
"Tibor Jordán",
"Soma Villányi"
] | math.CO | [
"math.CO",
"math.MG"
] |
Globally linked pairs of vertices in generic frameworks
Tibor Jordán, Soma Villányi
A d-dimensional framework is a pair (G,p), where G=(V,E)
is a graph and p is a map from V to ℝ^d.
The length of an edge xy∈ E in (G,p) is the distance between p(x) and p(y).
A vertex pair {u,v} of G is said to be globally linked in (G,p) if the
distance between p(u) and p(v) is equal to the distance between q(u) and q(v)
for every d-dimensional framework (G,q) in which the corresponding edge lengths are the same as in (G,p).
We call (G,p) globally rigid in ^d when each vertex pair of G is globally linked in (G,p).
A pair {u,v} of vertices of G is said to be weakly globally linked in G in ^d if
there exists a generic framework (G,p) in which {u,v} is globally linked.
In this paper we first give a sufficient condition for the weak global linkedness of a vertex pair of a
(d+1)-connected graph G in ^d and then show that for d=2 it is also necessary.
We use this result to obtain a complete characterization of weakly globally linked pairs
in graphs in ^2, which gives rise to an algorithm for testing weak global linkedness in the
plane in O(|V|^2)
time. Our methods lead to
a new short proof for the characterization of globally rigid graphs in ^2, and further
results on weakly globally linked pairs and globally rigid graphs in the plane and in higher
dimensions.
§ INTRODUCTION
A d-dimensional framework
is a pair (G,p), where G=(V,E)
is a graph and p is a map from V to ℝ^d.
We also say that (G,p) is a realization of G in
ℝ^d. The length of an edge uv∈ E in
(G,p) is ||p(u)-p(v)||, where ||.|| denotes the Euclidean norm in ℝ^d.
Two frameworks
(G,p) and (G,q) are equivalent if corresponding edge lengths are the same, that is,
||p(u)-p(v)||=||q(u)-q(v)|| holds
for all pairs u,v with uv∈ E.
The frameworks (G,p) and (G,q) are congruent if
||p(u)-p(v)||=||q(u)-q(v)|| holds
for all pairs u,v
with u,v∈ V.
A d-dimensional framework (G,p) is called globally rigid if every equivalent
d-dimensional framework
(G,q) is congruent to (G,p). This is the same as saying that the edge lengths of (G,p)
uniquely determine all the pairwise distances.
It is NP-hard to test whether a given framework in ^d is globally
rigid, even for d=1 <cit.>.
This fundamental property of frameworks becomes more tractable if we consider generic
frameworks. A framework (G,p) (and the set {p(v):v∈ V(G)}) is said to be generic if the set of
its d|V(G)| vertex coordinates is algebraically independent over ℚ.
It is known that in a given dimension
the global rigidity of a generic framework (G,p) depends only on G: either every generic realization of G in ^d is globally rigid, or none of them are
<cit.>.
Thus, we say that a graph G is globally rigid in ^d if every (or equivalently, if some) d-dimensional
generic realization of G is globally rigid in ^d.
For d=1,2, combinatorial characterizations and corresponding
deterministic polynomial time algorithms are known for (testing)
global rigidity in ^d.
The case d=1 is a folklore result:
it is not hard to see that a graph G on at least three vertices is globally rigid in ^1 if and only
if it is 2-connected. The necessary and sufficient conditions for d=2 are stated as Theorem <ref>
in the next section.
The existence of such a characterization
(or algorithm) for d≥ 3 is a major open question.
For more details on globally rigid graphs and frameworks see e.g. <cit.>.
In this paper we consider a refined, local version, in which we are interested in whether the edge lengths
of a framework uniquely determine the distance between a given pair of vertices, rather than all pairs of vertices.
We shall need the following notions. Following <cit.>,
we say that
a pair of vertices {u,v} in a d-dimensional framework (G,p) is globally linked in (G,p) if for every equivalent d-dimensional framework (G,q) we have
||p(u)-p(v)||=||q(u)-q(v)||. Global linkedness in ^d is not a generic property (for d≥ 2):
a vertex pair may be globally linked in some generic d-dimensional realization of G without being globally linked in all generic realizations.
See Figure <ref>.
We say that
a pair {u,v} is globally linked in G in ^d if it is globally linked in all generic d-dimensional frameworks (G,p).
We call a pair {u,v} weakly globally linked in G in ^d if there exists a generic d-dimensional framework (G,p)
in which {u,v} is globally linked. If {u,v} is not weakly globally linked in G, then it is called globally loose in G.
It is immediate from the definitions that G is globally rigid in ^d if and only if each vertex pair is globally linked in G in ^d. As we shall see, the global rigidity of G
already follows from the (seemingly weaker) condition that each vertex pair is weakly globally linked in G (see Lemma <ref>(c)).
The case d=1 is exceptional and well-understood. Global linkedness in ^1 is a generic property: a pair {u,v} is globally linked
in G in ^1 if and only if there is a cycle in G that contains both u and v.
Otherwise {u,v} is globally loose.
For d≥ 2 no combinatorial (or efficiently testable) characterization has previously been found for globally linked or weakly globally linked pairs in graphs in ^d.
These problems belong to the few major problems in combinatorial rigidity which have remained unsolved for d=2.
The main result of this paper is a solution for the weakly globally linked pairs problem in two dimensions.
We shall first give a sufficient condition for the weak global linkedness of a vertex pair of a
(d+1)-connected graph G in ^d (Theorem <ref>)
and then show that in a sense the condition is also necessary in the case of
3-connected graphs in ^2 (Theorem <ref>).
The general case of the two-dimensional problem
is reduced to the 3-connected case by a sequence of lemmas that describe how global linkedness is affected by cutting a graph along a separating pair. These results lead to
the main result (Theorem <ref>), which gives a characterization of weakly globally linked
pairs of vertices in ^2 and gives rise to an O(|V|^2) algorithm for the corresponding decision
problem.
Our methods and results lead to
a new short proof for the sufficiency part of Theorem <ref>.
We also obtain a number of other structural results on weakly globally linked pairs and globally rigid graphs in ^2
and in higher dimensions.
Even though
most of the known results (and conjectures) on global linkedness are concerned with
globally linked pairs of graphs in ^2, their characterization remains open.
Globally linked pairs in two dimensions
have been characterized in minimally rigid graphs <cit.>,
braced maximal outerplanar graphs <cit.>,
and in
R_2-connected graphs <cit.>.
In the latter two cases global linkedness turns out to be a generic property.
Hence these two results
give rise to the characterization of weakly globally linked pairs, too, in the corresponding families
of graphs.
A conjectured characterization of globally linked pairs in ^2 can be found in <cit.>.
A few partial results in higher dimensions are also available, see <cit.>.
The rest of the paper is organized as follows.
In Section <ref>
we introduce the necessary notions concerning rigid graphs and frameworks.
In Section <ref> we prove some simple but fundamental lemmas on weakly
globally linked pairs in ^d.
Section <ref> contains most of the d-dimensional results (two key geometric lemmas
and
a sufficient condition for weak global linkedness), and the new proof for Theorem <ref>.
In Section <ref> we state and prove our main result, a complete characterization of the
weakly globally linked pairs in ^2. In Section <ref> we
discuss the algorithmic aspects and collect a few
concluding remarks and
questions.
§ PRELIMINARIES
In this section we introduce the notions and results from the
theory of (globally) rigid frameworks and graphs that we shall use.
§.§ Rigid graphs and the rigidity matroid
In the structural results on global rigidity and global linkedness the notions of
rigid frameworks, rigid graphs and the rigidity matroid play a key role.
The d-dimensional framework (G,p) is rigid if there exists some ε
>0 such that, if (G,q) is equivalent to (G,p) and
||p(v)-q(v)||< ε for all v∈ V, then (G,q) is
congruent to (G,p).
This is equivalent to requiring that every continuous motion of the vertices of (G,p) in ^d that
preserves the edge lengths takes the framework to a congruent realization of G.
It is known that in a given dimension
the rigidity of a generic framework (G,p) depends only on G: either every generic realization of G in ^d is rigid, or none of them are <cit.>.
Thus, we say that a graph G is rigid in ^d if every (or equivalently, if some) d-dimensional
generic realization of G is rigid in ^d.
For d=1,2, combinatorial characterizations and corresponding
deterministic polynomial time algorithms are known for (testing) rigidity in ^d, see e.g. <cit.>.
The existence of such a characterization
(or algorithm) for d≥ 3 is a major open question.
The following elementary result is well-known.
For the proof of the two-dimensional case see
<cit.>.
Suppose that (G,p) is a rigid generic framework. Then the number of distinct congruence classes of frameworks which are equivalent to (G,p) is finite.
The rigidity matroid of a graph G is a matroid defined on the edge set
of G which reflects the rigidity properties of all generic realizations of
G.
For a general introduction to matroid theory we refer the reader to <cit.>.
Let (G,p) be a realization of a graph G=(V,E) in ^d.
The rigidity matrix of the framework (G,p)
is the matrix R(G,p) of size
|E|× d|V|, where, for each edge uv∈ E, in the row
corresponding to uv,
the entries in the d columns corresponding to vertices u and v contain
the d coordinates of
(p(u)-p(v)) and (p(v)-p(u)), respectively,
and the remaining entries
are zeros.
The rigidity matrix of (G,p) defines
the rigidity matroid of (G,p) on the ground set E
by linear independence of the rows.
It is known that any pair of generic frameworks
(G,p) and (G,q) have the same rigidity matroid.
We call this the d-dimensional rigidity matroid
R_d(G)=(E,r_d) of the graph G.
We denote the rank of R_d(G) by r_d(G).
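To make these notions concrete, the rigidity matrix and the generic rank r_d(G) can be computed numerically by evaluating R(G,p) at a random configuration, which is generic with probability one (a Python/numpy sketch; the helper names are ours, and the floating-point rank stands in for an exact computation):

import numpy as np

def rigidity_matrix(vertices, edges, p, d):
    # R(G,p): one row per edge uv, containing p(u)-p(v) in the d columns of u,
    # p(v)-p(u) in the d columns of v, and zeros elsewhere.
    idx = {v: i for i, v in enumerate(vertices)}
    R = np.zeros((len(edges), d * len(vertices)))
    for row, (u, v) in enumerate(edges):
        diff = np.asarray(p[u]) - np.asarray(p[v])
        R[row, d * idx[u]:d * idx[u] + d] = diff
        R[row, d * idx[v]:d * idx[v] + d] = -diff
    return R

def generic_rank(vertices, edges, d, seed=0):
    # r_d(G): rank of R(G,p) at a random (almost surely generic) configuration.
    rng = np.random.default_rng(seed)
    p = {v: rng.random(d) for v in vertices}
    return np.linalg.matrix_rank(rigidity_matrix(vertices, edges, p, d))

Combined with Gluck's theorem below, a graph G on at least d+1 vertices is rigid in ^d precisely when this rank equals d|V|-\binom{d+1}{2}.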
A graph G=(V,E) is R_d-independent if r_d(G)=|E| and it is an R_d-circuit if it is not R_d-independent but every proper
subgraph G' of G is R_d-independent. We note that in the literature such graphs are sometimes called M-independent in ^d and M-circuits in ^d, respectively.
An edge e of G is an R_d-bridge in G
if r_d(G-e)=r_d(G)-1 holds. Equivalently, e is an R_d-bridge in G if it is not contained in any subgraph of G that is an R_d-circuit.
The following characterization of rigid graphs is due to Gluck.
<cit.>
Let G=(V,E) be a graph with |V|≥ d+1. Then G is rigid in ^d
if and only if r_d(G)=d|V|-\binom{d+1}{2}.
A graph is minimally rigid in ^d if it is rigid in ^d but
G-e is not rigid in ^d for
every edge e of G.
By Theorem <ref>, minimally rigid graphs in ^d on at least d+1 vertices have exactly d|V| - \binom{d+1}{2} edges.
Let G=(V,E) be a graph and {u,v} be a pair of vertices of G.
An induced subgraph G[X] (and the set X), for some X⊆ V, is said to be
(u,v)-rigid in ^d (or simply (u,v)-rigid, if d is clear from the context),
if G[X] is rigid in ^d and u,v∈ X.
We say that
a (u,v)-rigid subgraph
G[X] is
vertex-minimally (u,v)-rigid, if G[X'] is not (u,v)-rigid
for all proper subsets X'⊂ X.
The pair {u,v} is called linked in G in ^d if
r_d(G+uv)=r_d(G) holds. It is known that a pair
{u,v} is linked in G in ^2 if and only if there exists a (u,v)-rigid subgraph of G.
A graph G with at least three edges is called
redundantly rigid in ^d if G-e is rigid
in ^d
for
all e∈ E(G).
Let M be a matroid on ground set E.
We can define a relation on the pairs of elements of E by
saying that e,f∈ E are
equivalent if e=f or there is a circuit C of M
with {e,f}⊆ C.
This defines an equivalence relation. The equivalence classes are
the connected components of M.
The matroid is connected if
it has only one connected component.
A graph G=(V,E) is R_d-connected if R_d(G) is connected.
We shall use the well-known fact that
if v is a vertex of degree at most d in G, then every edge incident with v is an
R_d-bridge in G. Hence the addition of a new vertex of degree d to a rigid graph G
in ^d preserves rigidity.
For more details on the 2-dimensional rigidity matroid, see <cit.>.
§.§ Globally rigid graphs
The following necessary conditions for global rigidity are due to Hendrickson.
<cit.>
Let G be a globally rigid graph in ^d on at least d+2 vertices. Then G is (d+1)-connected and
redundantly rigid in ^d.
For d=1,2
the conditions of Theorem <ref> together are sufficient to imply global rigidity.
It is not the case for d≥ 3.
The characterization of globally rigid graphs in ^2 is as follows.
<cit.>
Let G be a graph on at least four vertices.
Then G is globally rigid in ^2 if and only if G is 3-connected and redundantly rigid in ^2.
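The preceding characterization translates into a direct (though not optimized) test for generic global rigidity in the plane; a sketch reusing the rank helper above, with networkx for connectivity (function names are ours):

import networkx as nx

def is_rigid_2d(G, seed=0):
    # Gluck's condition r_2(G) = 2|V| - 3 (for |V| >= 3).
    n = G.number_of_nodes()
    if n <= 2:
        return n <= 1 or G.number_of_edges() == 1
    return generic_rank(list(G.nodes), list(G.edges), d=2, seed=seed) == 2 * n - 3

def is_globally_rigid_2d(G):
    # For |V| >= 4: globally rigid in R^2 iff 3-connected and redundantly rigid.
    n = G.number_of_nodes()
    if n <= 3:
        return G.number_of_edges() == n * (n - 1) // 2   # small complete graphs
    if nx.node_connectivity(G) < 3:
        return False
    for e in G.edges:
        H = G.copy()
        H.remove_edge(*e)
        if not is_rigid_2d(H):
            return False
    return True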
An equivalent characterization of global rigidity, in terms of the rigidity matroid of G, follows
from the next lemma.
<cit.>
Let G be a graph with at least two edges.
If G is R_2-connected, then G is redundantly rigid in ^2.
Furthermore, if G is 3-connected and redundantly rigid in ^2, then G is
R_2-connected.
We shall also use the following lemma.
<cit.>
Let G be a rigid, but not redundantly rigid graph in ^2, and suppose that all R_2-bridges of G
are edges of the same triangle in G. Then G is not 3-connected.
§ PROPERTIES OF WEAKLY GLOBALLY LINKED PAIRS IN ^D
We first collect some basic properties that hold in ^d for all d≥ 1.
The following lemma was stated for d=2 in <cit.> but the proof works
for all d≥ 1.
An edge e of a globally rigid graph H is critical if H-e is not
globally rigid.
<cit.>
Let G=(V,E) be a graph and u,v∈ V. Suppose that uv∉ E, and that
G has a globally rigid supergraph in ^d in which uv is a critical edge.
Then {u,v} is globally loose in G in ^d.
We shall frequently use the next key lemma. For a graph G=(V,E) and integer d≥ 1
let
J_d(G)={uv : u,v∈ V, uv∉ E, {u,v} is weakly globally linked in G in ^d}.
Let G=(V,E) be a graph and let F be a set of edges on vertex set V.
Then the following hold.
(a) If G+J_d(G)+F is globally rigid in ^d, then G+F is globally rigid in ^d.
(b) If G+uv is globally rigid in ^d for some uv∈ J_d(G), then G is globally rigid in ^d.
(c) G is globally rigid in ^d if and only if
all pairs of vertices in G are weakly globally linked in ^d.
Let us fix d and put J=J_d(G).
(a) Suppose, for a contradiction, that G+J+F is globally rigid and G+F is not. Then there is a (possibly empty) subset J'⊂ J and an edge uv∈ J-J'
for which G+J'+F is not globally rigid, but G̅=G+J'+F+uv is globally rigid. Then uv is a critical
edge
in G̅, and hence {u,v} is globally loose in G by Lemma <ref>,
a contradiction.
(b) If G+uv is globally rigid for some uv∈ J then G+J is globally rigid. Thus putting F=∅ and applying (a)
gives that G is globally rigid.
(c) Necessity is obvious. If all pairs of vertices in G are weakly globally linked, then G+J is a complete graph, which is globally rigid.
Again, putting F=∅ and applying (a)
gives that G is globally rigid.
It is well-known that if {u,v} is not linked in G in ^d, then every generic d-dimensional realization (G,p) has a flex (i.e. a continuous motion of the vertices that preserves the edge lengths)
to another framework (G,q), for which ||p(u)-p(v)||≠ ||q(u)-q(v)||.
This implies the next lemma.
Let G=(V,E) be a graph and let {u,v} be a non-adjacent vertex pair.
If {u,v} is not linked in G in ^d then {u,v} is globally loose in G in ^d.
Let H=(V,E) be a graph and x,y∈ V. We use κ_H(x,y) to
denote the maximum number of pairwise internally disjoint xy-paths in
H. Note that if xy∉ E then, by Menger's theorem,
κ_H(x,y) is equal to the size of a smallest set S⊆
V-{x,y} for which there is no xy-path in H-S.
The following lemma is the d-dimensional and slightly stronger
version of <cit.>.
Let G=(V,E) be a graph and let {u,v} be a non-adjacent vertex pair
with κ_G(u,v)≤ d. Then {u,v} is globally loose in G in ^d.
Let G_i=(V_i,E_i) be a graph, t ≥ 1 an integer, and suppose that K_i is a complete subgraph of G_i
on t vertices, for i=1,2. Then the t-clique sum operation on G_1,G_2,
along K_1,K_2, creates a new graph G by identifying the vertices of K_1 with the vertices
of K_2, following some bijection between their vertex sets.
The clique sum operation is a t-clique sum operation for some t≥ 1.
In the following lemma sufficiency follows from the simple observation that if a vertex pair is
weakly globally linked in a subgraph of G, then it is also weakly
globally linked in G. Necessity follows from the fact that the clique sum
operation is performed along a complete (and hence globally rigid) subgraph.
Suppose that G is the clique sum of G_1 and G_2 and let u,v∈ V(G_1).
Then {u,v} is weakly globally linked in G in ^d if and only if
{u,v} is weakly globally linked in G_1 in ^d.
§ A SUFFICIENT CONDITION FOR WEAK GLOBAL LINKEDNESS IN ^D
In this section we provide a new sufficient condition
for the weak global linkedness of a pair of vertices of
a (d+1)-connected graph in ^d.
An important ingredient in our proof is a
geometric lemma (Lemma <ref>) presented in the next subsection. In Subsection <ref> we prove the aforementioned sufficient condition and in Subsection <ref> we show how it can be used to prove the sufficiency part of Theorem <ref>.
In the last subsection
we shall see that an appropriate reverse of Lemma <ref> is also true (Lemma <ref>).
This lemma will be used in the next section where we characterize weak global linkedness in two dimensions.
Roughly speaking these two lemmas show that if a vertex pair {u,v}
belongs to a rigid subgraph H of G, then
the contraction of a
connected subgraph of G-V(H)
does not change the weak global linkedness properties of {u,v}.
§.§ The first contraction lemma
A basic graph operation is the contraction of a subset V_0 of V
in the graph G=(V,E).
This operation, which is denoted by G/V_0, identifies the vertices of V_0 and removes
the loops and parallel copies of the edges of the resulting graph that it may create.
The contraction of an edge e=xy is the contraction of the set {x,y} and it is denoted by G/e.
Let G=(V,E) be a graph, u,v∈ V, and suppose that G[V_0] is a (u,v)-rigid subgraph of G.
Let
e=(s_1,s_2)∈ E-E(G[V_0]) be an edge.
If {u,v} is weakly globally linked in G/e in ^d, then {u,v} is weakly globally linked in G in ^d.
We may assume that G is connected and s_2∉ V_0.
Let s denote the vertex of G/e obtained by identifying s_1 and s_2 in G.
Note that we may have s_1∈ V_0.
In this case we shall
simply identify s with s_1
for notational convenience.
Let (G/e,p) be a generic realization of G/e in which
{u,v} is globally linked.
Let (G,p_i) be a sequence of generic realizations of G, for which
p_i|_V-s_1-s_2=p|_V-s, p_i(s_1)=p(s), and p_i(s_2)→ p(s).
Suppose, for a contradiction, that {u,v} is globally loose in G. Then {u,v} is not globally linked in (G,p_i) for all i≥ 1. Hence for all i≥ 1 there exists
a realization (G,p_i'), equivalent to (G,p_i), for which
||p_i'(u)-p_i'(v)||≠ ||p_i(u)-p_i(v)||=||p(u)-p(v)||.
Since
G[V_0] is rigid and p|_V_0=p_i|_V_0, it follows from Proposition <ref> that there is an
ϵ >0 such that for all i≥ 1,
|||p_i'(u)-p_i'(v)||-||p(u)-p(v)|||≥ϵ.
Since G is connected, we can translate each framework, if necessary, so that for all i≥ 1,
(G,p_i') is in the interior of a ball of radius K, centered at the origin, for some fixed
positive real number K. Thus
there is a convergent subsequence p_i_k'→ p'. Since (s_1,s_2)∈ E, we must have p'(s_1)=p'(s_2).
By extending p'|_V-s_1-s_2 with p'(s)=p'(s_1), we obtain a realization (G/e,p') which is equivalent to (G/e,p). Furthermore, we have
|||p'(u)-p'(v)||-||p(u)-p(v)|||≥ϵ,
which contradicts the fact that {u,v} is globally linked in (G/e,p).
Thus {u,v} is weakly globally linked in G.
We obtain the following sufficient (but not necessary, see Figure <ref>) condition for weak global linkedness as a corollary.
Let G=(V,E) be a graph, u,v∈ V. Suppose that there is some V_0⊂ V such that G[V_0] is a (u,v)-rigid
subgraph of G in ^d, and
there is a uv-path in G that is internally disjoint from V_0. Then
{u,v} is weakly globally linked in G in ^d.
Corollary <ref>, together with Lemma <ref>, leads to short proofs for some previous results on
globally rigid graphs. We illustrate this by the following theorem.
<cit.>
Let G_1 and G_2 be two globally rigid graphs in ^d on at least d+2 vertices,
with exactly d+1 vertices in common. Suppose that e is a common edge. Then G=G_1∪ G_2-e
is globally rigid in ^d.
Let e=uv. Theorem <ref> implies that G_1-e is rigid.
Since G_2 is (d+1)-connected, there is a path from u to v in G that is internally disjoint
from G_1. Thus {u,v} is weakly globally linked in G by Corollary <ref>.
It is easy to see that G+uv is globally rigid. Hence G is also globally rigid by Lemma <ref>.
By using the same proof idea we obtain a simple proof of the “rooted minor" theorem of Tanigawa <cit.>.
§.§ The sufficient condition
Let G=(V,E) be a graph, ∅≠ X⊆ V, and let V_1,V_2,…, V_r be the
vertex sets of the connected components of G-X.
The graph Con(G,X) is obtained from G by contracting each vertex set V_i into a single vertex v_i, 1≤ i≤ r.
The graph Clique(G,X) is obtained from G by deleting the vertex sets V_i, 1≤ i≤ r, and adding
a new edge xy for all pairs x,y∈ N_G(V_i), xy∉ E, for 1≤ i≤ r.
See Figure <ref>.
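Both auxiliary graphs are straightforward to construct; a sketch (networkx; function names are ours, and fresh tuple labels are used for the contracted components):

from itertools import combinations
import networkx as nx

def con(G, X):
    # Con(G,X): contract each connected component of G - X to a single vertex.
    H = G.copy()
    rest = set(G.nodes) - set(X)
    for i, comp in enumerate(nx.connected_components(G.subgraph(rest))):
        comp = list(comp)
        for v in comp[1:]:
            H = nx.contracted_nodes(H, comp[0], v, self_loops=False)
        H = nx.relabel_nodes(H, {comp[0]: ("contracted", i)})
    return H

def clique(G, X):
    # Clique(G,X): delete every component D of G - X and turn N_G(D) into a clique.
    H = G.subgraph(X).copy()
    rest = set(G.nodes) - set(X)
    for comp in nx.connected_components(G.subgraph(rest)):
        nbrs = {x for v in comp for x in G.neighbors(v)} - set(comp)
        H.add_edges_from(combinations(nbrs, 2))
    return H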
Let G=(V,E) be a (d+1)-connected graph.
Suppose that G[V_0] is a rigid subgraph of G for some V_0⊆ V.
Then Clique(G,V_0) is globally rigid in ^d
if and only if Con(G,V_0) is globally rigid in ^d.
Let E' be the set of those edges in Clique(G,V_0) that are not in G[V_0]. Let H= Con(G,V_0)+E'. It follows from
Corollary <ref> that {u,v} is weakly globally linked in Con(G,V_0) for all uv∈ E'. Hence, by Lemma <ref>, Con(G,V_0) is globally rigid if and only if H is globally rigid. H can be obtained from Clique(G,V_0) by adding new vertices and joining them to cliques of size at least d+1. Thus H is globally rigid if and only if Clique(G,V_0) is globally rigid.
We are ready to state the main result of this section.
Let
G=(V,E) be a (d+1)-connected graph and u,v∈ V.
Suppose that G[V_0] is a (u,v)-rigid subgraph of G in ^d. If Clique(G,V_0) is globally rigid in ^d,
then {u,v} is weakly globally linked in G in ^d.
Suppose that Clique(G,V_0) is globally rigid in ^d. Then so is Con(G,V_0) by Lemma <ref>. In particular,
{u,v} is weakly globally linked in Con(G,V_0).
Since Con(G,V_0)
can be obtained from G by contracting edges not induced by V_0, Lemma
<ref> gives that {u,v} is weakly globally linked in G.
§.§ Globally rigid graphs - a new proof
Theorem <ref> and Lemma <ref> lead to a new short proof of the sufficiency
part of Theorem <ref>, which only uses
the simple combinatorial Lemmas <ref> and <ref> and
the fact that the global rigidity
of graphs in ^2 is a generic property.
The original proof in <cit.> relies on an inductive construction of 3-connected
R_2-connected graphs.
(of sufficiency in Theorem <ref>)
The proof is by induction on |V|. If |V|=4 then G is a complete graph on four
vertices, which is globally rigid. So we may suppose that |V|≥ 5.
First, we show that for all non-adjacent pairs u,v, there is a (u,v)-rigid proper induced subgraph G[X]
of G. To see this consider two edges e,f∈ E incident with u and v, respectively.
Since G is
3-connected and redundantly rigid, it is
R_2-connected by Lemma <ref>.
Hence there is an R_2-circuit C in G with e,f∈ E(C).
Since |E(C)|=2|V(C)|-2 and d_C(v)≥ 3 for all v∈ V(C), it follows that
C has at least four vertices of degree three. Thus
there is a
vertex w∈ V(C) with w∉{u,v} and d_C(w)=3. Now X=V(C)-w induces the desired
(u,v)-rigid subgraph.
In the rest of the proof we show that every non-adjacent vertex pair {u,v} of G is weakly globally
linked in G. The theorem will follow from this by Lemma <ref>(c).
Let us fix u,v and consider a (u,v)-rigid proper induced subgraph G[X] of G.
As we have shown above, such a subgraph exists.
By Theorem <ref> it suffices to show that Clique(G,X) is globally
rigid.
Let D be the vertex set of a component of G-X and let H be obtained
from G-D by adding a new edge xy for each non-adjacent pair x,y∈ N_G(D). Since G can be obtained from H by attaching a graph along a complete
subgraph, and removing edges, the 3-connectivity of G implies that H
is 3-connected. A similar argument shows that H is rigid, and so is H-e
for every edge e in H not induced by N_G(D).
Thus if H has some R_2-bridges, then they are all induced by
N_G(D). If |N_G(D)|≥ 4, then each edge induced by N_G(D) belongs to
a K_4 in H, so H cannot have R_2-bridges at all.
If |N_G(D)|=3, then every
R_2-bridge in H belongs to the same triangle, on the vertices of N_G(D).
But that is impossible by Lemma <ref>.
Therefore H is a rigid graph with no R_2-bridges,
and hence it is redundantly rigid.
By repeated applications of this argument we obtain that Clique(G,X) is 3-connected and redundantly rigid.
Since |X|≤ |V|-1, we can now use induction to deduce that
Clique(G,X) is globally rigid.
This completes the proof.
We remark that
a different proof for the sufficiency part in
Theorem <ref> was also given by Tanigawa <cit.>. The high level ideas of his proof and the
proof given in this subsection are similar. By using our notation
the main lemma <cit.> can be stated as follows: if v is a vertex of degree at least
d+1 in G,
G-v is rigid in ^d, and Clique(G,V-{v}) is globally rigid in ^d, then G is globally rigid in ^d. This statement is a
special case of
the “only if" direction of our Lemma <ref>.
§.§ The second contraction lemma
As a corollary of Lemma <ref>, it can be deduced that if G[V_0] is a (u,v)-rigid subgraph of a graph G, V_1 is the vertex set of a component of G-V_0 and {u,v} is weakly globally linked in G/V_1, then {u,v} is weakly globally linked in G. In this subsection we shall prove the converse of this statement, see Lemma <ref> below.
We shall need some new notions and an auxiliary lemma.
A configuration of a set U is a function that maps U into ^d.
Two configurations p_1,p_2 of U are said to be congruent if ||p_1(u)-p_1(v)||=||p_2(u)-p_2(v)|| for all u,v∈ U. Suppose that p and q are two incongruent configurations of a set U.
We call a point x∈^d (q,p)-feasible if there exists a point y∈^d such that ||p(u)-x||=||q(u)-y|| for all u∈ U. We then call y a (q,p)-associate of x. Observe that if π is an isometry of ^d, then the set of (q,p)-feasible points is equal to the set of (π∘ q, p)-feasible points.
The affine hull of set X⊆^d will be denoted by Aff(X).
Let p be a configuration of a set U. Suppose that Q={q_1,…,q_k} is a non-empty set of configurations of U
such that q_i is not congruent to p, for all 1≤ i≤ k.
Let F_i be the set of (q_i,p)-feasible points, 1≤ i≤ k.
Then ^d-⋃_i=1^k F_i
is a non-empty open set.
Let S=^d-⋃_i=1^k F_i.
We claim that F_i is closed for every i∈{1,…, k}, which will imply that
S is open.
Let x_j→ x be a convergent sequence with x_j∈ F_i, j∈ℕ,
and let y_j be a (q_i,p)-associate of x_j. The set {y_j:j∈ℕ} is bounded. Hence there exists a convergent subsequence y_j_ℓ→ y. Then y is a (q_i,p)-associate of x, which gives x∈ F_i.
This proves the claim.
In the rest of the proof
we show that S is non-empty. Notice that |U|≥ 2 must hold.
We shall prove the following stronger statement by induction on |U|: for every a∈ U we have
S∩Aff(p(U-{a}))≠∅
First suppose that
|U|=2, and let U={a,b}. Then we have ||q_i(a)-q_i(b)||≠ ||p(a)-p(b)||,
since q_i and p are not congruent. Thus p(b)∈ S, and hence (<ref>) follows.
Next suppose that |U|≥ 3. Let
Q'={q∈ Q: p|_U-{a} is not congruent to q|_U-{a}},
and
let Q”=Q-Q'.
By putting F'=⋃_q_i∈ Q'F_i and F”=⋃_q_i∈ Q”F_i, we have
S=^d-F'-F”.
By induction, the set Aff(p(U-{a,b}))-F' is non-empty for every b∈ U-{a}. Since F' is closed,
this implies that
Aff(p(U-{a}))-F' is non-empty and relatively open in Aff(p(U-{a})).
We claim that for all q_i ∈ Q” the set
F_i∩ Aff(p(U-{a})) is either empty or a proper affine subspace of Aff(p(U-{a})).
To prove the claim, let q_i∈ Q”. By replacing q_i with π∘ q_i, where π is an appropriate isometry of ^d, we may assume that p|_U-{a}= q_i|_U-{a}. Then it follows from the incongruency of p and q_i that p(a)≠ q_i(a). Suppose that x∈ Aff(p(U-{a})) and y is a (q_i,p)-associate of x. Then there exists an isometry that fixes each point of p(U-{a}) and maps x to y.
This isometry fixes each point of Aff(p(U-{a})), therefore, y=x. So the only possible (q_i,p)-associate of x is x itself.
It follows that x is (q_i,p)-feasible if and only if ||x-p(a)||=||x-q_i(a)||, that is, if x is in the bisector hyperplane H of p(a) and q_i(a).
Since q_i and p are not congruent, we obtain Aff(p(U-{a}))⊈H. This proves the claim.
The lemma follows by noting that (<ref>) and (<ref>) yield that the set
S∩Aff(p(U-{a}))=( Aff(p(U-{a}))-F')-F” is non-empty, and hence (<ref>) holds.
Let G=(V,E) be a graph, u,v∈ V, and suppose that G[V_0] is a (u,v)-rigid
subgraph of G.
Let V_1 be the vertex set of some component of G-V_0.
Then {u,v} is weakly globally linked in G in ^d if and only if {u,v} is weakly globally linked in G/V_1 in ^d.
Since G/V_1 can be obtained from G by contracting edges not induced by V_0, the “if" direction follows by repeated applications of Lemma <ref>.
To prove the “only if" direction suppose that {u,v} is weakly globally linked in G and let (G,p) be a generic realization of G in which {u,v} is globally linked.
Let v_1 be the vertex of G/V_1 obtained by the contraction of V_1 in G.
We shall prove that p|_V-V_1 has an extension to (V-V_1)∪{v_1} that is a generic realization of G/V_1
in which {u,v} is globally linked.
We may assume that {u,v} is not globally linked in (G-V_1,p|_V-V_1), for otherwise we are done by choosing an arbitrary generic extension.
Let q_1,…,q_k be a maximal set of pairwise incongruent configurations of V_0
such that
||q_i(u)-q_i(v)||≠ ||p(u)-p(v)|| and
q_i is a restriction of some realization of G-V_1 which is equivalent to (G-V_1,p|_V-V_1),
for 1≤ i≤ k.
By our assumption k≥ 1.
Proposition <ref> implies
that k is finite, since
G[V_0] is rigid, p is generic and (G[V_0],q_i) is equivalent to (G[V_0],p|_V_0).
For all i∈{1,…,k} the configurations q_i|_N_G(V_1) and p|_N_G(V_1) are incongruent, for otherwise
q_i would be extendible to a configuration q_i' so that (G,q_i') is equivalent to (G,p), contradicting the assumption that {u,v} is globally linked in (G,p).
Applying Lemma <ref> to N_G(V_1), p|_N_G(V_1) and
the set Q={q_1|_N_G(V_1),…, q_k|_N_G(V_1)} gives that there is some x=(x_1,…, x_d)∈^d for which x is not (q_i|_N_G(V_1),p|_N_G(V_1))-feasible for all i∈{1,…,k} and for which p(V-V_1)∪{x} is generic. We can now complete the proof of the lemma by considering the generic realization (G/V_1,p), where
p|_V-V_1=p|_V-V_1 and p(v_1)=x.
Then
{u,v} is globally linked in (G/V_1,p).
Indeed, the existence of an equivalent realization (G/V_1,q) with ||q(u)-q(v)||≠ ||p(u)-p(v)|| would imply that q|_V_0=q_i for some 1≤ i≤ k and that
x is (q_i|_N_G(V_1),p|_N_G(V_1))-feasible, contradicting the choice of x.
Let G=(V,E) be a graph, u,v∈ V, and let G[V_0] be a (u,v)-rigid subgraph of G.
Let e=(s_1,s_2)∈ E be an edge with s_1,s_2∉ V_0. Notice that Lemma <ref> implies that {u,v} is weakly globally linked in G in ^d if and only if {u,v} is weakly globally linked in G/e in ^d: each of these two conditions is equivalent to the condition that {u,v} is weakly globally linked in G/V_1, where V_1 is the connected component of G-V_0 that contains e. By the same argument, for any connected subgraph G_1 of G-G_0, {u,v} is weakly globally linked in G in ^d if and only if {u,v} is weakly globally linked in G/V(G_1) in ^d.
§ WEAKLY GLOBALLY LINKED PAIRS IN ^2
In this section we focus on the d=2 case.
Thus, we shall occasionally write that a graph is (globally) rigid to mean that it is (globally) rigid in ^2, and we may similarly omit the dimension when referring to global linkedness of vertex pairs in graphs.
This section contains one of our main results, a characterization of weakly globally
linked pairs in graphs.
§.§ Weakly globally linked pairs in 3-connected graphs
We start with the special case of 3-connected graphs. By Lemma <ref> it suffices to consider
non-adjacent
linked pairs {u,v} of G, or equivalently,
pairs {u,v} for which there exists some subgraph G_0=(V_0,E_0) of G with u,v∈ V_0 such that G_0+uv is an R_2-circuit.
Let
G=(V,E) be a 3-connected graph and u,v∈ V with uv∉ E.
Suppose that G_0=(V_0,E_0) is a subgraph of G with u,v∈ V_0 such that G_0+uv is an R_2-circuit. Then
{u,v} is weakly globally linked in G in ^2 if and only if
Clique(G,V_0) is globally rigid in ^2.
Since G[V_0] is rigid, sufficiency follows from Theorem <ref>.
To prove the other direction
suppose, for a contradiction, that {u,v} is weakly globally linked in G
but Clique(G,V_0) is not globally rigid.
Since G_0+uv is redundantly rigid,
so is
Clique(G,V_0)+uv. The 3-connectivity of G implies
that Clique(G,V_0) is 3-connected. Thus
Clique(G,V_0)+uv is globally rigid by Theorem <ref>.
Hence {u,v} is globally loose in Clique(G,V_0) by Lemma <ref>.
As G can be obtained from Clique(G,V_0) by clique sum operations and removing edges,
Lemma <ref> implies that {u,v} is globally loose in G, a contradiction.
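For a linked non-adjacent pair, a suitable R_2-circuit through uv can be obtained as the fundamental circuit of uv with respect to a basis of the rigidity matroid, after which the theorem reduces the question to a global rigidity test. A rough sketch reusing the helpers above (our function names; floating-point ranks stand in for exact generic ranks, and the number of rank computations is far from the best known bounds):

def r2(edge_list, pos):
    # Rank of an edge set in the generic two-dimensional rigidity matroid.
    if not edge_list:
        return 0
    verts = list({v for e in edge_list for v in e})
    return np.linalg.matrix_rank(rigidity_matrix(verts, edge_list, pos, d=2))

def weakly_globally_linked_3connected(G, u, v, seed=0):
    # Assumes G is 3-connected and uv is not an edge of G.
    rng = np.random.default_rng(seed)
    pos = {w: rng.random(2) for w in G.nodes}
    E = list(G.edges)
    if r2(E + [(u, v)], pos) != r2(E, pos):
        return False                      # {u,v} is not linked, hence globally loose
    B = []                                # greedy basis of the rigidity matroid of G
    for e in E:
        if r2(B + [e], pos) == len(B) + 1:
            B.append(e)
    # fundamental R_2-circuit of uv with respect to B
    circuit = [(u, v)] + [e for e in B
                          if r2([f for f in B if f != e] + [(u, v)], pos) == len(B)]
    V0 = {w for e in circuit for w in e}
    return is_globally_rigid_2d(clique(G, V0))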
§.§ Weakly globally linked pairs and 2-separators
In this subsection we shall prove some lemmas that
describe, among others, how weak global linkedness is affected when
the graph is cut into two parts along a separating pair of vertices.
These lemmas will enable us to reduce the question of
whether a linked pair of vertices in a graph G is weakly globally linked to the case
when G is 3-connected.
We shall also need the following extension of <cit.>.
Let G=(V,E) be a rigid graph, ab∈ E an R_2-bridge in G,
and u,v∈ V with uv∉ E.
Suppose that
G has no (u,v)-rigid proper induced subgraph.
Then {u,v} is globally loose in G.
Let H=(V,B) be a minimally rigid spanning subgraph of G. Since ab is an R_2-bridge, we have
ab∈ B.
The graph H+uv contains a unique R_2-circuit C with u,v∈ V(C). Since
G has no (u,v)-rigid proper induced subgraph, C=H+uv must hold.
Let (G,p_0) be a generic realization of G.
By <cit.>
the generic framework
(H-ab,p_0) has an equivalent realization (H-ab,p_1), which can be obtained by a flexing, and for which
||p_0(u)-p_0(v)||≠ ||p_1(u)-p_1(v)|| and ||p_0(a)-p_0(b)||= ||p_1(a)-p_1(b)||.
Consider an edge xy∈ E-B. Since H is rigid, xy belongs to an R_2-circuit C' of H+xy.
Moreover,
ab is an R_2-bridge in G (as well as in its subgraph H+xy), and hence C' does not contain ab.
Thus there is a
rigid subgraph of H-ab (namely, C'-xy), which contains x and y.
Hence the flexing does not
change the distance between x and y. Therefore
(G,p_1) is equivalent to (G,p_0). Since the distances between u and v are different in
these realizations, it follows that {u,v} is globally loose in G.
Let
G=(V,E) be a rigid graph, let z∈ V with
N_G(z)={x,y}, and let u,v∈ V-{z}. Then {u,v}
is weakly globally linked in G-z+xy if and only if
{u,v} is weakly globally linked in G.
Let G_1=G-z+xy.
Observe that G_1 is isomorphic to G/zx. Since
G-z is rigid, we can use Lemma <ref> to deduce that if {u,v}
is weakly globally linked in G_1, then {u,v} is weakly globally linked in G.
To prove the other direction suppose that {u,v} is weakly globally linked in G.
Then it is also weakly globally linked in G+xy.
Since G+xy is the clique sum of G_1 and a copy of K_3,
Lemma <ref> implies that {u,v} is weakly globally linked in G_1.
A pair (a,b) of vertices of a 2-connected graph H=(V,E) is called
a 2-separator if H-{a,b} is disconnected.
Let G=(V,E) be a rigid graph with |V|≥ 4 and
(a,b) be a 2-separator in G.
Let C be a connected component of G-{a,b} and
let V_0=V(C)∪{a,b}. Suppose that u,v∈ V_0.
Then {u,v} is weakly globally linked in G if and only if {u,v} is weakly globally linked in G[V_0]+ab.
If {u,v} is weakly globally linked in G, then it is easy to see, by using
Lemma <ref>, that {u,v} is weakly globally linked in G[V_0]+ab.
To prove that if {u,v} is weakly globally linked in G[V_0]+ab, then {u,v} is weakly globally linked in G,
we use induction on |V|. If |V|=4, then we must have G=K_4-e and uv∈ E, so the statement is obvious.
Suppose that |V|≥ 5.
If there exists a (u,v)-rigid subgraph of G[V_0], then, since
G[V_0]+ab can be obtained from G by a sequence of edge contractions, we can use
Lemma <ref> to deduce that {u,v} is weakly globally linked in G.
So in the rest of the proof we may assume that
G[V_0] has no (u,v)-rigid subgraph.
In particular, G[V_0] is not rigid. Hence, by the rigidity of G,
it follows that {a,b} is not linked in G[V_0] and
ab is an R_2-bridge in G[V_0]+ab.
Since {u,v} is weakly globally linked in G[V_0]+ab, Lemma <ref> implies that
there exists a (u,v)-rigid proper induced subgraph
G'=(V',E') of G[V_0]+ab.
Suppose that G' is vertex-minimal.
By (<ref>) we obtain ab∈ E' and a,b⊂ V'.
We consider three cases depending on the structure of
G[V_0]-V'.
Since G' is a proper induced subgraph, we have V_0-V'≠∅. See Figure <ref>.
Case 1: G[V_0]-V' has a component Z with |V(Z)|≥ 2.
By Lemma <ref> {u,v} is weakly globally linked in (G[V_0]+ab)/Z.
Since G and G' are rigid, G-V(Z) is also rigid, and Z has at least two neighbours in G.
Hence
G/Z is rigid.
Thus we obtain, by induction, that
{u,v} is weakly globally linked in G/Z. By using that G-V(Z)
is rigid, Lemma <ref>
gives that {u,v} is weakly globally linked in G.
Case 2: Each component of G[V_0]-V' is a singleton and there exists a vertex z∈ V_0-V' with d_G(z)=2.
Let N_G(z)={x,y}. By Lemma <ref> {u,v} is weakly globally linked in (G[V_0]+ab)-z+xy.
If {u,v}={a,b} and |V_0|=3, then {u,v} is weakly globally linked in G
by Lemma <ref>.
So we may assume that |V_0|≥ 4, and hence
(a,b) is a 2-separator of the rigid graph G-z+xy.
Hence
{u,v} is weakly globally linked
in G-z+xy by induction.
By using that G is rigid,
Lemma <ref> implies that {u,v} is weakly globally linked
in G.
Case 3: Each component of G[V_0]-V' is a singleton and for each z∈ V_0-V' we have d_G(z)≥ 3.
We claim that for each z∈ V_0-V' and x,y∈ N_G(z)
there is a rigid subgraph of G'-ab which contains x and y.
To see this let w be another neighbour of z, different from x,y,
and let G” be obtained from G'-ab by adding vertex z and edges zx,zy,zw.
The three edges incident with z in G” cannot be R_2-bridges, since it
would imply, by using the rigidity of G' and computing ranks,
that G” is (u,v)-rigid, contradicting (<ref>).
Thus there is an R_2-circuit C in G” containing z. Then C must contain
x and y, too, and C-z is a rigid subgraph of G'-ab which contains x and y, as claimed.
The minimality of G' implies that it has no (u,v)-rigid proper induced subgraph.
Let (G[V_0]+ab,p) be a generic realization.
By (the proof of) Lemma <ref> (G'-ab,p|_V') has an equivalent realization (G'-ab,q), for which ||p(u)-p(v)||≠ ||q(u)-q(v)||, ||p(a)-p(b)||= ||q(a)-q(b)||, and such
that the distances between the linked pairs of G'-ab are the same in the two realizations.
Then, since
each pair of neighbours of every z∈ V_0-V' is
linked in
G'-ab, it follows that (G'-ab,q)
can be extended to a realization (G[V_0]+ab,q') that is equivalent to (G[V_0]+ab,p).
Hence {u,v} is
globally loose in G[V_0]+ab, a contradiction. This completes the proof.
We next extend Lemma <ref> to 2-connected graphs.
Let G=(V,E) be a 2-connected graph and {u,v} be a linked pair of vertices of G. Suppose that (a,b) is a 2-separator of G. Let C be a connected component of G-{a,b}, and let V_0=V(C)∪{a,b}. Suppose that u,v∈ V_0. Then {u,v} is weakly globally linked in G if and only if {u,v} is weakly globally linked in G[V_0]+ab.
If {u,v} is weakly globally linked in G, then it follows from Lemma <ref> that {u,v} is weakly globally linked in G[V_0]+ab. To prove the “if" direction suppose that {u,v} is weakly globally linked in G[V_0]+ab. Since {u,v} is a linked pair, there is a
(u,v)-rigid induced subgraph G[U] of G. If {a,b}⊈U, then U is a subset of V_0
and G[V_0]+ab can be obtained from G by contracting edges which are not induced by U. Thus
{u,v} is weakly globally linked in G by Lemma <ref>. So we may suppose that {a,b}⊆ U. Let A_1,…, A_k be the components of G-U contained in V_0, and let B_1, …, B_l be the components of G-U not contained in V_0.
Observe that the rigidity of G[U] and the 2-connectivity of G imply that
G/A_1/…/A_k/B_1/…/B_l is rigid. Hence we have that
{u,v} is weakly globally linked in G
⇔ {u,v} is weakly globally linked in G/A_1/…/A_k/B_1/…/B_l
⇔ {u,v} is weakly globally linked in G[V_0]/A_1/…/A_k + ab
⇔ {u,v} is weakly globally linked in G[V_0]+ab,
where the first and third equivalences follow from Lemma <ref> and the second equivalence follows from Lemma <ref>, using the rigidity of G/A_1/…/A_k/B_1/…/B_l.
The next lemma on the weak global linkedness of linked separating pairs
follows from
Lemma <ref> by putting {a,b}={u,v}.
It can also be deduced from Lemma <ref> by using that
there is some component C of G-{u,v} for which {u,v} is linked in G[V(C)∪{u,v}].
Let G=(V,E) be a 2-connected graph, and u,v∈ V be a linked pair of vertices for which (u,v) is a 2-separator in G. Then {u,v} is weakly globally linked in G.
We use the following operation to eliminate 2-separators. Let G=(V,E) be a 2-connected graph, let
(a,b) be a 2-separator in G, and let
C be a connected component of G-{a,b}.
We say that the graph G[V(C)∪{a,b}]+ab (when ab∉ E) or G[V(C)∪{a,b}] (when ab∈ E)
is obtained from G by a
cleaving operation along (a,b). The graph G̅ obtained from G by adding every edge ab, for which ab∉ E and (a,b) is a 2-separator of G,
is called the augmented graph of G.
The following lemma is easy to show by induction, using the cleaving operation.
Let G=(V,E) be a 2-connected graph and let {u,v} be a non-adjacent vertex pair in G
with κ_G(u,v)≥ 3. Then either (u,v) is a separating pair in G or there is a unique
maximal 3-connected subgraph B of G̅ with {u,v}⊂ V(B).
In the latter case
the subgraph B can be obtained from G by a sequence of cleaving operations.
Furthermore, uv∉ E(B), and
if the pair {u,v} is linked in G then it is also linked in B.
The subgraph B in Lemma <ref> is called the 3-block of {u,v} in G.
We are ready to state the
main result of this section: a complete characterization of the non-adjacent weakly globally linked pairs in a graph G.
By Lemma <ref> and Lemma <ref>
we may assume that {u,v} is linked and κ_G(u,v)≥ 3 (for otherwise {u,v} is globally loose).
By Lemma <ref> we may also assume that
G is 2-connected.
Let G=(V,E) be a 2-connected graph and let {u,v} be a non-adjacent linked pair of vertices
with κ_G(u,v)≥ 3.
Then {u,v} is weakly globally linked in G if and only if either
(i) (u,v) is a separating pair in G, or
(ii) Clique(B,V_0) is globally rigid,
where B is the 3-block of {u,v} in G, and B_0=(V_0,E_0) is a subgraph
of B with u,v∈ V_0 such that B_0+uv is an R_2-circuit.
The proof is by induction on the number h of vertex pairs x,y∈ V with κ_G(x,y)=2.
If h=0, then B=G and (ii) holds by Theorem <ref>.
Suppose that h≥ 1 and let (a,b) be a 2-separator in G. If {a,b}={u,v} then
Lemma <ref> applies and (i) holds.
Otherwise we can use Lemmas <ref>, <ref>, and induction, to
complete the proof.
See Figure <ref> for an illustration of
Theorem <ref>.
§ CONCLUDING REMARKS
§.§ Algorithmic aspects
Theorem <ref> and its proof shows that weak global linkedness of a vertex pair
{u,v} in a graph G=(V,E) can be tested in O(|V|^2) time, as efficient
algorithms are available for each of the required subroutines. Basic graph algorithms can
be used to test, in linear time, whether κ_G(u,v)≥ 3 holds and to find the maximal
2-connected block that contains u,v. After reducing the problem to the 2-connected case,
the linear time algorithm of <cit.> can be applied to check whether (u,v) is a separating pair and (when it is not)
to identify the 3-block B of {u,v}. (Note that B coincides with one of the
so-called cleavage units of G.) Computing Clique(G,X) for a given X⊆ V is also easy.
Testing whether {u,v} is linked, and (when it is linked) finding an R_2-circuit of G+uv containing uv
can be done in O(|V|^2) time <cit.>. Within the same time bound, we can test
whether a graph is globally rigid, see e.g. <cit.>.
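A rough end-to-end sketch of this pipeline, built from the helpers above, is given below. It omits the reduction to the 3-block of {u,v} via cleavage units, so the final step simply assumes that the 2-connected block containing u and v is already 3-connected; as before, floating-point ranks replace exact generic ranks, and this is an illustration rather than the O(|V|^2) implementation referred to above:

def weakly_globally_linked_2d(G, u, v, seed=0):
    # Decision procedure for a non-adjacent pair {u,v} (simplified; see above).
    rng = np.random.default_rng(seed)
    pos = {w: rng.random(2) for w in G.nodes}
    E = list(G.edges)
    if r2(E + [(u, v)], pos) != r2(E, pos):
        return False                        # not linked  =>  globally loose
    if nx.node_connectivity(G, u, v) < 3:
        return False                        # kappa_G(u,v) <= 2  =>  globally loose
    block = next(c for c in nx.biconnected_components(G) if u in c and v in c)
    B = G.subgraph(block).copy()
    rest = B.copy()
    rest.remove_nodes_from([u, v])
    if rest.number_of_nodes() > 0 and not nx.is_connected(rest):
        return True                         # (u,v) is a linked separating pair of B
    return weakly_globally_linked_3connected(B, u, v, seed=seed)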
§.§ Higher dimensions
Most questions concerning
the higher dimensional versions of (weak) global linkedness are open.
Partial results can be found in <cit.>.
A fairly natural new question is whether
the sufficient condition of weak
global linkedness given in Theorem <ref> is also necessary for d≥ 3, in the sense that for every weakly globally linked pair {u,v} of a graph G there exists a (u,v)-rigid induced subgraph G[X] of G such that Clique(G,X) is globally rigid. We are not aware of
any counter-examples.
Finding extensions and stronger versions of our results is another promising research
direction. Here we mention one example.
If we use
Lemma <ref> (suggested by D. Garamvölgyi) in place of Proposition <ref> in the proof, we obtain
the following strengthening of Lemma <ref>
to linked pairs: if G[V_0] is a subgraph of G in which {u,v} is linked, e∈ E-E(G[V_0]) and {u,v} is weakly globally linked in G/e, then {u,v} is weakly globally linked in G.
Note that
a pair {u,v} may be linked in a graph G_0 in ^d, d≥ 3, even if G_0 contains no (u,v)-rigid subgraph.
For the definitions of the new notions appearing in the next proof see e.g. <cit.>.
Let {u,v} be a linked pair in a graph G in ^d and let (G,p) be a generic realization of G
in ^d. Then the set
{ ||q(u) - q(v)|| : (G,q) is equivalent to (G,p) }
is finite.
Suppose, for a contradiction, that there exists an infinite sequence of frameworks
(G,q_i), i≥ 1, equivalent to (G,p), in which the
distances ||q_i(u) - q_i(v)|| are pairwise different.
We may assume that G is connected and q_i(u) is the origin for all i≥ 1.
Then each (G,q_i) is in the interior of a ball of radius K, for some constant K.
Thus, by passing to a subsequence if necessary, we may assume that (G,q_i) is convergent, with limit (G,q).
Since (G,q) is equivalent to (G,p), and (G,p) is generic, the two frameworks
(G,p) and (G,q) have the same
equilibrium stresses by <cit.>.
In particular, the rank of the rigidity matrix of (G,q) is equal to the maximum (generic) rank of G.
This fact, and the linkedness of {u,v} imply that the ranks of the rigidity matrices
of (G+uv,q) and (G,q) are the same.
So their kernels are the same, too. Thus every infinitesimal motion x:V→^d of (G,q) satisfies
(q(u)-q(v))^T(x(u)-x(v)) = 0. By continuity this
holds for all frameworks in a small enough
neighbourhood of (G,q).
Consider the frameworks
q'_i = (q_{i+1} + q_i)/2. They converge to q, and the well-known averaging technique shows that x_i = (q_{i+1} - q_i) is an infinitesimal motion of q'_i for all i≥ 1 (for a proof see e.g. <cit.>).
The same calculations show that, since ||q_{i+1}(u)-q_{i+1}(v)||≠ ||q_i(u)-q_i(v)||, we have (q'_i(u) - q'_i(v))^T(x_i(u)-x_i(v))≠ 0, a contradiction.
§.§ Minimally globally rigid graphs
A graph G=(V,E) is called minimally globally rigid in ^d if it is globally rigid in ^d and for every edge e∈ E the graph G-e is not globally rigid in ^d.
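With the two-dimensional test sketched earlier, minimal global rigidity can be checked directly from the definition (a sketch; the function name is ours):

def is_minimally_globally_rigid_2d(G):
    # Globally rigid, but the removal of any single edge destroys global rigidity.
    if not is_globally_rigid_2d(G):
        return False
    for e in G.edges:
        H = G.copy()
        H.remove_edge(*e)
        if is_globally_rigid_2d(H):
            return False
    return True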
Garamvölgyi and Jordán <cit.> proved that
if G=(V,E) is minimally globally rigid in ^d and |V|≥ d+1, then
|E|≤ (d+1)|V|-\binom{d+2}{2}.
Moreover, as it is noted in <cit.>,
for every globally rigid graph G in ^d on at least d+1 vertices, and for
every minimally rigid spanning subgraph G_0 of G, there exists a globally rigid spanning
subgraph of G that contains G_0 and has at most (d+1)|V|-\binom{d+2}{2} edges.
Furthermore, the authors conjecture that a minimally globally rigid
graph in ^d is in fact R_d+1-independent, see <cit.>.
The truth of this
conjecture would imply that a minimally globally rigid graph G=(V,E) in ^d is not only sparse, but every subgraph of G is sparse:
for each U⊆ V with |U|≥ d+1 we have |E(U)|≤ (d+1)|U|-\binom{d+2}{2}.
This conjecture was verified for d=2 in <cit.>.
Next we prove this upper bound for all d,
in the special case when the subgraph induced by U is rigid.
Let G=(V,E) be a minimally globally rigid graph in ^d. Suppose that U⊆ V, |U|≥ d+1 and G[U] is rigid. Then |E(U)|≤ (d+1)|U|-\binom{d+2}{2}.
Let G_0=(U,E_0) be a minimally rigid spanning subgraph of G[U]. Since G is globally rigid, so is Clique(G,U). Thus, by the results of <cit.>, there is a globally rigid spanning subgraph G'=(U,E') of Clique(G,U) that contains G_0 and has at most (d+1)|U|-\binom{d+2}{2} edges. Suppose, for a contradiction, that there is some edge e=uv∈ E(U)-E'.
Note that G[U]-e is rigid.
Then G' is a subgraph of Clique(G-e,U), and hence {u,v} is weakly globally linked in G-e by Theorem <ref>.
Since e is critical in G, this contradicts Lemma <ref>. It follows that G[U] is a subgraph of G'; therefore |E(U)|≤ |E'|≤ (d+1)|U|-\binom{d+2}{2}.
For d=2 we can extend Theorem <ref> to all subsets U⊆ V with |U|≥ d+1.
As we noted above, this two-dimensional result is not new, as it follows from
<cit.>. Here we give a new proof in order to illustrate how Theorem <ref> might be applied to attack the d-dimensional case.
<cit.>
Let G=(V,E) be a minimally globally rigid graph in ^2. Suppose that U⊆ V and |U|≥ 3. Then |E(U)|≤ 3|U|- 6.
By Theorem <ref> the statement is true if G[U] is rigid. Suppose that G[U] is not rigid, that is, r_2(G[U])≤ 2|U|-4.
We may assume that G[U] has no isolated vertices.
It is well-known (see e.g. <cit.>) that
for the collection G_i=(V_i,E_i), 1≤ i≤ k, of the maximal rigid subgraphs of G[U] we have
∑_i=1^k (2|V_i|-3)=r_2(G[U]).
For an integer h≥ 2 let
f(h) = 3h-6, if h≥ 3, and let f(h)=1 otherwise.
Then we have
|E(U)|= ∑_i=1^k |E_i| ≤∑_i=1^k f(|V_i|)≤∑_i=1^k 3/2(2|V_i|-3)=3/2r_2(G[U])≤ 3|U|-6,
where the first inequality follows from Theorem <ref>.
§ ACKNOWLEDGEMENTS
This research has been implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the ELTE TKP 2021-NKTA-62 funding scheme. The first author was also
supported by the Hungarian Scientific Research Fund grant no. K135421, and
the MTA-ELTE Momentum Matroid Optimization Research Group.
We thank Dániel Garamvölgyi for several useful remarks, and for suggesting Lemma <ref>. We also thank Csaba Király for his comments.
99
AR L. Asimow and B. Roth, The rigidity of graphs, Trans. Amer. Math. Soc., 245 (1978), pp. 279-289.
BJ A.R. Berg and T. Jordán, Algorithms for graph rigidity and scene analysis,
Proc. 11th Annual
European Symposium on Algorithms (ESA) 2003, (G. Di Battista and U. Zwick, eds) Springer
LNCS 2832, pp. 78-89, 2003.
Con R. Connelly,
Generic global rigidity,
Discrete Comput. Geom. 33:549-563 (2005).
Conmerge R. Connelly,
Combining globally rigid frameworks,
Proc. of the Steklov Institute of Mathematics,
275, 191-198, 2011.
coning R. Connelly and W. Whiteley,
Global rigidity: the effect of coning,
Discrete Comput Geom (2010) 43: 717–735.
GJunitball
D. Garamvölgyi and T. Jordán, Global rigidity of unit ball
graphs, SIAM J. Discrete Math. 34:1, pp. 212-229, 2020.
GJcccg
D. Garamvölgyi and T. Jordán, Globally linked pairs in
braced maximal outerplanar graphs, Proc. CCCG 2022, Toronto, August 2022, pp. 162-168.
GJpartial
D. Garamvölgyi and T. Jordán, Partial reflections and globally linked
pairs in rigid graphs, arXiv:2305.03412, May 2023.
GJmgr D. Garamvölgyi and T. Jordán, Minimally globally
rigid graphs, European J. Combin., Vol. 108., 103626, 2023.
Gluck H. Gluck,
Almost all simply connected closed surfaces are rigid,
Geometric topology (Proc. Conf., Park City, Utah, 1974),
pp. 225–239.
Lecture Notes in Math., Vol. 438,
Springer, Berlin, 1975.
GHT S. Gortler, A. Healy, and D. Thurston,
Characterizing generic global rigidity,
American Journal of Mathematics, Volume 132, Number 4, August 2010,
pp. 897-939.
hend B. Hendrickson,
Conditions for unique graph realizations,
SIAM J. Comput. 21 (1992), no. 1, 65-84.
HT J.E. Hopcroft and R.E. Tarjan, Dividing a graph into triconnected
components, SIAM J. Comput. 2 (1973), 135–158.
JJconnrig B. Jackson and T. Jordán,
Connected rigidity matroids and unique
realizations of graphs,
J. Combin. Theory Ser. B, Vol. 94, 1-29, 2005.
JJS B. Jackson, T. Jordán, and Z. Szabadka,
Globally linked pairs of vertices in equivalent realizations of
graphs, Discrete Comput. Geom., Vol. 35,
493-512, 2006.
JJS2 B. Jackson, T. Jordán, and Z. Szabadka,
Globally linked pairs of vertices in rigid frameworks, in:
Rigidity and Symmetry,
Fields Institute Communications, Vol. 70,
R. Connelly, A. Ivic Weiss, W. Whiteley (Eds.) 2014, pp. 177-203.
JmemoirsT. Jordán,
Combinatorial rigidity: graphs and matroids
in the theory of rigid frameworks. In: Discrete Geometric Analysis,
MSJ Memoirs, vol. 34, pp. 33-112, 2016.
JKTT. Jordán, Cs. Király, and S. Tanigawa, Generic global rigidity of body-hinge frameworks,
J. Combin. Theory, Series B 117, 59-76, 2016.
JW T. Jordán and W. Whiteley,
Global rigidity, in J. E.
Goodman, J. O'Rourke, and C. D. Tóth (eds.), Handbook of Discrete and Computational
Geometry, 3rd ed., CRC Press, Boca Raton, pp. 1661-1694, 2018.
JT T. Jordán and S.Tanigawa, Global rigidity of triangulations
with braces, J. Comb. Theory Ser. B., 136, pp. 249-288 (2019).
KM Cs. Király and A. Mihálykó,
Fast algorithms for sparsity matroids and the global rigidity augmentation problem, Egerváry Research Group, Budapest,
TR-2022-05, 2022.
laman G. Laman,
On graphs and rigidity of plane
skeletal structures,
J. Engineering Math. 4 (1970),
331-340.
oxley J.G. Oxley,
Matroid theory,
Oxford Science Publications.
The Clarendon Press, Oxford University Press, New York, 1992. xii+532 pp.
Saxe J.B. Saxe, Embeddability of weighted graphs
in k-space is strongly NP-hard, Technical report, Computer Science Department,
Carnegie-Mellon University, Pittsburgh, PA, 1979.
SW B. Schulze and W. Whiteley,
Rigidity and scene analysis, in
J.E. Goodman, J. O'Rourke, C.D. Tóth (eds.),
Handbook of Discrete and Computational
Geometry, 3rd ed., CRC Press, Boca Raton,
2018.
Tani S. Tanigawa, Sufficient conditions for the global rigidity of graphs,
J. Combin. Theory, Ser. B., Vol. 113, July 2015, Pages 123-140.
|
http://arxiv.org/abs/2307.09566v1 | 20230714210114 | Fast design and scaling of multi-qubit gates in large-scale trapped-ion quantum computers | [
"Yotam Shapira",
"Lee Peleg",
"David Schwerdt",
"Jonathan Nemirovsky",
"Nitzan Akerman",
"Ady Stern",
"Amit Ben Kish",
"Roee Ozeri"
] | quant-ph | [
"quant-ph"
] |
^1Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 7610001, Israel
^2 Department of Physics of Condensed Matter Systems, Weizmann Institute of Science, Rehovot 7610001, Israel
^3 Quantum Art LTD, Ness Ziona 7403682, Israel
^* These authors contributed equally to this work
Quantum computers based on crystals of electrically trapped ions are a prominent technology for quantum computation. A unique feature of trapped ions is their long-range Coulomb interactions, which come about as an ability to naturally realize large-scale multi-qubit entanglement gates. However, scaling up the number of qubits in these systems, while retaining high-fidelity and high-speed operations is challenging. Specifically, designing multi-qubit entanglement gates in long ion crystals of 100s of ions involves an NP-hard optimization problem, rendering scaling up the number of qubits a conceptual challenge as well. Here we introduce a method that vastly reduces the computational challenge, effectively allowing for a polynomial-time design of fast and programmable entanglement gates, acting on the entire ion crystal. We use this method to investigate the utility, scaling and requirements of such multi-qubit gates. Our method delineates a path towards scaling up quantum computers based on ion-crystals with 100s of qubits.
Fast design and scaling of multi-qubit gates in large-scale trapped-ion quantum computers
Roee Ozeri^1
August 12, 2023
=========================================================================================
§ INTRODUCTION
Trapped ion quantum computers are a leading quantum computation platform, owing their success to the accurate control of individual ions, long-range connectivity and long coherence times. Despite their all-to-all connectivity, linear ion crystals of growing length present increasing difficulty in implementing high-fidelity and high-speed entanglement gates. Some trapped-ion scale-up architectures circumvent this challenge by interconnecting separate ion crystals, either by ion shuttling between segments in a quantum charge coupled device (QCCD) architecture <cit.> or by photonic interconnects <cit.>. However, both these approaches to scalability will benefit from working with longer ion crystals as their basic building block, by taking full advantage of the inherent long-range connectivity of the ions and its expected benefits <cit.>.
Quantum information processing devices, based on crystals of 10s to 100s of trapped ions, have recently been implemented <cit.>, overcoming hurdles such as crystal stability, cooling and coherence. Nevertheless, a prominent challenge which remains unaddressed is the design of multi-qubit entangling gates that are not hindered by the overwhelming spectral density of the normal modes of motion in large ion crystals. Specifically, designing the required control signals that generate high-fidelity, programmable, fast and robust multi-qubit entangling gates is a quadratically constrained NP-hard optimization problem <cit.>, making the study of feasibility and scaling of large ion crystals a formidable challenge.
Here we introduce a method, coined large-scale fast (LSF), which efficiently designs multi-qubit entangling gates for large-scale ion crystals, enabling scaling up the trapped ion quantum processors to 100s of qubits in a single crystal. We show that a solution of a special instance of the optimization problem can be efficiently converted, using a linear transformation and local optimisations, to any other required entangling operation on the same system. Thus we find suitable approximate solutions in polynomial time. We use the LSF method to efficiently generate programmable multi-qubit XX-type entanglement operators and accumulate performance statistics of various coupling geometries such as all-to-all interactions, surface code stabilizer measurements, parallel pairwise gates, among other examples. We highlight that programmable XX entangling gates, also known as 'Ising' or 'global tunable' gates, have been shown to be advantageous for improving the performance of quantum error correction codes <cit.>, as well as for compilation of quantum Fourier transforms <cit.>, Clifford unitary operators with a gate count that is independent of the qubit register size and N-qubit Toffoli gates <cit.>.
We further show that while a crystal of N ions can have 𝒪(N^2) types of different two-qubit gate interactions, naively resulting in a problem with dimension N^2, it is in fact sufficient to only solve N quadratic constraints in order to find solutions to any required XX gate. This enables efficient study of the scaling of various properties, such as gate time, fidelity and required power, with ion-crystals of 100's of ions.
The LSF method has enabled us to study many types of couplings in large ion crystals and investigate their performance. This has generated a better understanding of the application of the entanglement operations, namely their advantages, limits and required resources. We show that the minimal entanglement time is determined by the smallest difference between the frequency of motional modes which are used in the gate, Δν_<. That is, T_min=2πΔν_<^-1. This scaling is intuitive as it corresponds to the time it takes the slowest phonon wave-packets to traverse the entire ion crystal. For transverse modes of N equally-spaced ions this implies that T_min∝ N^2. Below that time scale, the solution we find requires a divergent power, and does not reach high gate fidelity.
Our analysis also presents an estimate of the power that is required in order to drive different gates. Specifically, we show that the power required to drive an arbitrary multi-ion entanglement operation can be predicted by extrapolating from the power required to drive the same operation in the adiabatic regime on a different ion-crystal system, in which this operation can be performed with global driving beams <cit.>. That is, many system details, such as ion participation in the modes of motion, or ion-qubit mapping, do not substantially affect the required power. This is quantified by a nuclear norm estimate, Ω_nuc, of the coupling matrix φ_n,m that determines the entangling phase between ions n and m, i.e. by the sum of the absolute values of its eigenvalues, as detailed below.
Figure <ref> exhibits these results, showing the total Rabi frequency required to drive entanglement gates, which vary in number of ions in the ion-crystal (color) and the types of gates used, as a function of the entanglement time. Each point in the plot corresponds to a specific crystal size and desired entanglement operation, and shows the gate performance averaged over 50 to 150 different solutions. We highlight that LSF enables a straightforward design of multi-qubit entangling gates over an N=100 qubit register. Both axes are scaled appropriately, such that all the data collapses approximately on the same unity slope line, showing that we are able to generate correct estimates for the required gate power, as well as the minimal time for which our method is effective.
Figure <ref> also highlights a specific realization (green star) which implements the entanglement gate necessary in order to perform a parallel stabilizer measurement of all relevant qubits, for an N=49 quantum error correction surface code. This realization is analyzed in further detail below.
Lastly, while the character of the normal-modes of motion of trapped ion crystals is global, i.e. in general all ions participate in all modes, we show that the application of our method results in a reduction of the motion of ions which are not used in the operation. This is in stark contrast to the conventional method of entangling ions using a center-of-mass mode of motion, in which ions which are not driven and remain decoupled from the gate, are nevertheless displaced by the same amount as ions which are driven and coupled.
The remainder of the paper is organized as follows: we first describe the derivation of the LSF method, then analyze the total required Rabi frequency and derive the nuclear-norm-based estimate. Lastly, we focus on a specific realization (green star in Fig. <ref>) in order to highlight certain aspects of its operation.
§ DERIVATION OF THE LARGE-SCALE FAST METHOD
Trapped ion quantum computers use the normal-modes of motion of the ion crystal as a phonon bus, which mediates interactions between the ion qubits. This is performed by driving the motional sidebands of the ion crystal, which generates spin-dependent forces. A canonical example of this method is the Mølmer-Sørensen (MS) gate <cit.>.
In recent years there have been many proposals and demonstrations which were focused on improving the utility and fidelity of MS gates. These methods are, at large, based on modulating spin-dependent displacement forces. This modulation may be implemented with various methods such as amplitude <cit.>, frequency <cit.> or phase modulation <cit.> of the fields driving the ions. Using the LSF method we analyze the problem in the spectral domain <cit.>. That is, we fix a discrete set of frequency components of the radiation field driving the ions, and choose the (complex) amplitudes at each of these components according to the desired entangling operation. Since all other modulations can be decomposed in a Fourier series, the spectral representation is equivalent to an analysis in the time domain. Our method is therefore general.
The spectral approach taken here does offer conceptual advantages as it is easier to manipulate analytically <cit.> and offers physical intuition. Our approach is relevant to any form of qubit encoding; e.g. ground state, optical or metastable qubits <cit.>, qubit drive; e.g. Raman, optical or laser-free, and architecture; e.g. global beam <cit.> or individually controlled ions.
To generate a high-fidelity entangling gate, a qubit state which is initially unentangled with the motional degrees of freedom must remain so after the gate is performed. This requirement turns out to be linear in the drive amplitudes, and is relatively easy to satisfy. Additional linear constraints may be added in order to make the entanglement operation robust to various sources of error and noise (see Appendix A). In favor of a simple presentation, in what follows we assume that our degrees of freedom are written in a form that by-construction satisfies these constraints (see Appendix B).
Since the effective qubit-qubit interaction is quadratic in the driving field's amplitude <cit.>, generating a desired entangling operation reduces to the NP-hard quadratic optimization problem,
argmin |r| such that r^TA_nr=φ_n ∀ n=1,..,𝒩.
with |r| the norm of r∈ℝ^ℳ, a vector representing ℳ amplitudes, satisfying 𝒩 quadratic constraints, with {A_n∈ℝ^ℳ×ℳ}_n=1^𝒩 a set of real symmetric matrices, determined by the system parameters and {φ_n}_n=1^𝒩 a set of phases which encode the desired entangling operation. Simply put, we are seeking for the lowest power realization of the entangling operation.
The LSF method is general, such that the choice of system architecture determines the specific interpretation of r and the A_n's. For example, we might use the setup considered in Ref. <cit.>, which generates entangling gates for N ions using a global beam. Here, however, we focus on a more general setup, namely we consider N ions which are individually addressed by N independent driving fields, each having its own spectral content. We assume, without loss of generality, that all these spectra contain the same tones yet differ by the amplitudes of these tones (which could be null). This architecture enables qubit-qubit interactions which are mediated through the normal modes of motion and implements programmable XX entangling gates, i.e. unitary operators of the form U=exp(i∑_n,mφ_n,mσ^n_xσ^m_x), with σ^n_x the Pauli-x operator acting on the nth qubit and φ_n,m the 'target' matrix, which is completely controllable. Accordingly, this yields 𝒩=𝒪(N^2) quadratic constraints in direct correspondence with the target matrix, and r∈ℝ^N× M, with M∝ N, describing the distinct amplitudes of M frequency pairs independently driving each of the N ions.
We remark that the relevant time-scale for the gate-time is given by Δν_<, defined as the smallest difference between adjacent motional mode frequencies, which are used for the entanglement operation. Indeed in the adiabatic limit, i.e. for a gate time, T, such that Δν_< T≫1, the set of coupling matrices, {A_n}_n=1^𝒩 become diagonal, such that satisfying the quadratic constraints is trivial (see Appendix C). However due to the crowding of the mode frequencies in large ion crystals this results in impractically slow gates. Indeed as 2πΔν_< T→1 the tones of the driving field strongly interact with many modes and the coupling matrices are in general dense, making the optimization problem non-trivial.
Using LSF, the NP-hard problem becomes independent of the desired gate, i.e. independent of {φ_n}_n=1^𝒩. We first explain this intuitively. We assume a non-trivial and normalized 'zero-phase solution' satisfying the constraints in Eq. (<ref>) with the 'zero' target φ_n=0 for all n=1,...,𝒩, and denote it by z, with |z|=1. This solution can be scaled by any real number, i.e. λz still satisfies the zero target. For a large enough λ a small deviation, d, from the zero-phase solution, i.e. r=λz+d, generates arbitrarily large entanglement phases. Thus we linearize the quadratic constraints in the vicinity of λz and solve a linear equation for d that satisfies the constraints for a general target, φ. This allows us to 'convert' the zero-phase solution to a 'full' solution of any desired target. Crucially, the linear equation depends on φ but the linearization does not, allowing for a quick conversion from the zero-phase solution to any solution of a general target. Accordingly, the ansatz solution satisfies the quadratic constraints of the full problem, but is still not necessarily optimal, in terms of its amplitude. Thus further optimization is performed by an iterative gradient descent, obtained by linearizing the quadratic equations.
Figure <ref> shows this intuitive picture with a single quadratic constraint, i.e. 𝒩=1, and two tones, i.e. ℳ=2. Specifically we use A_1=diag(1,-2) and φ_1=2, such that the constraints are satisfied along a hyperbola (dashed black) in terms of the two amplitudes, r_1 (horizontal axis) and r_2 (vertical axis). Zero-phase solutions extend from the origin to arbitrarily large amplitudes (dashed gray). We use a large zero-phase solution, λz (blue arrow), such that a small deviation from it, d (purple arrow), explores various values of φ_1. Indeed a converted solution, generating the desired target, φ_1=2, is found close by (green arrow). By linearization of the hyperbloid we perform gradient descent to gradually locate the locally ideal solution (black).
Specifically, we assume λ is large, defined precisely below, and set r, defined above, in the constraints of Eq. (<ref>). We obtain,
φ_n=2λz^TA_nd+𝒪(ϵ),
with the assumption |d^TA_nd|≤ϵ, and ϵ the desired operation infidelity. The expression in Eq. (<ref>) defines an easy linear equation, φ=Md, with M_n,k=2(z^TA_n)_k, n=1,...,𝒩, and k=1,...,ℳ. It is solved by,
r^convert=λz+λ^-1M_pinv^-1φ,
with M_pinv^-1 the pseudo-inverse of M. We set λ such that our linearization is consistent, i.e.,
|(1/λM_pinv^-1φ)^TA_j(1/λM_pinv^-1φ)|≤ϵ.
The approximate solution in Eq. (<ref>) satisfies the quadratic constraint, up to order ϵ, but is still not optimal in terms of its magnitude. We iteratively improve it by a series of linear gradient descent steps that act to better satisfy the constraints and reduce the magnitude of the solution. This is done by defining the iteration, r^(n)=r^optimal-d^(n+1), with r^optimal the unknown local solution of the optimization problem and r^(0)= r^convert. At each step we calculate the constraint error, Δφ^(n+1)_j=φ_j-(r^(n))^T A_j r^(n). We then calculate the next correction by using,
Δφ_j^(n+1)=2(r^(n))^T A_j d^(n+1)+𝒪(ϵ).
As above the expression in Eq. (<ref>) defines a linear relation which can be inverted and solved for the correction, d^n+1. This will generate a solution which better satisfies the quadratic constraints. In order to also minimize its magnitude we use an additional linear condition, r^(n)·d^(n+1)=-δ|r^(n)|, i.e. the correction d^n+1 acts to reduce |r^n| by a small numerical step, δ. All in all the linear iteration takes the form,
[ Δφ^(n); -δ|r^(n)| ]=[ M^(n); r^(n) ]d^(n+1),
with M^(n)_j,k=2((r^(n))^T A_j)_k. Finally we set r^(n+1)=r^(n)+d^(n+1).
These results imply a recipe for efficiently generating solutions of Eq. (<ref>). Namely, we aggregate many distinct zero-phase solutions (see Appendix D). Then, given a desired target gate, represented by φ, we obtain a solution, r, by using Eq. (<ref>) and the linear iteration in Eq. (<ref>). This process can be done rapidly and in parallel for all of the aggregated zero-phase solutions, out of which the best performing solution is chosen.
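To make the recipe concrete, the following Python sketch implements the conversion of Eq. (<ref>) and the subsequent linearized descent of Eq. (<ref>) with generic numerical tools; the tolerance eps, the step size delta and the fixed iteration count are illustrative choices rather than values taken from our implementation.

import numpy as np

def convert_and_refine(z, A, phi, eps=1e-8, delta=1e-2, n_iter=200):
    # z: unit-norm zero-phase solution (z^T A_n z = 0 for all n).
    # A: array of shape (num_constraints, dim, dim) holding the symmetric A_n.
    # phi: target entanglement phases, shape (num_constraints,).
    A = np.asarray(A)
    # Linearization around lambda*z: phi_n ~= 2*lambda * z^T A_n d.
    M = 2.0 * np.einsum('i,nij->nj', z, A)
    d0 = np.linalg.pinv(M) @ phi
    # Choose lambda large enough that the neglected quadratic term is below eps.
    quad = np.abs(np.einsum('i,nij,j->n', d0, A, d0)).max()
    lam = np.sqrt(max(quad / eps, 1.0))
    r = lam * z + d0 / lam                      # converted solution
    for _ in range(n_iter):                     # linearized descent on |r|
        resid = phi - np.einsum('i,nij,j->n', r, A, r)
        Mn = 2.0 * np.einsum('i,nij->nj', r, A)
        lhs = np.vstack([Mn, r[None, :]])
        rhs = np.concatenate([resid, [-delta * np.linalg.norm(r)]])
        r = r + np.linalg.pinv(lhs) @ rhs
    return r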
Lastly, we note that aggregation of zero-phase solutions may be performed under the assumption of a global drive, i.e. that the spectrum of all of the ions is identical. This recovers the setup of Ref. <cit.> in which only N quadratic constraints, associated with the phase accumulated by the N modes of motion, are required to vanish. Nevertheless, these zero-phase solutions can then be readily converted to full solutions of independently driven ions, avoiding the need to ever solve a dimension N^2 quadratically constrained problem. Remarkably, the performance of our solutions that are converted from zero-phase solutions of an N-dimensional quadratic ansatz have a similar fidelity and power, compared to solutions that are converted from zero-phase solutions of an N^2-dimensional quadratic problem, representing the full degrees of freedom of the system (see Appendix G).
§ ENTANGLEMENT SCALABILITY IN LARGE ION CRYSTALS
We use LSF in order to investigate performance and scalability of entanglement operations in large ion crystals. To this end we consider many trapped ions systems which vary in number of ions, operations time, drive spectra, etc. For each such system we aggregate approximately 150 zero-phase solutions and convert them to full solutions of many entanglement targets, φ_n,m.
We first observe that we find many zero-phase solutions for systems for which T>T_min=2πΔν_<^-1, with Δν_< the smallest frequency difference between adjacent normal modes of motion. For ions coupled via transverse modes this yields T_min∝ N^2. The N^2 dependence originates from the divergence of the density of states of the phonon modes at the part of the motional spectrum in which ν∝ k^2. This yields one factor of N due to the density of states, and another factor of N due to the crystal length.
Indeed, Fig. <ref> shows solutions for various number of ions (color) and operation time (horizontal), scaled by T_min. For times T<T_min we either do not find zero-phase solutions or find solutions with a seemingly diverging power and high infidelity of the converted full solutions. We note that this limit is determined by the ion crystal spectrum and is agnostic to the target unitary, i.e. entangling ions at the two edges of the crystal can be performed at the same minimal gate time as that of neighbouring ions. This stems from the fact that the modes of motion of the ion crystal are global, i.e. involve all ions in the crystal, and that in the fast-gate limit all of the modes are excited. The minimal gate time, of high-fidelity realizations, is therefore given by the time it takes the slowest sound mode to traverse the crystal.
We benchmark the solutions produced by LSF using a small scale, N=4, simulation. The simulation takes into account the evolution of the ions and the phonon modes under the model Hamiltonian (see Appendix A), as well as next-order corrections, such as off-resonance coupling to the qubit carrier transition and higher-order Lamb-Dicke terms. All of the simulated gates exhibit a high operation fidelity, matching the performance predicted by LSF (see Appendix E).
Next we consider the total Rabi frequency required to drive the entanglement gate, which we quantify as |r|≡√(∑_n,m(r_n)_m^2). Since the gate design stems from an NP-hard problem, one would expect that predicting the required total Rabi frequency, before solving the optimization problem, would be challenging. Nevertheless such a prediction is useful, as it can be used for system design and as an a-priori stopping criterion for the gradient descent process.
We address this challenge by the following analysis. We first recall the expression for the Rabi frequency required to drive a MS gate, Ω_MS=√(|φ_MS|)/√(2π)η T, with φ_MS the entanglement phase, and η∝ν^-1 the Lamb-Dicke parameter corresponding to the mode of motion at frequency ν. This expression is valid in the adiabatic limit, i.e. for T≫ν^-1,Δν_<^-1. We then generalize Ω_MS to a system in which the target φ_n,m is native, i.e. it can be implemented with a global driving field. This is achieved in the case where the normal-modes of motion of the ion crystal are the eigenvectors of the matrix φ_n,m <cit.>, and each mode accumulates a phase that is the corresponding eigenvalue. We incorporate this change by replacing φ_MS↦∑_j=1^N|φ_j|, with {φ_j} the eigenvalues of φ_n,m. This sum is known as the 'nuclear norm' of the matrix φ_n,m. Furthermore we replace η with the average η over all modes of motion. When we go beyond native targets, we replace all the elements of φ_n,m with their absolute value. Lastly, for independently driven ions we expect the power to scale linearly with N. Thus, we conjecture an estimate, Ω_nuc,
Ω_nuc=k√(N nuc|φ_n,m|)/√(2π)⟨η⟩ T,
with nuc|·| the nuclear norm of a matrix with its elements taken in absolute value, and k a constant which depends on the choice of implemented linear constraints (e.g. additional linear constraints ensuring gate robustness).
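As a minimal illustration, the estimate of Eq. (<ref>) can be evaluated directly from a target matrix; the sketch below uses the fitted value k=4 reported below and leaves the choice of units to the caller.

import numpy as np

def omega_nuc(phi, eta_mean, T, k=4.0):
    # Nuclear norm of the element-wise absolute value of the target matrix,
    # i.e. the sum of the absolute values of its eigenvalues (phi is symmetric).
    N = phi.shape[0]
    nuc = np.abs(np.linalg.eigvalsh(np.abs(phi))).sum()
    return k * np.sqrt(N * nuc) / (np.sqrt(2.0 * np.pi) * eta_mean * T)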
We benchmark this estimate on our set of solutions, and find that this naive estimate gives surprisingly accurate prediction of |r|, with the proportionality constant in Eq. (<ref>) fitted to k=4. Indeed, Fig. <ref> shows various entanglement targets (points), on a N=49 ion crystal, which are separated to different conceptual groups: randomly generated patterns (blue), parallel pairwise interactions (orange), all-to-all interactions of subsets of the crystal (red), arrangements of subsets of the ions in a grid, to form a cluster state (purple) and the entanglement operation required for a stabilizer measurement of a surface code, on subsets of the ions (green).
All of these solutions are based on the same collection of zero-phase solutions, and implement the gate time at T≈2T_min≈12.9ms. For each of these solutions we show the average (point) total Rabi frequency obtained by the conversion of the many zero-phase solutions. We compare our nuclear-norm conjecture in Eq. (<ref>) (horizontal) with LSF's result (vertical), showing that all the solutions collapse on a line. Our conjecture predicts this collapse to occur on a line of unity slope (black dashed). However we observe that the actual results slightly deviate. We correct for this deviation by changing the square root of the nuclear norm, in Eq. (<ref>), to an arbitrary exponent, which is numerically fitted to 0.551 (gray dashed), and matches well with the data.
Remarkably, the data collapse shown in Fig. <ref> implies that the details of the mode structure and frequency are largely irrelevant to the required Rabi frequency. The analogy to a globally driven system provides an intuitive explanation: Driving the ions independently is equivalent to an effective 'reshaping' of the participation of each ion in each of the normal-modes, such that the resulting reshaped mode structure fits the required operation better. Here indeed, this reshaping causes the nuclear norm conjecture to match closely with the full computation, with a small overhead, that comes about as a modification of the exponent of the nuclear norm. We emphasize that while this picture is intuitive, it could not have been verified without the ability to compute optimal, multi-mode, entanglement gates on large ion crystals, afforded by LSF.
We remark that the conjectured estimate based on the nuclear norm assumes that the modes of motion of the ion crystal are global, i.e. that in general all ions are coupled to each other. Indeed, in a pathological case in which all ions oscillate independently from one another, ion-ion entanglement is impossible, yet the nuclear norm estimate will not diverge. Furthermore, we note that other known matrix norms operating on φ_n,m do not generate the data-collapse shown in Fig. <ref> above.
§ SURFACE CODE STABILIZER OPERATION
We demonstrate our method and highlight certain aspects of it via an example. Specifically, we outline the entanglement gate required for stabilizer measurements in surface codes. Here we consider a 49-ion crystal, entangling 33 ions using a single pulse. Figure <ref> shows the formation of the stabilizer (left): by setting appropriate non-zero coupling phases (black arrows) we map the ion crystal (top) to a 7×7 square grid (bottom), and form nine plaquettes (yellow) which can be used to evaluate the X-parity of the plaquette vertices. In this straightforward mapping some ions remain uncoupled (orange) and can be used in other subsequent operations, some form the edges of plaquettes (blue) and some are designated as ancilla qubits (dark blue).
It was shown in Ref. <cit.> that a surface code stabilizer measurement can be implemented with a single multi-qubit MS gate. Note, however, that our coupling map between ions involved in a stabilizer measurement is not all-to-all; rather it takes the form of a 'cross'. This is in fact more efficient as it requires fewer non-zero entanglement phases. Indeed the nuclear norm of the cross coupling map is two times lower than that of the all-to-all coupling.
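The factor of two can be checked with a short calculation; the sketch below assumes a weight-four check read out through a single ancilla and unit entanglement phases, which is an illustrative simplification rather than the exact phases used in the gate.

import numpy as np

nuc = lambda m: np.abs(np.linalg.eigvalsh(m)).sum()   # nuclear norm of a symmetric matrix
n = 5                                                  # one ancilla + four data qubits
cross = np.zeros((n, n)); cross[0, 1:] = cross[1:, 0] = 1.0
all_to_all = np.ones((n, n)) - np.eye(n)
print(nuc(all_to_all) / nuc(cross))                    # prints 2.0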
Here we assume an equally spaced crystal of ^40Ca^+ ions with an inter-ion distance of 5μm. Qubits are mapped on the ground state Zeeman 5S_1/2 manifold and are driven with a 400nm laser field using a Raman transition, which couples the ions using transverse modes of motion, at frequencies of 3 MHz to 3.5 MHz. We choose an operation time T≈2 T_min≈12.9ms. We use our method as prescribed above in order to generate the required entanglement gate. In order to effectively benchmark our method we aggregate 150 zero-phase solutions which are all converted to solutions of the full optimization problem and analyzed. Figure <ref> (right) shows the expected infidelity (horizontal) and required total Rabi frequency (vertical) to drive these solutions, normalized to the frequency of the first mode of motion, at 3 MHz. All solutions exhibit a low infidelity, defined as,
I=∑_n,m(Δφ_n,m)^2=∑_n,m(φ_n,m^ideal-φ_n,m^actual)^2.
This definition is an operational distance measure between the ideal entanglement phases, φ_n,m^ideal and the phases achieved by our method, φ_n,m^actual. At the limit of small phase differences this definition converges to the actual operation's infidelity <cit.>.
For the sake of comparison, in this setup a conventional MS gate with the same gate time and coupled to the first motional mode, ν_1, requires a total amplitude of approximately 0.108ν_1 and will have a low gate fidelity, due to operating outside of its adiabatic regime. In contrast, all of our solutions exhibit a similar total amplitude, yet feature a high fidelity, with few outliers. This implies that it is not necessary to aggregate many zero-phase solutions, as they are in general similar in performance. We highlight a low-power solution (green star) which is further analyzed below.
We consider the spectrum given by the highlighted solution (green star in Fig. <ref>). Some of the ions are decoupled from the stabilizer measurement (orange in Fig. <ref>) and are not required to be illuminated or driven at all by a spin-dependent force. Indeed, Fig. <ref> (left) shows the total power that is driving each ion, showing that the decoupled ions are not driven. We emphasize that our optimization was not constrained to provide such a solution, i.e. the conversion of the zero-phase solution and the subsequent gradient descent have automatically converged to shutting off the illumination of decoupled ions. Furthermore, we note that ancilla ions, i.e. ions 8, 10, 12, 22, 24, 26, etc. (dark blue in Fig. <ref>), which have more couplings to ions in the crystal, are accordingly driven with a stronger field.
Next we consider the average spectrum driving the ions, shown in Fig. <ref> (right). The coupling pattern required for the surface code, when mapped to the linear crystal, requires significant variation between adjacent ions (e.g some links are nearest neighbours and some are long range), which are more efficiently formed with high wave-number modes of motion. Indeed the drive tones (blue) are clearly focused around these modes (olive), which for transverse normal-modes, reside at the low-frequency end of the spectrum. These modes also have a slightly larger Lamb-Dicke parameter, which quantifies the coupling between the drive and spin-dependent displacement, thus they better utilize the drive power, and practically a lower heating rate (although not considered in this analysis), resulting in a high-fidelity realization. Figure <ref> (right) also shows that we only make use of driving tones which are close, compared to T^-1, to motional modes. We have seen that these tones are the main contributors to the gate solution, and thus enable a reduction in the number of degrees of freedom used in the optimization.
Lastly we consider the ion-displacement, X_n, induced by the drive implemented on the ions. The ion-displacement is spin-dependent, i.e. its direction and magnitude depend on spin-projections along the Pauli-x axis. Since the qubit ground state is composed of an equal superposition of all eigenstates along the Pauli-x axis then, by symmetry, the mean displacement vanishes. However the time-dependent variance of the spin displacement, is non-vanishing, and is given by (see Appendix F),
⟨X_n^2⟩_t=∑_j=1^N (O_j^n)^2⟨x_j^2⟩_t,
with ⟨x_j^2⟩_t the time-dependent variance of the displacement of mode j and O_j^n the normalized participation of ion n in mode j (see Appendix F).
Figure <ref> (left) shows √(X_n^2_t), averaged over ions, with ions which are used in the stabilizer operation (blue) and ions which are uncoupled (orange), as a function of time. Since the modes of motion couple, in general, all ions in the crystal, then ions which are not illuminated still exhibit motion, however this motion is reduced compared to the ions which are illuminated. This highlights an apparent advantage of independently addressed ions, namely the spectra which drives each ion is effectively used to tune the participation of each ion in each mode of motion, generating an optimal realization of the operation. Furthermore we consider the variance, averaged over time (right), which reveals that uncoupled ions (orange) in general have a smaller displacement. Furthermore, the inversion-symmetry character of the modes of motion is apparent and manifests as a symmetry in the deviation, i.e. the motions of ion n and N-n are identical. This implies that an efficient mapping from the linear crystal to the implemented models can be used in order to minimize unnecessary motion.
In summary, we have introduced a method, large-scale fast, which helps mitigate some of the NP-hardness of designing large-scale entanglement gates for trapped ions qubits. Our method requires a few initial special solutions, solving N, and not N^2, quadratic constraints, which are then readily converted to entanglement gates of arbitrary programmable targets. This allows us to construct specific interactions, such as parallel stabilizer measurements for quantum error correction surface codes. Furthermore, we use our method in order to investigate various aspect of multi-qubit entanglement gates of large ion crystals. Our method delineates a path towards trapped ion crystals of 100s of ions and offers a resolution to the gate-design problem.
This work was supported by the Israel Science Foundation and the Israel Science Foundation Quantum Science and Technology (Grants 2074/19, 1376/19 and 3457/21).
§ APPENDIX A: MODEL DERIVATION
We detail the derivation of the model used to formulate the optimization problem in Eq. (<ref>). In our analysis below we rely on and generalize the derivations provided in <cit.>.
We consider a general spectrum of tones which drive independently N trapped ions.
Without loss of generality, the driving applied to different ions may be regarded as composed of the same tones, differing only by the tone amplitudes. These tones are placed symmetrically around the single qubit transition frequency, ω_0. The field driving the nth ion is,
w_n(t)= 2cos(ω_0 t+ϕ_0)∑_m=1^M[(r_n^c)_mcos(ω_m t) +(r_n^s)_m sin(ω_m t)].
That is, each spectrum contains 2M components at frequencies {ω_0±ω_m}_m=1^M. The amplitude of the cosine (sine) mth tone pair illuminating the nth ions is r_n^c_m (r_n^s_m) and the phase of each tone in the pair is ϕ_0± 0 (ϕ_0∓π/2), such that all N× M tone pairs have the same average phase, which generates a correlated rotation around the Pauli sinϕ_0σ_x+cosϕ_0σ_y axis. For simplicity we assume that ϕ_0=π/2 such that the relevant Pauli operator is σ_x. The motional mode phase space trajectories generated by the cosine and sine quadrature are relatively rotated by π/2.
All in all this is captured by the lab-frame Hamiltonian (ħ=1),
H =H_0+V,
H_0 =∑_j=1^N [ν_j (a^†_j a_j+1/2)+ω_0/2σ_z^j],
V =2Ω∑_n=1^N σ_x^ncos(k x_n-ω_0 t)∑_m=1^M[(r_n^c)_m cos(ω_m t)+(r_n^s)_m sin(ω_m t)],
with a_j the annihilation operator associated with jth normal mode of motion, at frequency ν_j, σ_i^n the i-Pauli operator acting on the nth ion, k the driving field's wave number, x_n the position operator of the nth ion and Ω a characteristic Rabi frequency. Clearly the last parenthesis in Eq. (<ref>) can be written as a single cosine term with a phase, however this form preserves a crucial aspect of our formulations, i.e. the exclusive linear and quadratic dependence on the amplitudes. Furthermore, here we have assumed that we are coupled to motional modes along one principle direction of the trap, such that the summation on modes is up to N (and not 3N), this assumption can be easily relaxed <cit.>.
By using a conventional set of approximations, namely the rotating wave approximation in Ω/ω_0, the Lamb-Dicke approximation, and by neglecting carrier-coupling terms the lab-frame Hamiltonian is converted to the interaction Hamiltonian,
H_I =Ω∑_n=1^N σ_x^n∑_j=1^N (η_j O_j^na_j^† e^iν_j t+H.c.)∑_m=1^M[(r_n^c)_m cos(ω_m t)+(r_n^s)_m sin(ω_m t)],
with O_j^n the normalized participation of the nth ion in the jth mode of motion, such that ∑_j=1^N O_j^nO_j^m=δ_n,m, and η_j the single-ion Lamb-Dicke parameter associated with the jth mode of motion (it is sometimes conventional to define η_j^n=η_j O_j^n).
The Hamiltonian in Eq. (<ref>) can be rearranged in the form, H_I=∑_n=1^N∑_j=1^N ζ_j^ntσ_x^na_j+H.c, with ζt a time dependent function read-off directly from Eq. (<ref>). That is, it has only σ_x spin operators and is linear in the mode raising and lowering operators, it is therefore analytically solvable. Specifically its Magnus expansion vanishes after the second order. The resulting unitary evolution operator is the well-known combination of spin-dependent displacement of the motional modes and spin-exclusive correlated rotation,
U=∏_j=1^N D_j(∑_n=1^Nσ_x^nα_j^n)∏_n,m=1^N e^iφ_n,mσ_x^nσ_x^m,
with D_j(α)=exp(α a_j^†-α^∗ a_j) a displacement operator which translates the jth mode by α_j^n, with,
α_j^n = -iη_jO_j^(n)Ω∑_m=1^M∫ _0^tdt^' e^iν_jt^'[(r_n^c)_mcos(ω_mt^')-(r_n^s)_msin(ω_mt^')]
and entanglement phases,
φ_n,m =r_n^TA_n,mr_m
A_n,m =-Ω^2∑_j=1^Nη_j^2O_j^(n)O_j^(m)[ A_j^cos,cos A_j^cos,sin; A_j^sin,cos A_j^sin,sin ]
(A_j^f,g)_m,l =-∫_0^tdt_1∫_0^t_1dt_2 sin(ν_j[t_1-t_2])[f(ω_mt_1)g(ω_lt_2)+f(ω_mt_2)g(ω_lt_1)]
with r_n=(r_n^c, r_n^s) and t=T the entanglement operation time.
Using the definitions in Eqs. (<ref>) and (<ref>) we observe that the mode displacement is linear in the field amplitudes while the two-qubit rotation phase is quadratic in them. While the linear constraints resulting from the former are discussed in the next Appendix, the quadratic constraints resulting from the latter lead to the optimization problem in Eq. (<ref>).
§ APPENDIX B: GATE HARMONICS AND LINEAR CONSTRAINTS
We discuss convenient choices for the tones {ω_m}_m=1^M and show how to write the degrees of freedom in a form that by construction satisfies the linear constraints.
Driving the entanglement operation with a multi-tone representation of the field carries a lot of physical intuition. Specifically, it is beneficial to choose the tones ω_m in the vicinity of the mode frequency band {ν_j}_j=1^N, since the coupling between tones and modes scales inversely with their frequency difference. Thus in our demonstrations above we choose ω_1 slightly below the smallest mode frequency and ω_M (assuming the tones are ordered) slightly above the largest mode frequency, defined precisely below.
We expect the field amplitude to vanish before t=0 and after t=T, therefore a convenient choice of tones is the harmonic basis, i.e. ω_m=2π/Th_m, with h_m∈ℕ the tone number. This also makes the method's speed limit apparent: in order to efficiently differentiate between the effect of adjacent modes a tone must be placed between them, thus the characteristic minimal gate time scales as Δν_<^-1 as shown in the main text. We note that the harmonic choice also simplifies the evaluation of the integrals in Eqs. (<ref>) and (<ref>).
In the main text we state that the optimization problem needs to satisfy both linear and quadratic constraints, however the former may be solved by-construction. Indeed, in order to ensure that at the operation time, t=T, the state of the motional mode is the same as in t=0 we therefore require that the displacement operators, D(α_j^(n)(T)), in Eq. (<ref>) are unit operators. Crucially, this ensures that an initial state in which the qubit and motion degrees of freedom are decoupled, remain decoupled after the operation. This is satisfied by requiring that,
α_j^n(T)=0 ∀ j,n=1,...,N.
Focusing on the linear constraints, we note that while Eq. (<ref>) naively contains N^2 constraints, they can all be solved by restricting r_n to the kernel of L, i.e. requiring Lr_n=0 for all n=1,..,N, with
L_j,m=∫ _0^tdtcos(ν_jt)cos(ω_mt) for m=1,...,M,
L_j,m=∫ _0^tdtsin(ν_jt)sin(ω_mt) for m=M+1,...,2M,
and j=1,...,N. This restriction removes the linear requirement from the optimization problem in Eq. (<ref>). Furthermore, the matrices A_n,m and A_j can be easily transformed to the kernel space of L such that their dimension is reduced and their evaluation is faster. The kernel space can be either found exactly or approximately under some infidelity tolerance <cit.>.
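A sketch of this construction is given below; it assembles the matrix L by numerical quadrature and uses a standard null-space routine, which is one of several possible ways to obtain the (possibly approximate) kernel basis mentioned above.

import numpy as np
from scipy.integrate import quad
from scipy.linalg import null_space

def constraint_kernel(nu, omega, T):
    # nu: mode frequencies, omega: tone frequencies; columns 0..M-1 hold cosine
    # amplitudes and columns M..2M-1 hold sine amplitudes, as in the text.
    N, M = len(nu), len(omega)
    L = np.zeros((N, 2 * M))
    for j, nj in enumerate(nu):
        for m, wm in enumerate(omega):
            L[j, m] = quad(lambda t: np.cos(nj * t) * np.cos(wm * t), 0.0, T)[0]
            L[j, M + m] = quad(lambda t: np.sin(nj * t) * np.sin(wm * t), 0.0, T)[0]
    return null_space(L)   # columns form an orthonormal basis of {r : L r = 0}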
The entanglement operation can be endowed with additional properties that ensure its robustness to various sources of errors and noise such as unwanted coupling to the carrier, and other transitions, pulse timing errors, phonon-mode heating, phonon frequency drifts etc. These may be added as additional rows of the matrix L <cit.>.
Out of these properties, decoupling of the carrier transition is necessary in order to justify the resulting interaction in Eq. (<ref>). A convenient way of doing so is by letting the driving fields rise and fall continuously at t=0 and t=T, respectively, with the choice r_n^c=0 for all n=1,..,N, which further simplifies all of the expressions above. Here we set r_n↦r_n^c=0 and for simplicity identify r_n with r_n^s and A_j with A_j^sin,sin. We remark that with this choice the drive profile is not time-symmetric, while there are known advantages for using a time-symmetric drive <cit.>.
§ APPENDIX C: TRIVIAL SOLUTION IN THE ADIABATIC LIMIT
In the slow gate limit, Δν_< T≫1, satisfying the quadratic constraints is trivial and any bipartite qubit-qubit coupling can be achieved. Here we prove this directly by constructing such a solution. In this limit it is helpful to choose spectral tones containing frequencies of the form ω_s↦ω_j,s=ν_j+2π s/T with s∈ℕ. We have doubled the index s to j,s for convenience. Since T is large we can safely assume that the tone ω_j,s couples exclusively to mode j and satisfies the linear constraints by construction. Furthermore, with this choice the matrices A_j become diagonal, i.e. each of the tones coupled to mode j contributes to the entanglement phase independently, and scales as s^-1. In other words, (A_j)_(j^',s^'),(j^'',s^'')∝δ_j,j^'δ_j,j^''δ_s^',s^''(s^')^-1 <cit.>.
We are required to generate N(N-1)/2 bipartite entanglement phases φ_n,m, with 1≤ n<m≤ N. We do so by designating a unique tone to each phase. Since these tones do not interfere (whether they are driving distinct or the same mode of motion), we simply need to scale the drive amplitudes accordingly. Specifically, to satisfy the constraint on φ_n,m we choose an arbitrary mode which couples to both ions. Without loss of generality this can always be the center of mass mode, j_COM. We drive both ions with the same tone ω_j,s, setting j=j_COM and s=N· n+m, and set its amplitude driving both ions to be √((N· n+m)φ_n,m/(O_j_COM^nO_j_COM^m)).
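The bookkeeping of this construction is simple enough to write down explicitly; the sketch below records, for every requested phase, the designated tone index and the common amplitude on the two addressed ions (the signs of the phases, which are set by the side of the detuning, are not tracked in this toy version).

import numpy as np

def adiabatic_assignment(phi, O_com):
    # phi: symmetric target matrix; O_com: ion participations in the chosen COM mode.
    N = phi.shape[0]
    assignment = {}
    for n in range(N):
        for m in range(n + 1, N):
            if phi[n, m] == 0.0:
                continue
            s = N * n + m                                   # dedicated tone index
            amp = np.sqrt(s * abs(phi[n, m]) / (O_com[n] * O_com[m]))
            assignment[(n, m)] = (s, amp)
    return assignment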
§ APPENDIX D: ZERO PHASE SOLUTION AGGREGATION
Our method relies on the conversion of preexisting zero-phase solutions to full solutions. Here we give a general recipe for how these are generated. Obtaining a solution to the quadratic constraints in Eq. (<ref>) is an NP-hard problem, therefore the constraints are met using a numerical search. Since we are interested in zero-phase solutions the norm of z is irrelevant, so we normalize it to 1.
We randomize a unit length vector, z^0, and assume it is close to some zero-phase solution. We use the linearization described in Eqs. (<ref>) and (<ref>) in order to iteratively improve the solution. We note that the norm condition in Eq. (<ref>) is replaced with d^n+1·z^n=0, and in any case z^n+1 is normalized to unit length after each iteration.
We halt the iterations after they have converged or fulfilled the infidelity required threshold. The solution is accepted and added to a zero phase pool of solutions if it fulfills the infidelity threshold and has a low overlap with already existing solutions.
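A minimal version of this search, using the same linearization as the conversion step, is sketched below; the iteration count and tolerance are illustrative, and in practice one would restart from several random seeds and screen the accepted solutions for mutual overlap as described above.

import numpy as np

def find_zero_phase(A, tol=1e-10, n_iter=500, seed=0):
    # A: array of shape (num_constraints, dim, dim) of symmetric matrices.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(A.shape[1])
    z /= np.linalg.norm(z)
    for _ in range(n_iter):
        resid = -np.einsum('i,nij,j->n', z, A, z)          # drive all phases to zero
        if np.abs(resid).max() < tol:
            break
        Mn = 2.0 * np.einsum('i,nij->nj', z, A)
        lhs = np.vstack([Mn, z[None, :]])
        rhs = np.concatenate([resid, [0.0]])               # keep the step orthogonal to z
        z = z + np.linalg.pinv(lhs) @ rhs
        z /= np.linalg.norm(z)
    return z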
§ APPENDIX E: SMALL SCALE SIMULATION
We consider a numerical time-step simulation of our method for a small number of ions. The simulation serves two purposes; the first is verification of our results. In particular, it validates the Magnus expansion that leads to Eq. (<ref>), and the corresponding solutions of Eq. (<ref>). Second, it allows us to consider the effect of small terms that we have neglected in the Hamiltonian; such as those due to carrier coupling and higher orders in the Lamb-Dicke expansion.
The simulation is limited to a small number of ions due to the computational complexity of working with an exponentially growing Hilbert space. Considering N ions and a phonon cutoff for each motional mode, N_cutoff, the total Hilbert space dimension is 2^N· N_cutoff^N. Fortunately, it is possible to at least circumvent the exponential scaling in N_cutoff; we use the fact that the system Hamiltonian, in Eq. (<ref>), is given by a sum of terms that each act on a single motional mode, i.e. H_I = ∑_j=1^N H_j. Since the different modes of motion commute, [H_j,H_j^'] = 0, it is sufficient to consider one mode at a time, and sequentially evolve the system with each Hamiltonian H_j. For each step in the calculation a Hilbert dimension of only 2^N· N_cutoff is sufficient.
In particular, we use the algorithm proposed in <cit.>: for k=1,...,N, we use the initial state ρ(t=0)=ρ_qubit(t=0) ⊗ρ_motion,k(t=0) and numerically evolve with Hamiltonian H_k from time t=0 to t=T. We then trace out the motional degree of freedom and use ρ_qubit(T) as the new initial qubit state for H_k+1. At the end of this procedure, we calculate the fidelity of the simulated gate as ⟨ψ_ideal|ρ_qubit(T)|ψ_ideal⟩.
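The structure of this procedure is illustrated below using QuTiP; the helper make_Hk, which builds the (possibly time-dependent) Hamiltonian of the qubits coupled to a single truncated oscillator for mode k, is a user-supplied assumption and is not part of the algorithm itself.

import numpy as np
from qutip import fock_dm, tensor, mesolve

def sequential_mode_evolution(make_Hk, n_qubits, n_modes, n_cutoff, T, rho_qubit0,
                              n_steps=400):
    # Exploits [H_j, H_j'] = 0: evolve under one mode at a time, tracing the
    # motion back out before attaching the next (initially ground-state) mode.
    tlist = np.linspace(0.0, T, n_steps)
    rho_q = rho_qubit0
    for k in range(n_modes):
        rho = tensor(rho_q, fock_dm(n_cutoff, 0))
        result = mesolve(make_Hk(k), rho, tlist)
        rho_q = result.states[-1].ptrace(list(range(n_qubits)))
    return rho_q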
We use this to simulate the solutions obtained by our method above, on a four ion system. We observe a very good agreement between the fidelity obtained through the simulation and that predicted by Eq. (<ref>). As an example, we highlight a gate which implements the four-ion “cluster state.”
Figure <ref> (left) shows the populations of each spin basis state over the duration of the gate, as determined by the simulation; the target final state is |ψ⟩ = 1/2(|0000⟩ - |0110⟩ - |1001⟩ - |1111⟩). Also shown in Figure <ref> (right) is the average phonon occupation number for each of the four motional modes.
The phonon cutoff we use is N_cutoff = 14, which is justified by the fact that phonon states above n = 10 are hardly populated at all, i.e. ∑_n > 10⟨n|ρ_motion (t) |n⟩ < 10^-5.
For this gate the fidelity returned by the simulation is 99.987%, nearly identical to the value predicted by Eq. (<ref>). This is the ideal case, which is also solved analytically, however the simulation also allows us to consider potential sources of error. For example we can add a carrier coupling term to the Hamiltonian:
H_c.c = 2∑_n=1^N ∑_m=1^M σ_y^(n)cos(ω_m t) (r_n)_m
We note that due to the σ_y operators, the addition of this term breaks our assumption that the Hamiltonian can be decomposed to a sum of commuting terms. Thus we heuristically account for carrier coupling by considering H_j → H_j + 1/N H_c.c. Accounting for carrier coupling, the simulation gives essentially the same fidelity. We note that carrier coupling generally has a greater impact on the gate fidelity when working with smaller motional frequencies. However, if necessary, robustness against carrier coupling could always be further improved by imposing additional linear constraints.
Furthermore, we study the effect of higher order Lamb-Dicke terms in the Hamiltonian. Specifically we consider terms that modify the first-order sideband interaction (not spin squeezing terms). We do so by modifying the phonon creation and annihilation operators in the Hamiltonian to include O(η^2) terms in the Debye-Waller factor <cit.>. With these terms included, the simulation fidelity is essentially unchanged; we note that these terms are expected to have a negligible contribution, as η^2 ≈ 10^-4.
Figure <ref> shows fidelities determined by the simulation of multiple four-ion gates, corresponding to the N=4 gates in Fig. <ref>, both for the ideal case as well considering the two error sources mentioned above, compared with our method's expected fidelity as defined in Eq. (<ref>).
§ APPENDIX F: QUANTIFYING MODE AND ION POSITION VARIANCE
The unitary evolution operators allow for calculating the position and momentum expectation values of the motional modes as well as individual ions in the ion crystal. Here we provide the expressions used for the analysis of ion and mode motion, shown in Fig. <ref>, above.
For simplicity, we assume that the initial state of all modes of motion is the ground state, n_j=0. Moreover, we assume the initial spin state is in the Pauli-x basis, |s_1,...,s_N⟩, i.e. with s_n=±1 signifying the ±1 single-qubit eigenstates of σ_x. Due to the unitary evolution operator in Eq. (<ref>), the expected value of displacement of the jth mode in this state, is given by
⟨x_j⟩_t=∑_n=1^N 2 s_n Re[α_j^n(t)],
with α_j^n define in Eq. (<ref>) above. The initial qubit computational ground state, is a state in which all qubits are set to σ_z=1, and is an equal superposition of all |s_1,...,s_N⟩ states. Thus the expectation value of the qubit ground state will vanish, due to the summation of s_n=±1 for all n.
Therefore a better quantifier for the displacement of the phonon mode, during gate operation, is the mode variance, given by,
⟨x_j^2⟩_t=1/2^N∑_s∈{+1,-1}^N(∑_n=1^N 2 s_n Re[α_j^n(t)])^2,
where the first sum is on all possible |s_1,...,s_N⟩ states. This is simplified to,
⟨x_j^2⟩_t=4∑_n=1^N (Re[α_j^n(t)])^2,
The expected position of the nth ion, X_n, is given by ⟨X_n⟩_t=∑_j=1^N O_j^n⟨x_j⟩_t. Since in the spin ground state all mode expectation values vanish, then so does the expected ion position. Here as well we instead use the position variance, yielding,
⟨X_n^2⟩_t=∑_j=1^N (O_j^n)^2⟨x_j^2⟩_t.
The expression in Eq. (<ref>) is used in order to generate Fig. <ref> of the main text.
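Given the spin-dependent trajectories α_j^n(t), evaluating these variances is a direct summation; a sketch is given below, where alpha is assumed to be stored as a complex array indexed by mode, ion and time sample, and the real part of α is taken as the position quadrature.

import numpy as np

def displacement_variances(alpha, O):
    # alpha: complex array of shape (modes, ions, times) with alpha_j^n(t).
    # O: participation matrix of shape (modes, ions).
    mode_var = 4.0 * np.sum(np.real(alpha) ** 2, axis=1)        # <x_j^2>_t
    ion_var = np.einsum('jn,jt->nt', O ** 2, mode_var)           # <X_n^2>_t
    return mode_var, ion_var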
§ APPENDIX G: COMPARISON OF ZERO PHASE SOLUTIONS USING GLOBAL ADDRESSING AND MULTI-ADDRESSING ANSATZ
As described in the main text, we aggregate zero-phase solutions under the assumption that the ions are driven by a global beam, equally illuminating all ions. This results in only N quadratic constraints, and has solutions, z_g∈ℝ^M. These solutions are then used as zero-phase solutions for independently addressed ions, as z_n=z_g for all n=1,..,N. Since these zero-phase solutions originate from a subspace of the full optimization problem search space, i.e. they have M degrees of freedom, instead of N× M, then they might result in sub-optimal converted solutions.
Here we use a small-size system in order to exemplify that the performance of solutions originating from a global zero-phase solution, is in effect as good as that of solutions based on N× M degrees of freedom. To do so we use a small, nine ion, crystal. We generate 50 zero-phase solution using a global assumption (with 9 quadratic constraints), z_g and 50 zero-phase solutions, assuming independent, multi-addressing of the ions (with 9×8/2=36 quadratic constraints), z. These solutions are then converted to full solutions of various random targets, r_g and r respectively.
All of the converted solutions yield high-fidelity entanglement gates, which satisfy the infidelity criterion, set here to a demanding value of 10^-8. Figure <ref> (left) shows a comparison of the total required Rabi frequency of both types of zero-phase solution ansatze. Each point corresponds to a random target, positioned according to the square root of its absolute nuclear norm, √(nuc|φ_n,m|) (horizontal), and |r| scaled by the frequency of the first mode of motion, ν_1 (vertical). Solutions converted from a global ansatz (blue) require a drive power that is almost equal to those converted from an independent ansatz (orange). We note that a similar behavior is exhibited in terms of fidelity. This indicates that the independently driven zero-phase solutions do not provide superior converted solutions.
We furthermore study correlations between the solutions. We choose the right-most entanglement target in Fig. <ref>, and for each multi-addressed zero-phase solution, z, we find an extended globally addressed zero-phase solution, z̃_g=[ z_g z_g ⋯ z_g ]∈ℝ^NM, which is 'closest' to it in terms of overlap in absolute value, i.e. that maximizes the quantity o_z≡|z̃_g·z|/(|z̃_g||z|). We plot these overlaps in Fig. <ref> (right) on the horizontal axis. We compare to the same overlap of the converted solutions (vertical). We observe a low overlap between zero-phase solutions and a high overlap between full solutions, meaning that while the zero-phase solutions of both methods yielded different results, many converted solutions converge to the same locally optimal solution.
|
http://arxiv.org/abs/2307.11744v1 | 20230711073631 | Optimal importance sampling for overdamped Langevin dynamics | [
"M. Chak",
"T. Lelièvre",
"G. Stoltz",
"U. Vaes"
] | stat.ME | [
"stat.ME",
"math.PR",
"35Q82, 35Q93, 60J25, 82M31"
] |
Calculating averages with respect to multimodal probability distributions is often necessary in applications.
Markov chain Monte Carlo (MCMC) methods to this end,
which are based on time averages along a realization of a Markov process ergodic with respect to the target probability distribution,
are usually plagued by a large variance due to the metastability of the process.
In this work,
we mathematically analyze an importance sampling approach for MCMC methods that rely on the overdamped Langevin dynamics.
Specifically, we study an estimator based on an ergodic average along a realization of an overdamped Langevin process for a modified potential.
The estimator we consider incorporates a reweighting term in order to rectify the bias that would otherwise be introduced by this modification of the potential.
We obtain an explicit expression in dimension 1 for the biasing potential that minimizes the asymptotic variance of the estimator for a given observable,
and propose a general numerical approach for approximating the optimal potential in the multi-dimensional setting.
We also investigate an alternative approach where,
instead of the asymptotic variance for a given observable,
a weighted average of the asymptotic variances corresponding to a class of observables is minimized.
Finally, we demonstrate the capabilities of the proposed method by means of numerical experiments.
§ INTRODUCTION
§.§ Context
In many applications ranging from Bayesian inference to statistical physics and computational biology,
it is often necessary to calculate expectations with respect to high-dimensional probability distributions of the form
μ = e^-V/Z, Z = ∫_𝕏^d e^-V,
where 𝕏∈{𝕋, ℝ},
with 𝕋 := ℝ / 2πℤ the one-dimensional torus,
and V: 𝕏^d →ℝ is a potential energy function (confining if 𝕏 = ℝ and periodic if 𝕏 = 𝕋)
such that e^-V is Lebesgue integrable.
In the context of Bayesian inference,
the distribution μ usually describes likelihoods of the possible values of an unknown parameter given some observed data <cit.>,
while in statistical physics,
the distribution μ assigns probabilities to the possible configurations of a molecular system. In the latter setting,
averages with respect to μ give access to macroscopic properties of the system,
such as the heat capacity or equations of state relating pressure, density and temperature <cit.>.
The first systematic approach to sampling probability distributions originates from the 1950s with the seminal work of <cit.>.
In 1970, Hastings generalized this approach and proposed a sampling method <cit.>
which was later recognized as a particular case of what is now known as a Markov chain Monte Carlo (MCMC) method.
The MCMC approach to sampling is based on the use of a Markov process that admits the target probability distribution as unique invariant measure.
A simple yet widely used Markov process that is ergodic with respect to μ under appropriate conditions on the potential V
is the overdamped Langevin dynamics,
dY_t = -∇ V(Y_t) dt + √(2) dW_t,
where W_t denotes a standard d-dimensional Wiener process.
Under appropriate assumptions on the potential V,
the average with respect to μ of an observable f ∈ L^1(μ) can be approximated by a time average along a realization of the solution to this equation:
μ^T(f):=1/T∫_0^T f(Y_t) dt ⟶_T→∞∫_𝕏^d f dμ=: μ(f) =: I,
see e.g. <cit.>, <cit.>,
<cit.> and references therein, <cit.> and <cit.>.
In practice, it is necessary to discretize the dynamics (<ref>),
and the resulting discrete-time Markov process is generally ergodic with respect to not μ
but a probability measure μ_Δ t differing from μ at order Δ t^α,
for some exponent α larger than or equal to the weak order of convergence of the scheme.
The bias introduced by the discretization can usually be bounded from above as a function of the time step;
such estimates were first obtained by Talay and Tubaro <cit.> for general SDEs.
They were later made precise for implicit schemes for Langevin and overdamped Langevin dynamics in <cit.>,
and then refined in works such as <cit.>.
Alternatively, it may be possible to consider the numerical scheme as a proposal to be accepted or rejected in a Metropolis–Hastings scheme,
so that the resulting discrete-time process is also ergodic with respect to μ.
This is the Metropolis-adjusted Langevin algorithm (MALA) <cit.>.
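For reference, a minimal Euler-Maruyama discretization of the overdamped Langevin dynamics and of the time-average estimator μ^T(f) reads as follows in Python; the double-well potential used in the commented example is purely illustrative, and the discretization bias discussed above is not corrected for.

import numpy as np

def ergodic_average(f, grad_V, T, dt, x0, seed=0):
    # Euler-Maruyama scheme: X_{k+1} = X_k - grad_V(X_k) dt + sqrt(2 dt) G_k.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    total = 0.0
    n_steps = int(T / dt)
    for _ in range(n_steps):
        x = x - grad_V(x) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(x.shape)
        total += f(x)
    return total / n_steps

# Example with a double-well potential V(y) = (y^2 - 1)^2 and observable f(y) = y^2:
# ergodic_average(lambda y: np.sum(y**2), lambda y: 4.0*y*(y**2 - 1.0),
#                 T=1.0e4, dt=1.0e-3, x0=[1.0])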
In this paper,
we focus on the continuous-time dynamics (<ref>)
and show that modifying the potential V, in combination with importance sampling,
can be used for variance reduction.
Importance sampling is widely used to make the sampling of high dimensional probability measures easier; see for instance <cit.> for a review. The idea of using importance sampling in the context of MCMC methods was already suggested in Hastings' 1970 paper,
see <cit.>.
If (X_t)_t ≥ 0 is a Markov process that is ergodic with respect to a probability measure
μ_U = e^-V - U/Z[U],
Z[U] = ∫_𝕏^d e^-V-U,
where U: 𝕏^d →ℝ is a smooth function such that e^-V-U is Lebesgue integrable over 𝕏^d,
then μ(f) may be approximated by
μ^T_U(f) = ∫_0^T (f e^U)(X_t) dt/∫_0^T(e^U)(X_t) dt.
Like μ^T(f), this estimator converges to I almost surely in the limit as T →∞,
and it does not require the knowledge of the normalization constants Z and Z[U].
The main objective of this work is to study the properties of μ^T_U(f)
when (X_t)_t ≥ 0 is the solution to the overdamped Langevin dynamics with the potential V+U:
dX_t = -∇ V(X_t) dt -∇ U(X_t) dt + √(2) dW_t.
In particular, we study whether it is possible to find a biasing potential U such that the asymptotic variance of μ_T^U(f),
which we define more precisely in <ref>,
is minimized for either a single observable (<ref>) or a class of observables (<ref>). Let us also mention here the recent work <cit.> where optimal importance sampling is performed over a class of distributions.
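A sketch of the resulting estimator, obtained by discretizing the biased dynamics with an Euler-Maruyama scheme and reweighting by e^U, is given below; it is meant only to illustrate the structure of μ_U^T(f), not the optimal choice of U studied in the remainder of the paper.

import numpy as np

def reweighted_average(f, grad_V, U, grad_U, T, dt, x0, seed=0):
    # Simulate dX_t = -(grad V + grad U)(X_t) dt + sqrt(2) dW_t and form the
    # self-normalized average of f(X_t) with weights exp(U(X_t)).
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    num = den = 0.0
    for _ in range(int(T / dt)):
        x = x - (grad_V(x) + grad_U(x)) * dt \
              + np.sqrt(2.0 * dt) * rng.standard_normal(x.shape)
        w = np.exp(U(x))
        num += f(x) * w
        den += w
    return num / den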
Before considering the MCMC estimator (<ref>),
it is instructive to present background material on importance sampling in the independent and identically distributed (i.i.d.) setting.
This is the aim of <ref>.
In all the theoretical results presented in this work,
we assume that the following assumptions on V and f are satisfied,
even when this is not explicitly mentioned.
* The potential V is smooth over 𝕏^d (in particular supp(e^-V) = 𝕏^d).
* The function e^-V is Lebesgue integrable over 𝕏^d.
* Any observable f considered (just one in <ref> and set of them in <ref>) is smooth,
integrable with respect to the probability measure μ∝^-V,
and not μ-almost everywhere constant.
§.§ Our contributions
The contributions of this paper are the following:
* In the one-dimensional setting (d=1) either with = or =,
and for a given observable f,
we obtain an explicit expression for the biasing potential U in (<ref>)–(<ref>) that is optimal in terms of asymptotic variance.
We also prove that when =,
the asymptotic variance σ^2_f[U] of μ_U^T(f), viewed as a functional of U,
is convex.
* In the general multi-dimensional setting,
we obtain an expression for the L^2(μ) functional derivative of σ^2_f[U] with respect to U,
and we propose a gradient descent approach for finding a minimizer of σ^2_f[U].
We also prove that any minimizer of σ^2_f[U] is necessarily singular when =.
* We propose a method for minimizing the asymptotic variance over a class of observables.
More precisely, we present an approach for minimizing the average asymptotic variance
when a simple Gaussian probability distribution is placed on the observable.
We demonstrate through theoretical results and numerical experiments that this approach usually leads to a smooth optimizer,
and may thus be more suitable for applications.
* We present examples and numerical experiments
illustrating the properties of the optimal biasing potential and the performance of the method,
both in the one-dimensional case and the multi-dimensional setting.
Plan of the paper.
The remainder of the paper is organized as follows.
We begin in <ref> by presenting background material on importance sampling in the i.i.d. setting.
In <ref>,
we investigate the problem of minimizing the asymptotic variance for a given observable,
first in the one-dimensional case and then in the multi-dimensional setting.
In <ref>,
we generalize the approach to the problem of minimizing the asymptotic variance over a class of observables.
Examples and numerical experiments are presented in <ref>.
<Ref> is reserved for conclusions and perspectives for future works.
This section is followed by four appendices:
<ref> contain proofs of auxiliary results,
<ref> presents a derivation of the second variation of the asymptotic variance,
and <ref> provides a detailed analysis of the numerical scheme employed for approximating the functional derivative of the asymptotic variance.
§ BACKGROUND: THE I.I.D. SETTING
We recall in this section the expression of the optimal importance distribution μ_U in the setting where I
is estimated from i.i.d. samples from μ_U,
as opposed to the ergodic approach in (<ref>).
Specifically,
we consider the estimator
μ_U^N(f) :=
∑_n=1^N (f e^U)(X^n)/∑_n=1^N (e^U)(X^n)
= I + ∑_n=1^N((f-I) e^U)(X^n)/∑_n=1^N (e^U)(X^n),
where (X^n)_n ∈ are i.i.d. samples from μ_U.
The right-most expression is not useful in practice because the value of I is unknown,
but this expression is convenient for the theoretical analysis.
We first comment briefly on the connection between this estimator and the estimator (<ref>) in <ref>,
then obtain an expression for the asymptotic variance of the estimator (<ref>) in <ref>.
and finally prove bounds on the asymptotic variance in <ref>.
§.§ Connection between the estimators
The estimators (<ref>) and (<ref>) can be viewed as two limiting cases,
corresponding to the limits τ→ 0 and τ→∞ respectively,
of the estimator given in the second row and right-most column of <ref>.
The latter estimator is based not on the full solution to (<ref>)
but on discrete periodic evaluations of it with a period τ.
The analysis presented in this paper can be repeated for the estimators in the middle column of <ref>,
which can be employed when the normalization constants Z and U are known.
In this case, different optimal potentials are obtained.
However, since the normalization constants are usually unknown in high-dimensional settings,
we focus in most of this paper on the self-normalized estimators in the right-most column of <ref>.
§.§ Asymptotic variance
In the whole <ref>,
we suppose that <ref> holds,
in particular that the potential V is smooth,
but we do not assume that the function U is smooth;
we assume only
that U: 𝕏^d →ℝ∪{∞} is measurable and such that
the following minimal requirements are satisfied:
* the function e^-V-U is Lebesgue integrable with Z[U] > 0;
* the estimator (<ref>) converges to I almost surely, i.e. it holds that
∫_𝕏^d (f-I) e^U dμ_U = 0
⇔ ∫_supp(μ_U) (f-I) dμ= 0.
Here
supp(μ_U), the closure of the set {x ∈𝕏^d : U(x) + V(x) < ∞},
is the support of the measure μ_U.
In this work,
we adopt the usual convention in measure theory that 0 · + ∞ = + ∞· 0 = 0.
* the function (f-I)e^U is square-integrable with respect to μ_U,
so that the central limit theorem can be applied.
We denote by 𝒰 the set of functions U that satisfy these conditions.
The set 𝒰 depends on V and f,
but since these are considered to be fixed data of the problem,
we do not explicitly indicate this dependence in the notation.
If U ∈𝒰 and (X^n)_n∈ are i.i.d. samples with law μ_U,
then it holds by (<ref>) and the central limit theorem that
1/√(N)∑_n=1^N((f-I) e^U) (X^n)
⇒𝒩(0, ∫_𝕏^d|(f-I) e^U|^2 dμ_U),
where ⇒ denotes convergence in distribution as N →∞ and 𝒩(m, C) denotes the Gaussian distribution with mean m and (co)variance C.
On the other hand,
the law of large numbers gives that
1/N∑_n=1^N(e^U)(X^n)
⟶∫_𝕏^d e^U dμ_U = 1/Z[U]∫_supp(μ_U) e^-V
= 𝒵[U]/Z[U],
where we introduced
𝒵[U] := ∫_supp(μ_U) e^-V.
Note that 𝒵[U] ≤ Z in general,
with equality if and only if supp(μ_U) = supp(μ).
Combining (<ref>) and (<ref>) and using Slutsky's lemma,
we conclude that
√(N)( μ^N_U(f) - I)
⇒𝒩(0, s^2_f[U]),
s^2_f[U] := Z[U]^2/𝒵[U]^2∫_𝕏^d|(f-I) e^U|^2 dμ_U.
The variance s^2_f[U] of the asymptotic normal distribution is the quantity we wish to minimize.
In the next section, we obtain sharp bounds from below on the asymptotic variance s^2_f[U].
§.§ Explicit optimal potential
In order to prepare the proof of the main result of this section,
<ref>,
we first give a preparatory lemma.
We introduce the functional s^2_f𝒰→∪{∞} given by
s^2_f[U] = Z[U]/Z^2∫_^d| f-I |^2 ^U-V.
Given the convention that 0 · + ∞ = + ∞· 0 = 0,
the quantity s^2_f[U] is well defined as an element of ∪{∞}.
Note that s_f^2[U] coincides with the asymptotic variance s^2_f[U] if and only if (μ_U) = ^d.
It holds that
min_U ∈𝒰s^2_f[U] = 1/Z^2( ∫_^df-I μ̣)^2 =: s^*_f.
In addition, the minimum is achieved for
U = U_*^ iid := - logf-I∈𝒰,
with the convention that log(0) = - ∞.
We first prove that s^2_f[U] ≥ s^*_f for all U ∈𝒰.
If s^2_f[U] = ∞,
then this inequality is clear, so we assume from now on that s^2_f[U] < ∞.
Using the Cauchy–Schwarz inequality,
we have
s^2_f[U]
= 1/Z^2∫_^d^-U-V∫_^d| f-I |^2 ^U-V≥1/Z^2(∫_^d| f-I |^-V)^2
= s^*_f.
The statement that the infimum is achieved for U_*^ iid in (<ref>) follows from a substitution in (<ref>).
Note that U_*^ iid indeed belongs to 𝒰.
The function U_*^ iid in (<ref>) is not smooth,
because f - I admits at least one root in ^d.
Although s^2_f[U] does not coincide with s^2_f[U] in general,
<ref> still contains useful information.
Indeed, by regularizing U_*^ iid in (<ref>),
we prove in <ref> that s^*_f is a sharp bound from below on the actual asymptotic variance s^2_f[U]
over a class of “nice” biasing potentials.
Specifically, let us introduce
𝒰_0 = { U ∈𝒰 : (f-Iμ) ⊂(μ_U) }.
Conditions of the type (f-Iμ) ⊂(μ_U) are used in the importance sampling literature;
see, for example, equation (1.1) in <cit.>.
Notice that, if this condition is satisfied,
then (<ref>) is also satisfied,
but the converse is not true;
in other words, 𝒰_0 is a proper subset of 𝒰.
We are now ready to state and prove the main result of this section.
We discuss in <ref> after the proof the existence of an optimal potential achieving the infimum over 𝒰_0.
Here and in the rest of this paper,
the notation C^∞_ c(^d) denotes the set of smooth functions with compact support over ^d.
Recall that <ref> is assumed to hold throughout this paper.
Then,
* It holds that
inf_U ∈𝒰 s^2_f[U] = 0.
* If 0 ∈𝒰_0,
then
inf_U ∈𝒰_0 s^2_f[U] = s^*_f.
* If 0 ∈𝒰_0,
then
inf_U ∈ C^∞_ c(^d) s^2_f[U] = s^*_f.
In view of <ref>, the condition 0 ∈𝒰_0 is equivalent to the condition f ∈ L^2(μ).
We use the former condition in the statement of <ref> in order to underline the similarity with <ref> in the MCMC setting.
We divide the proof into three parts,
corresponding to the three items in the statement.
First item.
The idea here is to construct an importance distribution μ_U concentrated in a small ball containing a point where f = I.
Considering balls centered at such a point is usually not sufficient,
because the condition (<ref>) may not be satisfied.
Take ε > 0 and let g_ε^d → denote the function
g_ε(x) = 1/|B_ε(0)|∫_B_ε(x)(f - I) ^-V,
where B_ε(x) ⊂^d is the open ball of radius ε centered at x
and |B_ε(0)| is the volume of this ball.
By <ref>,
there exists (x_1, x_2) ∈^d ×^d such that f(x_1) > I and f(x_2) < I.
Since f is smooth,
there is ϵ > 0 such that g_ε(x_1) > 0 and g_ε(x_2) < 0 for all ε∈ (0, ϵ].
Therefore, by the intermediate value theorem,
there exists for all ε∈ (0, ϵ] a point x_ε∈^d on the segment joining x_1 and x_2 such that g_ε(x_ε) = 0.
We define
U_ε(x) =
0 if x - x_ε≤ε,
+ ∞ otherwise.
By construction U_ε∈𝒰,
and we calculate from (<ref>) that
s^2_f[U_ε] =
∫_B_ε(x_ε)| f-I |^2 ^-V/∫_B_ε(x_ε)^-V.
By the mean value theorem,
there is z_ε∈ B_ε(x_ε) such that s^2_f[U_ε] = | f(z_ε) -I |^2.
Since g_ε(x_ε) = 0,
the function f-I necessarily has a zero in B_ε(x_ε).
Additionally, since (x_ε)_ε∈ (0, ϵ] is bounded,
the function f-I restricted to ⋃_ε∈ (0, ϵ] B_ε(x_ε) is Lipschitz continuous,
with some constant L.
From this we obtain that s^2_f[U_ε] ≤ (2L ε)^2,
and taking the limit ε→ 0 in this equation,
we deduce the first item.
Second item.
By the Cauchy–Schwarz inequality,
the following lower bound holds for all U ∈𝒰:
s^2_f[U]
≥U^2/𝒵[U]^2( ∫_^df-I^U μ̣_U)^2
= 1/𝒵[U]^2( ∫_(μ_U)f-I^-V)^2.
For U ∈𝒰_0,
the right-most expression in (<ref>) equals
1/𝒵[U]^2( ∫_^df-I^-V)^2 ≥ s^*_f,
where we used that 𝒵[U] ≤ Z.
Therefore, it holds that s^2_f[U] ≥ s^*_f for all U ∈𝒰_0.
That s^*_f is in fact the infimum of s^2_f over 𝒰_0 will follow from the third item,
because C^∞_ c⊂𝒰_0.
Third item.
For ε > 0,
let ϱ_ε→ denote the mollifier
ϱ_ε(z) = ε^-1ϱ( ε^-1 z ),
ϱ(z) =
kexp( - 1/1 - z^2) if z≤ 1,
0 if z > 1,
with k>0 such that ∫_ϱ(z) ẓ = 1.
Let us introduce the smooth regularization of the absolute value function given by z_ε = ϱ_ε⋆ abs(z),
where abs(z) = |z| is the absolute value function.
Notice that z_ε = z for | z |≥ε and that
∀ z ∈,
0 ≤ |z|_ε - | z |≤ |0|_ε≤ε.
The first inequality follows from the convexity of the absolute value function,
and the second inequality comes from an application of the reverse triangle inequality:
|z|_ε - |z|
= ∫_( | z - y | - z) ϱ_ε (y) ỵ≤∫_y ϱ_ε(y) ỵ = |0|_ε.
Moreover,
let U_ε→ be the smooth biasing potential given by
U_ε(x) =
χ_ε(x) ( - log| f-I |_ε) if = ,
- log| f-I |_ε if = ,
where χ_ε = ϱ⋆ 1_[-ε^-1, ε^-1].
Since z_ε≥0_ε > 0 for all z ∈,
the function U_ε,
which is a regularization of (<ref>),
is well-defined everywhere and uniformly bounded from above.
The choice of U_*^ iid as the function we regularize is natural in view of <ref>
and the fact that the inequality in (<ref>) is an equality for U = U_*^ iid.
We now show that s^2_f[U_ε] converges to the lower bound in (<ref>) in the limit as ε→ 0.
We focus in this proof on the case where =,
but the reasoning applies verbatim to the case where =.
Using (<ref>), we have that
s^2_f[U_ε]
= U_ε/Z^2∫_^d(| f-I |/| f-I |_ε^χ_ε)^2 | f-I |_ε^χ_ε^-V.
Since 0 ≤χ_ε(x) ≤ 1 for all x ∈^d,
it holds for all ε∈ (0, 1) that
{| f-I |_ε^χ_ε ≤( | f-I | + 1 )^χ_ε≤f-I + 1
| f-I |^2/| f-I |_ε^χ_ε ≤| f-I |^2- χ_ε≤(| f-I | + 1)^2 - χ_ε≤(| f-I | + 1)^2.
.
Since f ∈ L^2(^-V) by assumption,
the right-hand sides of these inequalities are integrable.
Therefore, using dominated convergence,
we obtain that
U_ε = ∫_^d| f-I |_ε^χ_ε^-V∫_^d| f-I |^-V
and
∫_^d(| f-I |/| f-I |_ε^χ_ε)^2 | f-I |_ε^χ_ε^-V
= ∫_^d| f-I |^2/| f-I |_ε^χ_ε^-V∫_^df-I^-V.
and so s^2_f[U_ε] → s^*_f in the limit as ε→ 0.
Since s^2_f[U] ≥ s^*_f for all U ∈(^d) by (<ref>),
we deduce the statement.
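The regularization argument can also be visualized numerically. In the following sketch, the mollification is replaced by the cruder regularization U_ε = -log(|f - I| + ε), which suffices for illustration, and the data V(x) = x^2/2 and f(x) = x are illustrative choices of ours; the computed values of s^2_f[U_ε] approach the lower bound s^*_f as ε decreases.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 200001)
h = x[1] - x[0]
trapz = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * h)

V = 0.5 * x**2             # illustrative target potential
f = x                      # illustrative observable; here I = 0
Z = trapz(np.exp(-V))
I = trapz(f * np.exp(-V)) / Z
s_star = (trapz(np.abs(f - I) * np.exp(-V)) / Z) ** 2      # lower bound s^*_f

for eps in [1.0, 0.1, 0.01, 0.001]:
    U = -np.log(np.abs(f - I) + eps)                        # regularized optimal bias
    ZU = trapz(np.exp(-V - U))                              # normalization Z[U] of mu_U
    # s^2_f[U]; the support of mu_U is the whole real line for eps > 0
    s2 = ZU / Z**2 * trapz((f - I)**2 * np.exp(U - V))
    print(f"eps = {eps:7.3f}   s^2_f[U_eps] = {s2:.6f}   s^*_f = {s_star:.6f}")
```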
First note that U_*^ iid∈𝒰_0.
In light of <ref>,
one may wonder whether s^2_f[U_*^ iid] = s^*_f.
That is to say, is U_*^ iid a minimizer of s^2_f in 𝒰_0?
Substitution into (<ref>) reveals that this is not always the case:
s^2_f[U_*^ iid] = Z^2/𝒵[U_*^ iid]^2(∫_^df-Iμ̣)^2 ≥ s^*_f.
The inequality in this equation is an equality if and only if 𝒵[U_*^ iid] = Z or,
equivalently, f^-1(I) has zero Lebesgue measure.
In particular, if f^-1(I) has positive Lebesgue measure,
then the biasing potential U_*^ iid does not achieve the lower bound s^*_f,
even though a minimizing sequence that asymptotically achieves the lower bound s^*_f can be constructed by regularizing this potential.
<Ref> in the appendix illustrates <ref>.
We explicitly construct in this example a potential function U ∈𝒰∖𝒰_0 such that s^2_f[U] = 0,
as well as a minimizing sequence (U_ε)_ε > 0 in 𝒰_0 such that s^2_f[U_ε] → s^*_f in the limit ε→ 0.
The example also illustrates that U_*^ iid is not necessarily a minimizer in 𝒰_0,
as mentioned in <ref>.
This section reveals the difficulties encountered when no regularity of the biasing potential U is assumed.
Similar difficulties will be encountered in the analysis of the MCMC estimator (<ref>).
§ MINIMIZING THE ASYMPTOTIC VARIANCE FOR A SINGLE OBSERVABLE
In this section, as in the previous one,
we consider a target-oriented approach:
we seek the optimal biasing potential U for a given observable f.
After presenting the mathematical framework in <ref>,
we first obtain in <ref> an expression for the asymptotic variance associated to the estimator (<ref>)
in terms of the solution to a Poisson equation where f appears on the right-hand side,
and subsequently address the problem of finding the optimal biasing potential U,
first in the one-dimensional setting in <ref>,
and then in the multi-dimensional setting in <ref>.
§.§ Mathematical setting
We will use the following functional spaces:
L^2_0(μ_U) =
{φ∈ L^2(μ_U) | ∫_^dφ μ̣_U = 0 },
H^1(μ_U) = {φ∈ L^2(μ_U) | ∇φ∈(L^2(μ_U))^d },
where μ_U is the probability measure defined in (<ref>).
In the i.i.d. setting, the condition (f-I) ^U ∈ L^2(μ_U) is necessary and sufficient to guarantee that a central limit theorem holds.
There does not exist such a simple condition in the MCMC setting,
and so, in most of this section,
we work under conditions which are only sufficient.
Specifically, we denote by 𝔘_0 = 𝔘_0(V, f) the set of biasing potentials that satisfy the following assumptions.
* The function U is smooth on ^d.
* The function ^-V-U is Lebesgue integrable over ^d.
* The probability measure μ_U satisfies a Poincaré inequality:
there exists R[U] > 0 such that
∀φ∈ H^1(μ_U) ∩ L^2_0(μ_U), φ_L^2(μ_U)^2 ≤1/R[U]∇φ_L^2(μ_U)^2.
* It holds that (f-I) ^U∈ L^2(μ_U).
* For any x_0∈^d, there exists a unique strong solution X_t to (<ref>) with X_0 = x_0.
In this section, the functions V and f are considered fixed data of the problem,
so the dependence of 𝔘_0 on these data is omitted in the notation.
However, in <ref> we will write 𝔘_0(V, f) to emphasize this dependence where necessary.
Just like the set 𝒰_0 in <ref>,
the set 𝔘_0 in this section contains “nice” biasing potentials.
Indeed, the asymptotic behavior of the estimator (<ref>) can be rigorously characterized for U ∈𝔘_0;
see <ref>.
If U ∈𝔘_0, then the estimator (<ref>) converges to I almost surely as T →∞ because,
by ergodicity,
μ^T_U(f) = 1/T∫_0^T (f ^U)(X_t) ṭ/1/T∫_0^T(^U)(X_t) ṭ∫_^d (f ^U) μ̣_U/∫_^d (^U) μ̣_U
= I.
In the case where =,
a Poincaré inequality of the form (<ref>) always holds provided that V+U is smooth.
When =, however,
the potential V+U must satisfy appropriate growth conditions to ensure that the inequality holds.
For sufficient conditions, see e.g. <cit.>.
We use the Poincaré inequality to establish the central limit theorem,
but there are other ways to obtain similar conclusions without directly using the Poincaré inequality,
for example by using results of Kipnis–Varadhan or Foster–Lyapunov (see <cit.> and references within).
When =,
the first item in <ref> implies all the other items,
and in this setting 𝔘_0 = C^∞(^d) = C^∞_ c(^d).
We denote by ℒ_U the infinitesimal generator on L^2(μ_U) of the Markov semigroup associated to (<ref>),
which is given on (^d) by
ℒ_U = - ∇ (V + U) ·∇ + = ^V+U∇· (^-V-U∇).
§.§ Asymptotic variance
The following lemma gives an expression of the asymptotic variance of the estimator μ^T_U(f) given in (<ref>)
in terms of the solution to a Poisson equation.
Suppose that U ∈𝔘_0.
Then there exists a unique distributional solution ϕ_U ∈ H^1(μ_U) ∩ L^2_0(μ_U) to
-ℒ_U ϕ_U = (f- I) ^U.
The solution ϕ_U is smooth and,
for Lebesgue almost all initial condition,
it holds that
√(T)( μ^T_U(f) - I)
(0, σ^2_f[U]),
where
σ^2_f[U] := 2U^2/Z^2∫_^d*∇ϕ_U^2 μ̣_U
= 2U^2/Z^2∫_^dϕ_U (f-I) ^U μ̣_U.
By density of (^d) in H^1(μ_U),
a function ϕ_U ∈ H^1(μ_U) ∩ L^2_0(μ_U) is a distributional solution to (<ref>) if and only if
∀φ∈ H^1(μ_U) ∩ L^2_0(μ_U), ∫_^d∇ϕ_U ·∇φ μ̣_U
= ∫_^d (f - I) ^U φ μ̣_U.
The validity of a Poincaré inequality for μ_U implies that
the function space H^1(μ_U) ∩ L^2_0(μ_U) endowed with the inner product
(φ_1, φ_2) ↦∫_^d∇φ_1 ·∇φ_2 μ̣_U
is a Hilbert space,
and that the right-hand side of (<ref>) is a bounded linear functional on this space.
Therefore, the Lax–Milgram theorem (or the Riesz representation theorem)
yields the existence of a unique solution ϕ_U in H^1(μ_U) ∩ L^2_0(μ_U).
Elliptic regularity theory <cit.>
then implies that ϕ_U∈ C^∞(^d).
From the definition (<ref>) of μ^T_U(f),
we have
√(T)( μ^T_U(f) - I)
= 1/√(T)∫_0^T ((f-I) ^U)(X_t) ṭ/1/T∫_0^T(^U)(X_t) ṭ.
The numerator converges in law to (0, 2∫_^d*∇ϕ_U^2 dμ_U),
for instance by <cit.>
(the setting there is ^d,
but for ^d, the argument using (<ref>) and the martingale central limit theorem,
that is essentially <cit.>,
works in the same way),
while the denominator converges almost surely to Z/U.
The claimed convergence in law then follows from Slutsky's lemma.
The last equality in (<ref>) follows from the definition (<ref>) of a weak solution.
From the weak formulation (<ref>) and the Poincaré inequality (<ref>),
we deduce the stability estimate
∇ϕ_U_L^2(μ_U)≤1/√(R[U])*(f - I) ^U_L^2(μ_U).
This standard estimate will be useful in the proof of <ref> in the appendix
(for a Poisson equation with a different right-hand side).
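To make the preceding formula concrete, the following self-contained Python sketch assembles a simple conservative finite-difference approximation of ℒ_U on a one-dimensional periodic grid, solves the Poisson equation and evaluates the asymptotic variance. The discretization is a basic choice of ours, made only for this illustration and not necessarily the scheme analyzed in the appendix, and the data V = 0, U = 0, f = cos correspond to an example revisited in the one-dimensional experiments later in the paper, for which the asymptotic variance equals 1.

```python
import numpy as np

n = 512
h = 2 * np.pi / n
x = -np.pi + h * np.arange(n)                 # periodic grid on [-pi, pi)

V = np.zeros(n)                               # illustrative data: V = 0, U = 0, f = cos
U = np.zeros(n)
f = np.cos(x)

w = np.exp(-V - U)                            # unnormalized density of mu_U
Z = np.sum(np.exp(-V)) * h
ZU = np.sum(w) * h                            # normalization Z[U]
I = np.sum(f * np.exp(-V)) * h / Z

# Conservative discretization of L_U phi = e^{V+U} (e^{-V-U} phi')' with periodic BC.
whalf = np.exp(-0.5 * ((V + np.roll(V, -1)) + (U + np.roll(U, -1))))   # weights at i+1/2
idx = np.arange(n)
A = np.zeros((n, n))
A[idx, (idx + 1) % n] = whalf
A[idx, (idx - 1) % n] = np.roll(whalf, 1)
A[idx, idx] = -(whalf + np.roll(whalf, 1))
A /= h**2 * w[:, None]

# Solve -L_U phi = (f - I) e^U; the kernel of A consists of constants, which do not
# affect the variance, so the least-squares solution is sufficient here.
phi = np.linalg.lstsq(A, -(f - I) * np.exp(U), rcond=None)[0]

sigma2 = 2 * (ZU / Z)**2 * np.sum(phi * (f - I) * np.exp(U) * w) * h / ZU
print("sigma^2_f[U] ~", sigma2)               # close to 1 for this choice of data
```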
It is instructive to write the counterpart of <ref>
for the estimator in the right-most column and second row of <ref>,
which is based on evaluations of the solution to (<ref>) at discrete times:
μ_U^N :=
∑_n=0^N-1 (f ^U)(X_n τ)/∑_n=0^N-1 (^U)(X_n τ).
For this estimator,
we prove in <ref> that,
under appropriate conditions including <ref>,
√(N)( μ^N_U(f) - I)
(0, σ^2_f[U]),
with now
σ^2_f[U] = U^2/Z^2( 2 ∫_^dϕ_U (f-I) ^U dμ_U - ∫_^d| (f-I) ^U |^2 μ̣_U ),
where ϕ_U is the unique solution in L^2_0(μ_U) to
- ℒ_U ϕ_U = (f- I) ^U,
ℒ_U := ^τℒ_U - ℐ.
Here ^tℒ_U denotes the Markov semigroup corresponding to the stochastic dynamics (<ref>):
(^t ℒ_Uφ) (x) = (φ(X_t) | X_0 = x).
The asymptotic variance σ^2_f[U] converges to that for the i.i.d. setting
given in (<ref>) in the limit as τ→∞,
and it diverges in the limit as τ→ 0.
The latter is not surprising as the correlation between successive samples increases in this limit.
However,
since formally τϕ_U →ϕ_U in L^2_0(μ_U) in the limit as τ→ 0,
it holds that
τσ^2_f[U] σ^2_f[U].
§.§ Explicit optimal potential in dimension one
In the one-dimensional setting,
it is possible to write an explicit expression for the asymptotic variance σ^2_f[U],
from which an explicit lower bound on σ^2_f[U] can be obtained.
Our strategy in this section is the following:
* We first obtain an explicit expression for σ^2_f[U] for U ∈𝔘_0 (<ref>),
which we then rewrite in a different form σ^2_f[U] given in (<ref>).
* We then observe that σ^2_f[U] is defined more generally for U ∈𝔘⊃𝔘_0,
where 𝔘 is an appropriate superset of 𝔘_0,
noting that σ^2_f[U] is not necessarily the asymptotic variance of μ_U^T(f) for U ∈𝔘∖𝔘_0.
* Next, we show that σ^2_f admits an explicit minimizer U_* over 𝔘,
with associated minimum σ^*_f.
This is proved in <ref>.
* Finally,
using the expression of U_*,
we prove that σ^*_f is the infimum of the actual asymptotic variance σ^2_f[U] over 𝔘_0,
and that this infimum can be approached within the class of smooth biasing potentials with compact support.
This is the content of <ref>.
The main result of this section, <ref>,
and preceding auxiliary result, <ref>,
should be viewed as the counterparts in the MCMC setting of <ref> and <ref> in the i.i.d. setting.
For U ∈𝔘_0 and in dimension d=1,
the asymptotic variance (<ref>) writes
σ^2_f[U]
= 2U^2/Z^2∫_|(F-A_[U]) ^V+U|^2 μ̣_U,
where F:→ is given by
F(x) = ∫_0^x ( f(ξ)-I ) ^-V(ξ)ξ̣,
and
A_[U] =
- ∫_-∞^0( f-I ) ^-V if =,
∫_ F ^V+U/∫_^V+U if =.
Note that A_[U] is independent of U;
we shall henceforth drop the dependence in the notation.
It seems from (<ref>) that A_ and A_[U] have very different expressions.
In fact, the constant A_ may be obtained as a limit of A_[U] for an increasingly large torus;
see <ref>.
In dimension one, the Poisson equation (<ref>) reads
-^V+U(^-(V+U)ϕ_U')' = (f-I) ^U.
By integration of (<ref>),
it holds that
ϕ_U'(x) = -(∫_0^x ( f(ξ)-I ) ^-V(ξ)ξ̣- A) ^(V+U)(x)
= -( F(x) - A ) ^(V+U)(x),
and so
ϕ_U(x) = B - ∫_0^x( F(ξ) - A ) ^(V+U)(ξ) ξ̣,
for some constants A ∈ and B ∈.
The requirement that ϕ_U∈ H^1(μ_U) enables to determine the constant A:
* When =, the embedding H^1(μ_U) ⊂ C() gives that
ϕ_U(-π) = ϕ_U(π), which leads to the equation for A_[U] in (<ref>).
* When =,
the requirement that ϕ_U∈ H^1(μ_U) implies that
A
= lim_x →∞ F(x),
where
lim_x →∞ F(x)
= ∫_0^∞( f(x)-I ) ^-V(x) x̣
= - ∫_-∞^0( f(x)-I ) ^-V(x) x̣,
because otherwise ϕ_U' in (<ref>) is not in L^2(μ_U).
Indeed, assume for contradiction that
lim_x →∞ F(x)
=: L ≠ A.
(The limit exists because (f-I)^-V∈ L^1() by <ref>.)
Then there is K ∈ such that
inf_x ≥ KF(x) - A≥1/2L - A,
and so by (<ref>), it holds that
∫_|ϕ_U' |^2 ^-(V+U) x̣≥1/4L - A^2 ∫_K^∞^V+U x̣.
The right-hand side of this equation is infinite because,
by the Cauchy–Schwarz inequality,
+ ∞ = ∫_K^∞√(^V+U)√(^-V-U) x̣≤∫_K^∞^V+U x̣∫_K^∞^-V-U x̣.
Equation (<ref>) is then obtained by substitution of (<ref>) in (<ref>).
Once A in (<ref>) has been determined,
the value of B can be obtained from the condition that ϕ_U∈ L^2_0(μ_U) is mean-zero.
This constant is not required for our purposes,
because it cancels out in the formula (<ref>) for the asymptotic variance,
and so its explicit expression is omitted.
<Ref> proves that,
although choosing U in such a manner that the free energy associated with a reaction coordinate is constant
– an approach known in the literature as free energy biasing – alleviates metastability <cit.>,
this strategy is in general not optimal in terms of asymptotic variance for a specific observable.
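The expression for σ^2_f[U] in terms of F obtained above is straightforward to evaluate by quadrature, without solving a Poisson equation. The sketch below does so for the same illustrative data V = 0, U = 0 and f = cos on the torus as in the earlier finite-difference sketch, and recovers the same value of the asymptotic variance.

```python
import numpy as np

n = 4096
h = 2 * np.pi / n
x = -np.pi + h * np.arange(n)

V = np.zeros(n)                    # same illustrative data as in the earlier sketch
U = np.zeros(n)
f = np.cos(x)

Z = np.sum(np.exp(-V)) * h
ZU = np.sum(np.exp(-V - U)) * h
I = np.sum(f * np.exp(-V)) * h / Z

# F(x) = int_0^x (f - I) e^{-V}, by a cumulative trapezoidal rule anchored at x = 0.
g = (f - I) * np.exp(-V)
F = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * h)))
F -= F[np.argmin(np.abs(x))]

# Torus constant: the mean of F under the density proportional to e^{V+U}.
A = np.sum(F * np.exp(V + U)) / np.sum(np.exp(V + U))

sigma2 = 2 * ZU / Z**2 * np.sum((F - A)**2 * np.exp(V + U)) * h
print("explicit formula:", sigma2)            # close to 1, as in the Poisson-equation sketch
```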
For all U ∈𝔘_0, the right-hand side of (<ref>) coincides,
both when = and when =,
with
σ^2_f[U] :=
2U/Z^2inf_A ∈∫_F - A^2 ^V+U .
For all U ∈𝔘_0,
it holds that σ^2_f[U] = σ^2_f[U].
When =,
the infimum in (<ref>) is achieved for A = A_,
because the integral is infinite for any other value of A.
Likewise, when =
the infimum in (<ref>) is achieved for A = A_[U],
because the mean under the probability measure proportional to ^V+U is the best approximation of F by a constant in the L^2(^V+U) norm;
see, for instance, <cit.>.
With the convention that 0 · + ∞ = + ∞· 0 = 0,
the right-hand side of (<ref>) makes sense as an element of ∪{∞} for all U ∈𝔘,
where 𝔘 = 𝔘(V, f) is the set of measurable functions U→∪{∞} such that
the following assumptions are satisfied:
* It holds that ^-V - U∈ L^1(^d) and Z[U] > 0,
so that μ_U is well defined.
* The condition (<ref>) is satisfied.
We emphasize that 𝔘 is a proper superset of 𝔘_0;
it contains elements which violate <ref>.
For biasing potentials not in 𝔘_0,
the quantity (<ref>) is not in general an asymptotic variance.
In particular,
it is possible to construct examples where the estimator has zero asymptotic variance even though this quantity is positive,
as we illustrate in <ref>.
Consider the setting where the potential V is zero and the observable f on [-π,π] is given by
f(x) =
sgn(x) if | x |≥π/2,
0 otherwise,
where
sgn(x) :=
1 if x > 0,
0 if x = 0,
-1 if x < 0.
Here we identify [-π, π] with its image under the quotient map →.
If U is a potential such that (i) there exists a unique strong solution to (<ref>) with initial condition X_0 = 0
and (ii) this solution satisfies X_t ∈ (-π/2, π/2) with probability 1 for all times,
then (<ref>) is a well defined estimator with zero asymptotic variance.
However σ^2_f[U] > 0,
which can be viewed from (<ref>) and is confirmed in <ref> hereafter.
Although σ^2_f does not in general correspond to an asymptotic variance,
obtaining a bound from below on σ^2_f over 𝔘 will be useful in order to motivate the proof of <ref>,
just like <ref> proved useful for establishing <ref> in the i.i.d. setting.
It holds that
min_U ∈𝔘σ^2_f[U] =
2/Z^2(∫_| F(x) - A^*_|x̣)^2 =: σ^*_f,
with A^*_ := A_ and
A^*_ := sup{ A ∈ : ∫_ (F-A) ≥ 0 }.
In addition, the infimum is achieved for
U = U_*(x) := - V(x) -log*F(x) - A^*_∈𝔘.
In this case
^-V-U_*∝*F(x) - A^*_.
It is sufficient to show that, for all A ∈,
2U/Z^2∫_F - A^2 ^V+U≥2/Z^2(∫_| F(x) - A^*_| x̣)^2.
If the left-hand side of (<ref>) is infinite,
then the inequality is trivially satisfied.
On the other hand,
if the left-hand side is finite,
in which case the set on which F-A^2 ^V+U = ∞ is of measure zero,
then we have by the Cauchy–Schwarz inequality that
2U/Z^2∫_F- A^2 ^V+U = 2/Z^2∫_^-V-U∫_F-A^2 ^V+U≥2/Z^2( ∫_F - A)^2.
In the case =,
the right-hand side is finite only if A = A_,
which leads to (<ref>).
In the case =,
the inequality (<ref>) is obtained by noting that
∫_| F - A |
= | F(X) - A |,
where X ∼𝒰() is a random variable uniformly distributed on the torus.
It is well known,
see for example <cit.>,
that the expectation on the right-hand side is minimized for any A that is a median of F(X).
Here F is continuous, so the median of F(X) is unique and given by A^*_,
which implies that (<ref>) holds.
The fact that the lower bound is achieved for U in (<ref>)
follows from the inequality
σ^2_f[U_*]
= 2U_*/Z^2inf_A ∈∫_| F-A |^2 ^V+U_*
≤2U_*/Z^2∫_| F - A_^* |^2 ^V+U_*
= 2/Z^2( ∫_| F - A_^* |)^2,
which concludes the proof.
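As an illustration on the real line, the following sketch tabulates F - A by a cumulative quadrature, forms the optimal potential U_* and evaluates both the minimal value σ^*_f and, for comparison, the explicit asymptotic variance of the unbiased dynamics. The data V(x) = x^2/2 and f(x) = x^3 are illustrative choices of ours.

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 160001)
h = x[1] - x[0]
trapz = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * h)

V = 0.5 * x**2                     # illustrative potential on the real line
f = x**3                           # illustrative observable; here I = 0

Z = trapz(np.exp(-V))
I = trapz(f * np.exp(-V)) / Z

# F - A = int_{-inf}^x (f - I) e^{-V}, tabulated by a cumulative trapezoidal rule.
g = (f - I) * np.exp(-V)
FmA = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * h)))

U_star = -V - np.log(np.abs(FmA) + 1e-300)              # optimal biasing potential
sigma_star = 2.0 / Z**2 * trapz(np.abs(FmA))**2         # minimal value sigma^*_f
sigma2_unbiased = 2.0 / Z * trapz(FmA**2 * np.exp(V))   # explicit variance for U = 0
print("sigma^*_f =", sigma_star, "  sigma^2_f[0] =", sigma2_unbiased)  # about 18 and 22 here
```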
The singularities in the biasing potential U_* coincide with zeros of the function F(x) - A^*_.
Consider for simplicity the case where =.
If x_* denotes a zero of the function F(x) - A_^*,
then it holds by definition of F(x) that
0 = F(x_*) - A^*_
= ∫_-∞^x_*(f(x) - I) ^-V(x) x̣.
Rearranging this equation,
we obtain
∫_-∞^x_* f(x) ^-V(x) x̣/∫_-∞^x_*^-V(x) x̣ = I.
In other words,
the average of f with respect to the measure μ restricted to [-∞,x_*]
coincides with its average with respect to μ over the real line.
When singular, the biasing potential (<ref>) effectively divides the domain into regions that suffice for the estimation of I.
Several numerical experiments illustrating this behavior are presented in <ref>.
Equation (<ref>) implies that A_^* is the median associated with F(X),
where X is a random variable with uniform distribution over .
Just as A_ is obtained as a limit of A_ for an increasingly large torus,
so too A_^* = A_ is recovered as a limit of A^*_;
see <ref>.
The potential U_* defined by (<ref>) does not necessarily satisfy <ref>,
and the measure μ_U_* may not have full support.
However, regularizing U_* enables to show that σ^*_f is the infimum of the asymptotic variance σ^2_f[U],
not only over 𝔘_0,
but also over the smaller subset C^∞_ c⊂𝔘_0 of smooth and compactly supported biasing potentials.
This is the content of the following result.
<Ref> after the proof summarizes the main results obtained in this section
and presents a comparison with the i.i.d. setting.
Suppose that 0 ∈𝔘_0.
Then,
* It holds that
inf_U ∈𝔘_0σ^2_f[U] = σ^*_f.
* It holds that
inf_U ∈ C^∞_ c()σ^2_f[U] = σ^*_f.
Since σ^2_f[U] = σ^2_f[U] for U ∈𝔘_0 and C^∞_ c(^d) ⊂𝔘_0,
<ref> and the second statement of <ref> imply the first statement.
In order to prove the second statement,
we use the same notation in this proof as in <ref>.
Let U_ε→ be the smooth biasing potential given by
U_ε(x) =
- χ_ε(x) (V(x) + log| F(x) - A^*_|_ε) if = ,
- V(x) - log| F(x) - A^*_|_ε if = ,
where for ε∈ (0, 1), χ_ε = ϱ⋆ 1_[-ℓ_ε, ℓ_ε] with ℓ_ε = ε^-1/2 - 1.
(This choice of ℓ_ε enables to write the bound in (<ref>) below.)
The probability distribution μ_U_ε,
with density proportional to ^-V-U_ε,
satisfies a Poincaré inequality.
This follows from the fact that ^-V-U_ε is uniformly bounded from below when =,
and from the classical Holley–Stroock biasing argument when =;
see <cit.>, as for example reviewed in <cit.>.
Therefore U_ε∈𝔘_0 and so,
by <ref>,
the associated asymptotic variance is given by
σ^2_f[U_ε]
= 2U_ε/Z^2∫_(F-A_ε)^2 ^V+U_ε,
where A_ε := A_[U_ε] is given by (<ref>)
with U = U_ε.
We now prove, separately for = and =,
that
lim_ε→ 0U_ε→∫_F - A^*_ and lim sup_ε→ 0∫_(F-A_ε)^2 ^V+U_ε≤∫_| F(x) - A^*_| x̣.
Given these results,
taking the limit superior as ε→ 0 in (<ref>) gives
lim sup_ε→ 0σ^2_f[U_ε]
≤2/Z^2(∫_| F(x) - A^*_| x̣)^2.
Since the right-hand side is a lower bound on σ^2_f[U_ε] = σ^2_f[U_ε] by (<ref>),
the result will be proved.
Case =.
In this setting,
it holds by dominated convergence,
with an argument similar to the one used to prove the third item of <ref>,
that
U_ε
= ∫_| F - A^*_|_ε∫_| F - A^*_|.
In addition, since A_ε is the average of F over with respect to the probability measure with density proportional to ^V + U_ε,
it holds for all ε∈ (0, 1) that
∫_| F-A_ε|^2 ^V+U_ε = inf_C ∈∫_| F - C|^2 ^V+U_ε
≤∫_| F - A_^* |^2 ^V+U_ε
= ∫_| F - A_^* |^2/| F - A_^* |_ε≤∫_| F - A_^* |,
which enables to conclude.
Case =.
The numerator of the fraction on the right-hand side of (<ref>) can be written as
U_ε = ∫_exp( -V(x) + χ_ε(x) (V(x) + log| F(x) - A^*_|_ε) ) x̣
= ∫_exp( (χ_ε - 1 )V )F - A^*__ε^χ_ε.
By convexity of the exponential function,
it holds that
exp( (1 - χ_ε) (-V) + χ_εlogF - A^*__ε)
≤ (1 - χ_ε) ^-V + χ_εF - A^*__ε
≤^-V + χ_ε(F - A^*_ + ε)
≤^-V + F - A^*_ + εχ_ε.
Since ε = (ℓ_ε + 1)^-2,
all three terms on the right-hand side are dominated by an integrable function over independent of ε,
because
∀ x ∈, εχ_ε(x)
≤ 1_[-ℓ_ε- 1, ℓ_ε+ 1](x)/(ℓ_ε+ 1)^2≤min{1, 1/x^2}.
Therefore,
by dominated convergence,
we deduce from (<ref>) that U_ε→∫_F - A^*_ in the limit as ε = (ℓ_ε + 1)^-2→ 0.
For the integral on the right-hand side of (<ref>),
noting that A_ε = A^*_ and recalling that this constant is independent of U,
we have that
∫_ ( F - A_ε )^2 ^V+U_ε = ∫_F - A^*_^2exp((1 - χ_ε) V + χ_ε(- logF - A^*__ε) )
≤∫_F - A^*_^2( (1 - χ_ε) ^V + χ_ε/F - A^*__ε)
≤∫_∖ B_ℓ_ε-1F - A^*_^2^V
+ ∫_F - A^*_,
where we used the convexity of the exponential function and the notation B_ℓ_ε-1 = [-ℓ_ε+1, ℓ_ε-1].
Since σ_f^2[0] is finite, so is ∫_F - A^*_^2 ^V,
implying that the first term on the right-hand side of (<ref>) converges to 0 in the limit ε→ 0.
The results in <ref> parallel the second and third items in <ref>.
We do not aim at rigorously establishing an analogue of the first item in <ref>,
which would require analyzing the well-posedness of (<ref>) and the properties of the estimator (<ref>) when the biasing potential is irregular and unbounded.
Since U_* ∈𝔘 defined in (<ref>) is a minimizer of σ^2_f,
and since the lower bound σ^*_f on σ^2_f[U] may be approached by regularizing this potential,
we often refer to U_* as the optimal biasing potential.
This is, of course, a slight abuse of terminology given that U_* is not in general a minimizer of the actual asymptotic variance σ^2_f[U],
neither on 𝔘 (because an asymptotic variance smaller than σ^*_f can sometimes be achieved) nor on 𝔘_0 (because it does not hold that U_* ∈𝔘_0 in general).
To conclude this section, we note that the parallel between the i.i.d. and MCMC settings is not perfect:
while σ^2_f[U] and σ^2_f[U] coincide for U ∈𝔘_0 in the MCMC setting,
s^2_f[U] and s^2_f[U] do not generally coincide for U ∈𝒰_0 in the i.i.d. setting.
§.§ Optimal potential in the multi-dimensional setting via steepest descent
In the multi-dimensional setting,
obtaining an explicit expression for the optimal biasing U is not possible.
However, analogously to <cit.>, the functional derivative of the asymptotic variance with respect to the biasing potential can be expressed
in terms of the solution to a Poisson equation.
This enables a numerical strategy based on a steepest descent for finding a good biasing potential.
§.§.§ Computation of the functional derivative
In the following,
the directional derivative of a functional E: C^∞(^d) → at U∈ C^∞(^d)
in the direction δ U ∈(^d) is denoted by
Ẹ[U] ·δ U = lim_ε→ 01/ε(E[U+εδ U] - E[U]),
whenever the limit exists.
Suppose that U ∈𝔘_0 and let ϕ_U be as in <ref>.
Then for all δ U ∈(^d),
it holds that
1/2σ̣^2_f[U] ·δ U
:= 1/2lim_ε→ 01/ε(σ^2_f[U + εδ U] - σ^2_f[U])
= U^2/Z^2∫_^dδ U ( *∇ϕ_U^2 - ∫_^d*∇ϕ_U^2 μ̣_U) μ̣_U.
We first rewrite (<ref>) as
σ^2_f[U]
= 2U^2/Z^2∫_^dϕ_U (f-I) ^U μ̣_U
= 2U/Z^2∫_^dϕ_U (f-I)^-V,
so that the only factors depending on U are U and ϕ_U.
By definition of the functional derivative,
using (<ref>) for the first integral term on the right-hand side,
we have
1/2σ̣^2_f[U] ·δ U
= U·δ U/Z^2∫_^d*∇ϕ_U^2 ^-V-U
+ lim_ε→ 0U/ε Z^2∫_^d (ϕ_U + εδ U - ϕ_U) (f-I) ^-V,
where ϕ_U + εδ U is the solution to the perturbed Poisson equation
- ℒ_U + εδ Uϕ_U + εδ U
= (f- I) ^U + εδ U.
The function (f-I)^U+εδ U has zero mean with respect to μ_U + εδ U.
By <ref> and the fact that δ U ∈(^d),
it holds that (f-I)^U+εδ U∈ L^2(^-V-U-εδ U)
and that μ_U+εδ U satisfies a Poincaré inequality,
by the Holley–Stroock theorem.
Consequently, there exists a unique solution in L^2_0(^-V-U-εδ U) to (<ref>) by <ref>.
A simple calculation using the fact that δ U∈(^d) gives
U·δ U = - ∫_^dδ U ^-V-U.
For the second term on the right-hand side of (<ref>),
we have by <ref> that
1/ε∫_^d (ϕ_U + εδ U - ϕ_U) (f-I) ^-V
= - 1/ε∫_^d (ϕ_U + εδ U - ϕ_U) (ℒ_U ϕ_U) ^-V-U
= 1/ε∫_^d∇(ϕ_U + εδ U - ϕ_U) ·∇ϕ_U ^-V-U∫_^d∇ψ_U,δ U·∇ϕ_U ^-V-U
= ∫_^d (ℒ_U ψ_U,δ U) ϕ_U ^-V-U
= ∫_^d(- ^U+V∇·(^-U-Vδ U ϕ_U)) ϕ_U ^-V-U
= ∫_^dδ U |∇ϕ_U |^2 ^-V-U,
where ψ_U,δ U is the solution to the Poisson equation (<ref>).
The equalities before and after the limit follow from the definitions of ϕ_U and ψ_U,δ U as weak solutions to (<ref>) and (<ref>),
respectively.
The last inequality is obtained by integration by parts,
which is justified because δ U is compactly supported.
Combining this equation with (<ref>) and (<ref>),
we deduce (<ref>).
Before presenting the numerical method for approaching the optimal biasing potential U,
we mention two corollaries of <ref>.
Suppose that U ∈𝔘_0.
Then U is a critical point of the asymptotic variance viewed as a functional of U
if and only if the corresponding solution to the Poisson equation (<ref>) satisfies
*∇ϕ_U^2 = ∫_^d*∇ϕ_U^2 μ̣_U.
In other words, the norm of ∇ϕ_U is constant over ^d.
Let =.
Then there is no biasing potential U ∈ C^∞(^d) that is a critical point of the asymptotic variance σ^2_f[U].
Assume for contradiction that there is a smooth biasing potential U for which (<ref>) holds.
By elliptic regularity, the corresponding solution ϕ_U to the Poisson equation (<ref>) is also smooth and so,
by the extreme value theorem,
it attains its minimum at some point in ^d, where ∇ϕ_U vanishes.
By (<ref>), this implies that ∇ϕ_U = 0 for all x ∈^d and, therefore, - ℒ_U ϕ_U = 0 = (f-I) ^U.
This is a contradiction because, by <ref>,
the observable f is not everywhere equal to I.
<Ref> highlights a limitation of the target-oriented approach taken in this section,
as singular potentials are impractical at the numerical level and unlikely to be of any use for different observables.
This motivates the approach taken in <ref>,
which aims at finding a biasing potential U that leads to a reduction in variance for not just one but a family of observables.
§.§.§ Steepest descent method
To conclude this section,
we present an iterative approach for approximating a minimizer of σ^2_f[U].
We focus on the case where = for simplicity,
but the approach is easy to generalize to other settings (see <ref>).
Since an expression for the functional derivative of σ^2_f[U] is available by <ref>,
we employ a method based on steepest descent.
Each step of the method may be decomposed into three stages:
* First, an approximate solution to the Poisson equation (<ref>) is computed.
A number of numerical methods can be employed to this end.
Given that the optimal potential always exhibits singularities when =,
we opt for a finite difference approach rather than,
for example, a spectral method <cit.>.
The details of the finite difference method are presented in <ref>,
together with a convergence proof in the setting where U is regular.
* Then, from the solution to (<ref>),
an approximation of the gradient G of the asymptotic variance is calculated based on <ref>.
* Finally, the potential U is updated according to U ← U - η G,
where η is found by backtracking line search following Armijo's method <cit.>.
These steps are repeated until the L^2() norm of the gradient is sufficiently small.
A couple of comments are in order.
First, the expression of G depends on the considered Hilbert functional space.
By (<ref>), the functions
G_L^2(^-V-U) =
U/Z^2( *∇ϕ_U^2 - ∫*∇ϕ_U^2 μ̣_U),
G_L^2(^-V) =
U/Z^2( *∇ϕ_U^2 - ∫*∇ϕ_U^2 μ̣_U) ^-U,
G_L^2() =
U/Z^2( *∇ϕ_U^2 - ∫*∇ϕ_U^2 μ̣_U) ^-U-V,
are all ascent directions for σ^2_f[U],
corresponding to the gradients in L^2(^-V-U), in L^2(^-V) and in L^2(),
respectively.
Of these three options, the latter two are better suited for use in an optimization method.
Indeed, employing the L^2(^-V-U) derivative would lead to a change of metric with each update of U,
which precludes the use of methods that rely on information from multiple steps,
such as the Barzilai–Borwein method <cit.>.
In the numerical experiments presented in <ref>,
we use (<ref>),
but good results can also be obtained by using (<ref>).
Second,
the gradient needs to be discretized in practice.
In order to avoid convergence issues,
it is desirable that the discretized gradient is itself the gradient of a function.
Therefore, we use the discretization given in (<ref>),
which is guaranteed to be the gradient of an appropriate discretization of the asymptotic variance;
see <ref>.
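To summarize the procedure, here is a minimal self-contained sketch of the steepest descent in dimension one on the torus. It is a simplified variant of the method described above: the Poisson equation is solved with a basic conservative finite-difference scheme, the gradient is the L^2(dx) expression, and the Armijo search is replaced by a cruder rule that halves the step until the variance decreases. The data (V = 0, f = cos) and all numerical parameters are illustrative choices of ours; the scheme of the appendix and the full Armijo rule should be preferred in practice.

```python
import numpy as np

n = 256
h = 2 * np.pi / n
x = -np.pi + h * np.arange(n)
V = np.zeros(n)                                   # illustrative data: V = 0, f = cos(x)
f = np.cos(x)
Z = np.sum(np.exp(-V)) * h
I = np.sum(f * np.exp(-V)) * h / Z

def variance_and_gradient(U):
    """sigma^2_f[U] and its L^2(dx) gradient, via a finite-difference Poisson solve."""
    w = np.exp(-V - U)
    ZU = np.sum(w) * h
    whalf = np.exp(-0.5 * ((V + np.roll(V, -1)) + (U + np.roll(U, -1))))
    idx = np.arange(n)
    A = np.zeros((n, n))
    A[idx, (idx + 1) % n] = whalf
    A[idx, (idx - 1) % n] = np.roll(whalf, 1)
    A[idx, idx] = -(whalf + np.roll(whalf, 1))
    A /= h**2 * w[:, None]
    phi = np.linalg.lstsq(A, -(f - I) * np.exp(U), rcond=None)[0]
    dphi = (np.roll(phi, -1) - np.roll(phi, 1)) / (2 * h)
    mean_sq = np.sum(dphi**2 * w) * h / ZU
    sigma2 = 2 * (ZU / Z)**2 * mean_sq
    grad = 2 * ZU / Z**2 * (dphi**2 - mean_sq) * w
    return sigma2, grad

U = np.zeros(n)
sigma2, grad = variance_and_gradient(U)
print("initial sigma^2_f[U]:", sigma2)            # equals 1 for this example
for _ in range(100):
    eta = 2.0
    while eta > 1e-8:                             # crude backtracking line search
        s2_new, g_new = variance_and_gradient(U - eta * grad)
        if s2_new < sigma2:
            U, sigma2, grad = U - eta * grad, s2_new, g_new
            break
        eta /= 2
print("final sigma^2_f[U]:  ", sigma2)            # decreases; the lower bound here is 8/pi^2
```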
§ MINIMIZING THE ASYMPTOTIC VARIANCE FOR A CLASS OF OBSERVABLES
Assume that the set of observables of which we want to compute the expectation is well described by a Gaussian random field
f = ∑_j=1^J√(λ_j) u_j f_j,
u_j ∼𝒩(0, 1),
λ_j ∈ (0, ∞),
where (f_j)_1≤ j ≤ J are given real-valued functions on ^d, and the random variables (u_j)_1 ≤ j ≤ J are independent.
This equation defines a probability distribution ℱ on the space of observables
as the pushforward of the finite-dimensional Gaussian measure 𝒢 = 𝒩(0, _J) on ^J;
the probability measure ℱ assigns probability 1 to the linear span of (f_1, …, f_J).
One may wonder whether it is possible to minimize the average asymptotic variance for observables drawn from this distribution.
For clarity, we denote by _ℱ expectations with respect to observables.
Within this section,
we also use the notation
𝔘_0 = ⋂_j=1^J𝔘_0(V, f_j),
𝔘 = ⋂_j=1^J 𝔘(V,f_j),
where 𝔘_0(V,f_j) and 𝔘(V,f_j) are the sets defined before <ref> and after (<ref>), respectively.
These sets are assumed to be non-empty in this section.
Denoting by ϕ the solution in H^1(μ_U) ∩ L^2_0(μ_U) to
- ℒ_U ϕ = (f - I) ^U,
and assuming that U ∈𝔘_0,
we have by <ref> that
_ℱ[σ^2_f[U] ]
=_ℱ[ 2U/Z^2∫_^dϕ (f-I) ^-V]
= 2U/Z^2 _𝒢[ ∑_j=1^J∑_k=1^J u_j u_k √(λ_j λ_k)∫_^dϕ_k (f_j-I_j) ^-V],
where, for 1 ≤ j ≤ J,
the function ϕ_j is the unique solution in H^1(μ_U) ∩ L^2_0(μ_U) to the Poisson equation -ℒ_U ϕ_j = (f_j - I_j) ^U,
with I_j := μ(f_j).
Since _𝒢 [u_j u_k] = δ_jk,
we obtain by rearranging the previous expression that
σ^2[U] :=
_ℱ[σ^2_f[U] ]
= 2U/Z^2∑_j=1^Jλ_j ∫_^dϕ_j (f_j-I_j) ^-V
= ∑_j=1^Jλ_j σ^2_f_j[U].
Therefore, minimizing the expectation _ℱ [ σ^2_f ] amounts to minimizing the sum on the right-hand side of (<ref>).
§.§ Optimal biasing potential in the one-dimensional setting on the real line
In the one-dimensional setting with =,
an explicit expression for the infimum of the asymptotic variance can be obtained,
similar to that <ref> in the case of a single observable.
In order to state a precise result,
we introduce the notation
σ^2[U]
= ∑_j=1^Jλ_j σ^2_f_j[U],
where σ^2_g for an observable g was defined in (<ref>).
Let us recall that, by the reasoning in <ref>,
the quantity σ^2[U] coincides with σ^2[U] in (<ref>) when U ∈𝔘_0,
but σ[U] is well-defined more generally for any U ∈𝔘.
In <ref> below,
we give a bound from below on σ^2[U] in the particular case where d = 1 and =.
Before presenting this result,
we introduce the notation
F_j(x) = ∫_0^x(f_j(ξ)-I_j) ^-V(ξ)ξ̣,
A_,j = - ∫_-∞^0(f_j(ξ)-I_j) ^-V(ξ)ξ̣.
In the rest of this section,
F_j and A_,j always appear together as F_j - A_,j,
which can be rewritten as an integral over (-∞, x] with the same integrand as in the definition of F_j,
that is to say
F_j(x) - A_,j = ∫_- ∞^x (f_j(ξ) - I_j) ^-V(ξ) ξ̣.
For the sake of conciseness,
we could introduce a new notation to refer to F_j - A_,j,
but we refrain from doing so in order to keep the notation consistent with that used in <ref>.
Assume that d = 1 and =.
Then it holds that
min_U ∈𝔘σ^2[U]
= 2/Z^2( ∫_√(∑_j=1^Jλ_j | F_j - A_,j|^2 ))^2.
The minimum is achieved for
U = U_* := - V -log( √(∑_j=1^Jλ_j | F_j - A_,j|^2)) ∈𝔘.
In this case
^-V-U_* is proportional to √(∑_j=1^Jλ_j | F_j - A_,j|^2).
We first show that σ^2[U] is bounded from below by the right-hand side of (<ref>).
This is trivial if σ^2[U] is infinite,
so we assume from now on that σ^2[U] < ∞.
Using the definition of σ^2[U],
we have
σ^2[U] =
∑_j=1^Jλ_j σ^2_f_j[U]
= 2U/Z^2∫_( ∑_j=1^Jλ_j | F_j - A_,j|^2 ) ^V+U.
Here, we used that the infimum in the definition (<ref>) of σ^2_f_j is achieved for A = A_,j,
as explained in <ref>.
Let us introduce the notation
G = √(∑_j=1^Jλ_j | F_j - A_,j|^2).
Since σ^2[U] < ∞ by assumption,
the set over which G^2 ^V+U takes an infinite value is of zero Lebesgue measure.
Therefore, by the Cauchy–Schwarz inequality,
σ^2[U]
= 2U/Z^2∫_| G |^2 ^U+V≥2/Z^2(∫_| G |√(^U+V)√(^-V-U))^2
= 2/Z^2(∫_ G )^2.
The claim that U_* in (<ref>) achieves the lower bound can be verified by substitution in (<ref>).
Notice that, unless the functions (F_j - A_,j)_1≤ j ≤ J share a common root,
the optimal biasing potential U_* in (<ref>) is a smooth function.
In the same way that we obtained <ref> from <ref>,
we deduce from <ref> the following result.
The proof is a simple adaptation of that of <ref>, so we omit it.
Assume that d = 1 and =,
and suppose that 0 ∈𝔘_0.
Then
inf_U ∈𝔘_0σ^2[U]
= inf_U ∈ C^∞_ c()σ^2[U]
= 2/Z^2( ∫_√(∑_j=1^Jλ_j | F_j - A_,j|^2 ))^2.
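The optimal biasing potential for a family of observables is straightforward to tabulate in dimension one. The sketch below does so for an illustrative family of two observables with unit weights under a Gaussian reference measure; all choices of data are ours and serve only to show the computation.

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 160001)
h = x[1] - x[0]
trapz = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * h)

V = 0.5 * x**2                       # illustrative reference potential
observables = [x, x**2]              # f_1, f_2 (illustrative)
lams = [1.0, 1.0]                    # lambda_1, lambda_2

Z = trapz(np.exp(-V))
S = np.zeros_like(x)                 # sum_j lambda_j |F_j - A_j|^2
for lam, fj in zip(lams, observables):
    Ij = trapz(fj * np.exp(-V)) / Z
    g = (fj - Ij) * np.exp(-V)
    FmA = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * h)))
    S += lam * FmA**2

U_star = -V - 0.5 * np.log(S + 1e-300)               # optimal bias for the family
min_avg_variance = 2.0 / Z**2 * trapz(np.sqrt(S))**2
print("minimal average asymptotic variance:", min_avg_variance)
```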
§.§ Numerical optimization
A result similar to <ref> is not easily available when d=1 and =,
because the constant A_ given in (<ref>) depends on U.
In this case or in the multi-dimensional setting,
one can resort to a steepest descent approach in order to find the optimal biasing potential.
It is easy to prove,
based on <ref>,
that the functional derivative of σ^2[U] is given by
1/2σ̣^2[U] ·δ U
= U^2/Z^2∫_^d( δ U - ∫_^dδ U μ̣_U ) ( ∑_j=1^Jλ_j *∇ϕ_j^2 ) μ̣_U.
The approach presented in <ref> can then be applied mutatis mutandis.
Numerical experiments illustrating the potential found as a result of this procedure are presented in <ref>.
§.§ Free energy biasing
In the molecular dynamics literature,
variance reduction over a compact state space is often achieved by free energy biasing,
a heuristic approach which,
in the absence of coarse graining via a reaction coordinate,
amounts to setting U = -V;
see <cit.> and the references therein.
To conclude this section, we address the following related question:
is there a probability distribution ℱ on observables such that U = -V is a minimizer of the average asymptotic variance σ^2[U],
i.e. for which energy biasing is optimal?
While we will not be able to provide a definite answer to this question,
we shall prove in <ref> that,
for an appropriate probability measure on observables, the biasing U = -V corresponds to a critical point of σ^2[U].
This does not imply that U = -V is necessarily optimal,
because σ^2[U] is not convex in general;
see <ref> in <ref>.
Suppose that = with d=1 and assume,
for J ∈ 2_>0,
that (f_j, λ_j)_1 ≤ j ≤ J
are the J first eigenfunctions and eigenvalues of the operator 𝒦 = ^V(- + τ^2 𝕀)^-α^-V,
where α∈ (0, ∞) and τ∈ are parameters of the random field
and 𝒦 is viewed as a compact self-adjoint operator on the following space of functions defined on :
{ f ∈ L^2(^-2V) ∫_ f ^-V = 0 }.
Then the average asymptotic variance σ^2[U] admits a critical point for U = -V.
The eigenfunctions of the operator 𝒦 are given by
f_j(x) = ^V(x)sin( ((j+1)/2) x ) if j is odd,
f_j(x) = ^V(x)cos( (j/2) x ) if j is even.
Note that I_j = 0 for all 1 ≤ j ≤ J.
When U = -V,
the generator of the Markov semigroup associated with (<ref>) is just the Laplacian operator.
Therefore, for a given j ∈{1, …, J},
the solution to the Poisson equation - ℒ_U ϕ_j = (f_j - I_j) ^U is given by
ϕ_j(x) = ⌈j/2⌉^-2 ^-V(x) f_j(x), since the right-hand side (f_j - I_j) ^U = ^-V f_j is precisely the corresponding trigonometric function.
Since λ_j = λ_j+1 for all odd values of j
and since sin^2 + cos^2 = 1,
we deduce that the sum on the right-hand side of (<ref>) is constant,
implying that the functional derivative of σ^2[U] is zero when evaluated at the biasing potential U = -V.
The choice of the operator 𝒦 is motivated by the form (<ref>) of the desired observables,
which is itself motivated by the fact that these observables lead to Poisson equations with explicit trigonometric solutions when U = -V.
In this section,
we assumed for simplicity that the random observable admitted a finite expansion of the form (<ref>).
The results we obtained could in principle be extended to the case of a more general Gaussian field,
with an infinite Karhunen–Loève series.
For background on Gaussian variables in infinite dimension,
see for example <cit.>.
The reference <cit.> is also useful for understanding the regularity of infinite series of the form (<ref>).
§ EXAMPLES AND NUMERICAL EXPERIMENTS
We begin in <ref> by presenting examples and numerical experiments in dimension 1.
Then, in <ref>,
we present numerical experiments for the case where the state space is ^2 and the optimal biasing potential is approximated by steepest descent.
Finally, in <ref>,
we illustrate the approach proposed in <ref>.
§.§ One-dimensional examples
In this subsection,
we present a few examples illustrating the optimal biasing potential U for various observables and underlying potentials V.
In all the figures,
the optimal potential depicted is calculated numerically using the steepest descent approach presented in <ref>.
It is apparent in <ref> that this approach indeed yields the optimal biasing potential (<ref>),
an explicit expression of which is given in these examples.
The first few examples aim at illustrating settings where the optimal biasing potential exhibits different levels of singularity.
The optimal potential is smooth in <ref>,
it exhibits two singularity points in <ref>,
and it blows up over a whole interval in <ref>.
In these examples, the reference probability measure is unimodal,
and the gain in asymptotic variance obtained from using the optimal potential is small.
Finally, in example <ref>,
a multi-modal reference probability measure is considered,
and it is observed that importance sampling enables a considerable decrease in asymptotic variance in this case.
Without loss of generality,
we normalize in the figures the potentials U so that the minimum value of V+U over the domain considered is 0.
Assume that = and f = V'.
Then I = 0 and F(x) - A_ = ^-V(x);
in this case,
the optimal biasing potential (<ref>) is U_* = 0.
Assume that =, V = 0 and f(x) = cos(x).
Then I = 0 and F(x) = sin(x).
The constant A^*_ in (<ref>) is 0,
and so the optimal biasing potential (<ref>) is given by
U_*(x) = - log| sin(x) |.
This potential is illustrated in <ref>
together with the corresponding measure μ_U_*.
It is possible to show that μ_U_* does not satisfy a Poincaré inequality.
Indeed, let g_ε→ be given by
g_ε(x) = (x)log(sin(| x |) +ε/ε),
where we identify the torus with [-π, π].
The function g_ε is odd,
so its mean with respect to μ_U_* is zero,
and we observe that
∫_ g_ε'(x)^2 | sin(x) | x̣ = 2∫_0^πsin(x)(cos(x))^2/(sin(x) + ε)^2 x̣
≤ 2∫_0^π| cos(x) |/(sin(x) + ε) x̣
= 4∫_0^π/2cos(x)/(sin(x) + ε) x̣
= 4 log( (1 + ε)/ε ),
whereas
∫_g_ε(x)^2 sin(x) x̣≥ 2∫_π/4^3π/4g_ε(x)^2 sin(x) x̣≥πsin(π/4) g_ε(π/4)^2 .
Since the latter squared norm diverges faster than the former in the limit as ε→ 0,
it is impossible that a Poincaré inequality holds.
This can also be verified via Muckenhoupt's criterion;
see, for example, <cit.>.
Notice that the singularities divide the domain into the two regions [-π, 0] and [0, π],
but the average of f with respect to ^-V conditioned to either region is equal to I = 0,
in accordance with the discussion in <ref>.
When U = 0,
the solution to the Poisson equation (<ref>) is given by ϕ(x) = cos(x) and Z = 1,
so
the asymptotic variance (<ref>) is given by
σ^2_f[U = 0] = 2 ∫_-π^πsin(x)^2 x̣/2 π = 1.
The infimum (<ref>) of the asymptotic variance,
on the other hand,
is given by
2/Z^2(∫_| F(x) - A^*_|x̣)^2 =
2 (∫_|sin(x) |x̣/2 π)^2
= 8/π^2 = 0.810…
The optimal biasing potential therefore leads to a reduction in variance of about 19%.
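These values can also be checked by direct simulation. The sketch below, which is our own illustration and not the code used for the experiments reported in this section, discretizes the perturbed dynamics with an Euler–Maruyama scheme and compares the rescaled variance T · Var of the time-average estimator for U = 0 with the one obtained from the smoothed bias U(x) = -log(|sin x| + δ); the second is typically found to be smaller, in line with the reduction of about 19% computed above. The step size, trajectory length, number of replicas and the value of δ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n_steps, reps, delta = 1e-2, 40_000, 1_000, 0.2
T = dt * n_steps

def run(biased):
    """Euler-Maruyama trajectories; returns the self-normalized estimator per replica."""
    X = rng.uniform(-np.pi, np.pi, reps)
    num = np.zeros(reps)
    den = np.zeros(reps)
    for _ in range(n_steps):
        if biased:
            # drift -(V + U)' and weight e^U for U(x) = -log(|sin x| + delta), V = 0
            drift = np.cos(X) * np.sign(np.sin(X)) / (np.abs(np.sin(X)) + delta)
            weight = 1.0 / (np.abs(np.sin(X)) + delta)
        else:
            drift = np.zeros(reps)
            weight = np.ones(reps)
        num += np.cos(X) * weight
        den += weight
        X = X + drift * dt + np.sqrt(2 * dt) * rng.standard_normal(reps)
        # no wrapping to the torus is needed: every function of X used here is 2*pi-periodic
    return num / den

for biased in (False, True):
    est = run(biased)
    label = "smoothed optimal bias" if biased else "unbiased (U = 0)     "
    print(label, "  T * Var(estimator) ~", T * est.var())
```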
As we show above in <ref>,
singularities in the optimal biasing potential are inevitable when the state space is the torus in any dimension.
Moreover, it is possible to construct examples where the optimal measure μ_U_* is supported on a subset of ,
as shown in the following example.
Consider the case where = and V(x) = 0,
with the observable
f(x) =
sin(4 | x |) if | x |≥π/2,
0 otherwise.
Then I = 0 and
F(x) =
∫_0^x f(ξ) ^-V(ξ) ξ̣=
1/4(- 1 + cos(4 x)) if x ≤ -π/2,
1/4(1 - cos(4 x)) if x ≥π/2,
0 otherwise.
The constant A^*_ in (<ref>) is again zero,
and so the optimal biasing potential is U_*(x) = - log |F(x)|.
The optimal biasing potential and the corresponding measure μ_U_* for this example
are depicted in <ref>.
As the figure illustrates,
the Lebesgue density of μ_U_* with respect to the Lebesgue measure is zero over the interval [-π/2, π/2].
To conclude this section,
we present an example where using the biasing potential U_* leads to a significant decrease of the variance.
Consider again the case where =,
with this time V(x) = 5cos(2 x) and the observable f(x) = sin(x).
The reference dynamics, i.e. the dynamics (<ref>) with U = 0,
is metastable in view of the high potential barrier;
the asymptotic variance when U=0 for the observable considered,
estimated numerically from (<ref>),
is equal to 3459 after rounding to the closest integer.
The optimal total potential V + U_* and probability distribution μ_U_* are depicted in <ref>.
The asymptotic variance associated with U_* is about 3.64,
roughly 1000 times smaller than the asymptotic variance for the reference dynamics.
The numerical values of the asymptotic variance for the examples considered in this section and different choices of U are summarized in <ref>.
We also present in this table,
for each of the examples, the value of the asymptotic variance corresponding to case where U = - θ V,
for the value of θ that yields the largest variance reduction.
This approach is found to yield a variance reduction close to the optimal one in the setting of <ref>.
§.§ Two-dimensional examples
By <ref>,
the optimal potential when the domain is the two-dimensional torus ^2 also exhibits singularities.
In all the examples presented hereafter, these are line singularities.
In order to approximate the optimal potential in the numerical experiments presented in this section,
we use the steepest descent approach presented in <ref> with 150 × 150 discretization points.
We begin in <ref> by considering cases where
the reference dynamics does not suffer from metastability.
The gain in asymptotic variance provided by importance sampling is small in these settings.
Then, in <ref>, we consider a setting where the reference measure is multi-modal,
for which importance sampling leads to a considerable decrease in asymptotic variance.
We consider the case where the potential is V(x) = 0 and the observable is f(x) = sin(x_1) + sin(x_2).
This observable has average zero not only with respect to μ,
but also with respect to the restrictions of this measure to the subsets [-π/2, π/2] × [-π/2, π/2],
[-π/2, π/2] × [π/2, 3π/2], [π/2, 3π/2] × [-π/2, π/2],
and [π/2, 3π/2] × [π/2, 3π/2]
which together form a partition of ^2.
Here we identify subsets of ^2 with their image under the quotient map ^2 →^2.
Interestingly,
the total potential V+U corresponding to the optimal biasing potential exhibits singularities precisely at the boundaries between these regions,
effectively dividing the state space into four separate regions;
see <ref>.
It appears clearly from the right panel in the same figure that,
in agreement with <ref>,
the solution to the corresponding Poisson equation (<ref>) is affine by parts,
with discontinuities of the first derivative at singularities of V+U.
The reduction in asymptotic variance corresponding to the optimal potential in this case is only about 19%.
We now present an example with a non-uniform reference distribution μ.
In this example, we consider that the potential and observable are given by
V(x) = exp(cos(x_1) sin(x_2) + 1/5cos(3x_1)),
f(x) = sin(x_1 + cos(x_2))^3.
The potential V and observable f are illustrated respectively in the top left and right panels of <ref>.
The corresponding optimal total potential V+U,
together with the associated solution to the Poisson equation (<ref>),
are depicted in the bottom left and right panels respectively.
Once again, it appears that the optimal potential divides the domain into two separate regions where the averages of f are the same.
The reduction in asymptotic variance obtained by employing the perturbed dynamics (<ref>) is about 20%.
To conclude this section,
we present an example where the target probability distribution is multimodal,
in which case a considerable reduction of the asymptotic variance can be achieved.
We consider the case where V(x) = 2 cos(2 x_1) - cos(x_2)
and f(x) = sin(x_1).
The potential V(x) has two global minima located at (π/2, 0) and (-π/2, 0),
and the observable f(x) takes different values when evaluated at these points.
The optimal total potential V+U is illustrated in <ref>.
We observe two line singularities
which effectively divide the domain into two separate regions where the average of f is equal to I = 0.
The reduction in asymptotic variance obtained by employing the perturbed dynamics (<ref>) is about 86%.
The numerical values of the asymptotic variances for the examples considered in this section and different choices of U are collated in <ref>.
The asymptotic variances corresponding to the simple biasing with U = - θ V with optimal θ are also presented.
This approach is found to perform quite well in the multimodal setting of <ref>.
In all the examples presented in this subsection,
the optimal biasing potential effectively partitions the domain into several regions that suffice for the estimation of I.
It is natural to wonder whether this observation holds true in general:
is it always the case that,
when such partitioning of the domain occurs,
averages of the observable with respect to the corresponding conditioned measures coincide with the target average I?
We gave in <ref> a positive answer to this question in the one-dimensional setting.
Although we are not able to provide an equally rigorous answer in the multi-dimensional setting,
we motivate hereafter our belief that the answer is also positive in this case.
To this end, suppose that U_* partitions the domain into a number of regions,
corresponding to the connected components of {U_* < ∞}.
Suppose also that there exists an ensemble (U_ε)_ε > 0 of smooth biasing potentials such that
σ^2_f[U_ε] →σ^2_f[U_*] and ^-V-U_ε→^-V-U_* in L^∞(^d) as ε→ 0.
In particular, it holds under this assumption that U_ε(x) → U_*(x) for almost all x ∈{U_* < ∞}.
It is well known, see e.g. <cit.>,
that the average escape time from a potential well for the overdamped Langevin dynamics (<ref>) scales exponentially with respect to the height of the potential barrier.
Therefore, for very small ε,
it would take a very long time for the dynamics to visit all the regions of the state space.
In these conditions,
the asymptotic variance would be very large,
unless the averages of the observable with respect to the probability measure μ conditioned to each of the regions happen to coincide.
More precisely, if σ^2_f[U_ε] does not diverge as ε→ 0,
then the conditional averages in all the regions must necessarily coincide.
§.§ Minimizing the asymptotic variance for a class of observables
In this section,
we illustrate the approach proposed in <ref>,
first for a one-dimensional example and then for a two-dimensional example.
We consider the same potential as in <ref>,
i.e. V(x) = 5 cos(2x),
and a set-up similar to that of <ref>.
Specifically, the observables
and associated weights,
denoted by (λ_j)_1 ≤ j ≤ J in (<ref>),
are given by the first J = 21 eigenpairs of the operator (- + ℐ)^-1,
equipped with periodic boundary conditions on the space of mean-zero functions with respect to the Lebesgue measure.
The associated Gaussian random field f is stationary,
in the sense that the covariance cov(f(x_1), f(x_2)) depends only on the difference x_1 - x_2.
The optimal potential in this case is illustrated in the left panel of <ref>.
In contrast with the examples of <ref>,
the optimal potential is smooth and, therefore, more easily usable in an MCMC scheme.
The average asymptotic variance σ^2[U] given in (<ref>) is reduced by a factor equal to about 900.
In the right-panel of <ref>,
we illustrate the optimal potential when the observables are instead the eigenfunctions
of ^V (- + ℐ)^-1^-V equipped with periodic boundary conditions,
which is precisely the setting considered in <ref>.
In this case, the optimal potential is indeed V+U = 0,
in agreement with the latter result.
The average asymptotic variance σ^2[U] is reduced by a factor equal to about 700.
We consider the same potential as in <ref>,
i.e. V(x) = 2 cos(2 x_1) - cos(x_2).
For the observables and corresponding weights in (<ref>),
we take the eigenpairs of the operator (- + ℐ)^-1 with periodic boundary conditions on the space of mean-zero functions with respect to the Lebesgue measure.
The eigenfunctions are of the form
cos(m x_1) cos(n x_2),
cos(m x_1) sin(n x_2),
sin(m x_1) cos(n x_2),
sin(m x_1) sin(n x_2).
We consider all the eigenpairs with m ≤ 4 and n ≤ 4.
The optimal potential V + U_* in this case is depicted in <ref>,
together with the initial potential V.
We observe that the potential has been flattened in the direction x.
The resulting reduction in the average asymptotic variance is about 70%.
§ CONCLUSIONS AND PERSPECTIVES FOR FUTURE WORKS
In this work,
we considered an importance sampling method based on the overdamped Langevin dynamics in a perturbed potential
and presented a novel approach for constructing the biasing potential.
Under appropriate assumptions,
this potential is optimal,
in the sense that it leads to the minimum asymptotic variance
when employed for calculating the average of one or a class of observables with respect to the target probability measure.
The optimal biasing potential is explicit in dimension 1,
and may be approximated by steepest descent in higher dimensions.
We demonstrated the performance of the method by means of numerical experiments in dimensions 1 and 2.
In the multimodal setting, in particular,
using the optimal importance distribution enables a considerable reduction in asymptotic variance.
Finally, our numerical experiments show that,
while minimizing the asymptotic variance for just one observable leads to singularities in the potential,
targeting a number of observables simultaneously leads to smooth potentials which can more easily be employed in numerical schemes.
A drawback of the proposed methodology is that the construction of the optimal biasing potential relies on an iterative method which,
at each step, requires the solution of a Poisson equation.
While feasible in low dimension,
this approach is computationally too costly in a high-dimensional setting.
A possible approach in this case is to
reduce the dimension of the problem by requiring that the biasing potential
is a function of only a few well-chosen degrees of freedom (so-called collective variables),
which ideally capture the metastable behavior of the dynamics.
This corresponds to the setting of free energy computation <cit.>,
and suggests to consider the variance as a functional of some free energy,
which particularly makes sense when the observable under investigation itself depends only on the collective variables.
Investigation of this approach will be the subject of future work.
Another direction for future work would be to investigate whether a similar approach can be employed to minimize the asymptotic variance of estimators based on discrete-time MCMC schemes using overdamped Langevin dynamics.
We expect the resulting optimal biasing potentials in that case to be close to those considered here.
§ TECHNICAL AUXILIARY RESULTS
In this section,
we collect technical auxiliary results used in <ref>.
Consider the setting where = in dimension d=1 and V = 0,
with the observable f on [-π, π] given by
f(x) =
sgn(x) if | x |≥π/2,
0 otherwise,
where the function sgn is defined in (<ref>).
Here we identify [-π, π] with its image under the quotient map →.
In this case I = 0 and we have the following:
* If U is such that ^-U = 1_[-π/4, π/4],
where 1_S is the indicator function of the set S,
then it holds that f(X^n) = I and ^U(X^n) = 1 with probability 1 for X^n ∼μ_U,
and so
s^2_f[U] = 0.
This is in agreement with the first item in <ref>.
In this particular case, 0 is not only the infimum but also the minimum of the asymptotic variance over 𝒰.
* The variance in (<ref>) is given by
s^*_f := ( 1/(2π)∫_| f-I |)^2 = 1/4.
* The potential U_*^ iid in (<ref>) is given by U_*^ iid = - log| f |.
If X^n ∼μ_U_*^ iid, then the random variable f(X^n) is equal to either -1 and 1,
each with probability 1/2,
and the random variable (^U)(X^n) is equal to 1 almost surely.
Therefore, the associated asymptotic variance is given by
s^2_f[U_*^ iid] = 1.
This equation can also be obtained from (<ref>).
We observe that s^2_f[U_*^ iid] > s^*_f,
which is consistent with the discussion in <ref>.
* Let U_ε := - log( | f | + ε),
which may be viewed as a discontinuous but bounded regularization of U_*^ iid.
Then, for X^n ∼μ_U_ε,
using the notation w. p. to mean “with probability”,
we have that
(f e^{U_ε})(X^n) =
1/(1 + ε)    w. p. (1 + ε)/(2 + 4 ε),
0            w. p. 2ε/(2 + 4 ε),
- 1/(1 + ε)  w. p. (1 + ε)/(2 + 4 ε),
(e^{U_ε})(X^n) =
1/(1 + ε)    w. p. (1 + ε)/(2 + 4 ε),
1/ε          w. p. 2ε/(2 + 4 ε),
1/(1 + ε)    w. p. (1 + ε)/(2 + 4 ε).
It follows that the variance of (f e^{U_ε})(X^n) is given by
(1 + ε)/(1 + 2 ε) · ( 1/(1 + ε) )^2 = 1/( (1 + 2 ε)(1 + ε) ),
and that E[ e^{U_ε}(X^n) ] = 2/(1 + 2 ε).
Therefore, by Slutsky's lemma,
or from Equation (<ref>),
we obtain
that
s^2_f[U_ε] = (1 + 2ε)/(4 + 4 ε).
We observe that s^2_f[U_ε] → s^*_f in the limit as ε→ 0.
This example shows that the biasing potential U_*^ iid is sometimes suboptimal in 𝒰_0;
here we constructed a regularized biasing potential associated with a smaller asymptotic variance than that associated with U_*^ iid.
Furthermore, this example illustrates that the quantity s^*_f is not in general a lower bound on the asymptotic variance
over the set of biasing potentials in 𝒰.
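As a concrete check of the computations in this example, the following short Python sketch estimates s^2_f[U_ε] empirically by repeating the self-normalized importance-sampling estimator with iid draws from μ_{U_ε}; the rejection sampler, the value of ε, the sample sizes and the random seed are our own illustrative choices and are not taken from the numerical experiments of this work.

import numpy as np

rng = np.random.default_rng(0)
eps = 0.1

def f(x):
    # f equals sgn(x) on |x| >= pi/2 and vanishes elsewhere, as in the example above.
    return np.sign(x) * (np.abs(x) >= np.pi / 2)

def sample_mu_U_eps(n):
    # Rejection sampling from the density proportional to exp(-U_eps) = |f| + eps
    # on [-pi, pi] (recall V = 0), with a uniform proposal and envelope 1 + eps.
    out = np.empty(0)
    while out.size < n:
        x = rng.uniform(-np.pi, np.pi, size=2 * n)
        u = rng.uniform(0.0, 1.0 + eps, size=2 * n)
        out = np.concatenate([out, x[u <= np.abs(f(x)) + eps]])
    return out[:n]

def estimate(n):
    # Self-normalized importance-sampling estimate of I = mu(f) from n iid draws.
    x = sample_mu_U_eps(n)
    weights = 1.0 / (np.abs(f(x)) + eps)      # values of e^{U_eps} at the samples
    return np.sum(f(x) * weights) / np.sum(weights)

n, repetitions = 2000, 2000
values = np.array([estimate(n) for _ in range(repetitions)])
print("empirical n * Var :", n * values.var())
print("predicted value   :", (1 + 2 * eps) / (4 + 4 * eps))

The empirical rescaled variance should be close to the predicted value (1+2ε)/(4+4ε) ≈ 0.27 for ε = 0.1.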
Suppose that <ref> is satisfied, that the state space is the torus 𝕋^d, and that X_0 ∼μ_U.
Then there exists a unique solution ϕ_U in L^2_0(μ_U) to (<ref>)
and it holds that
√(N) ( μ^N_U(f) - I )
⟶ 𝒩(0, σ^2_f[U]),
where σ^2_f[U] is given by (<ref>).
The assumptions that the state space is the torus and that X_0 ∼μ_U should be viewed as technical;
it should in principle be possible to relax them.
We begin by showing the existence and uniqueness of a solution in L^2_0(μ_U) to the Poisson equation (<ref>).
To this end,
we recall that, under <ref>,
‖ e^{t ℒ_U} ‖_ℬ(L^2_0(μ_U)) ≤ e^{- R[U] t},
where ℬ(L^2_0(μ_U)) is the Banach space of continuous linear operators on L^2_0(μ_U) and R[U] is the Poincaré constant in (<ref>) associated with μ_U;
see e.g. <cit.> and <cit.>.
Therefore,
the Neumann series ∑_n=0^∞^nτℒ_U is convergent in L^2_0(μ_U),
which implies that ℐ - ^τℒ_U is invertible with inverse (ℐ - ^τℒ_U)^-1 equal to the series.
Therefore,
there exists a unique solution ϕ_U ∈ L^2_0(μ_U) to (<ref>).
The estimator (<ref>) may be rewritten as
μ_U^N(f)
= I + ∑_n=0^N-1 g(X_n τ)/∑_n=0^N-1 (^U)(X_n τ),
g := (f-I) ^U.
The key idea
in order to understand the asymptotic behavior of the numerator in (<ref>) is to rewrite the sum as
∑_n=0^N-1 g(X_n τ)
= ∑_n=0^N-1( ( ℐ - ^τℒ_U) ϕ_U )(X_n τ)
= ∑_n=0^N-1( ϕ_U(X_(n+1)τ) - ^τℒ_Uϕ_U(X_nτ) )
- ϕ_U(X_Nτ) + ϕ_U(X_0).
This approach dates back to the work of Kipnis and Varadhan <cit.>.
The first term is a sum of uncorrelated, identically distributed random variables with mean zero and variance
γ^2_f[U] :=
( |ϕ_U(X_τ) - ^τℒ_Uϕ_U(X_0) |^2 )
= ∫_^d(( ^τℒ_U|ϕ_U|^2 )(x) - |( ^τℒ_Uϕ_U )(x) |^2) μ_U(x̣)
= ∫_^d( |ϕ_U(x)|^2 - |( ^τℒ_Uϕ_U )(x) |^2 ) μ_U(x̣),
where we used the invariance of μ_U by the dynamics with generator ℒ_U for the first term in the last integral.
Since ϕ_U is a solution to (<ref>),
it holds that ^τℒ_Uϕ_U = ϕ_U - g,
which by substitution gives that
γ^2_f[U] = ∫_^d 2ϕ_U g - g^2 μ̣_U.
Using an approach similar to that in <cit.>,
we can show that the conditions of the martingale central limit theorem <cit.>
(see also <cit.> for a detailed pedagogical proof) are satisfied,
and so it holds that
1/√(N) ∑_n=0^N-1 ( ϕ_U(X_(n+1)τ) - e^{τℒ_U} ϕ_U(X_nτ) ) ⟶ 𝒩(0, γ^2_f[U]).
Hence, since it is clear in the setting where the state space is the torus that
1/√(N) ( ϕ_U(X_Nτ) - ϕ_U(X_0) )
⟶ 0,
it follows from Slutsky's lemma that
1/√(N) ∑_n=0^N-1 g(X_n τ) ⟶ 𝒩(0, γ^2_f[U]).
A similar approach, based on the Poisson equation
- ℒ_U ψ_U = e^{U} - Z/Z[U],
of which the right-hand side is in L^2_0(μ_U) by the assumption that the state space is the torus,
can be employed to understand the asymptotic behavior of the denominator in (<ref>).
Specifically, it holds that
1/N∑_n=0^N-1(^U(X_n τ) - Z/U)
= 1/N∑_n=0^N-1( ψ_U(X_(n+1)τ) - ^τℒ_Uψ_U(X_nτ) )
- ψ_U(X_Nτ) + ψ_U(X_0)/N.
An explicit calculation,
using that the first term on the right-hand side is a sum of uncorrelated, identically distributed random variables,
gives that the variance of the right-hand side converges to 0 in the limit as N →∞,
implying the convergence
1/N ∑_n=0^N-1 e^{U}(X_n τ) ⟶ Z/Z[U].
The proof can then be concluded by using Slutsky's lemma once more.
Suppose that <ref> is satisfied and that δ U ∈(^d),
and let ϕ_{U + εδ U} denote the solution to the Poisson equation (<ref>) posed in L^2_0(e^{-V-U-εδ U}).
Then
( ∇ϕ_{U + εδ U} - ∇ϕ_U )/ε ⟶ ∇ψ_{U,δ U} in L^2(μ_U),
where ψ_{U,δ U} denotes the unique solution in H^1(μ_U) ∩ L^2_0(μ_U) to
-ℒ_U ψ_{U,δ U} = (f-I) e^{U} δ U - ∇ (δ U) ·∇ϕ_U
= - e^{U+V} ∇·( e^{-U-V} δ U ∇ϕ_U ).
By integration by parts,
which is allowed since ϕ_U ∈ C^∞(^d) and δ U ∈(^d),
we can check that the right-hand side of (<ref>) is indeed mean zero with respect to μ_U:
- ∫_^d e^{U+V} ∇·( e^{-U-V} δ U ∇ϕ_U ) dμ_U
= - 1/Z[U] ∫_^d ∇·( e^{-U-V} δ U ∇ϕ_U ) dx
= 0.
Therefore, there indeed exists a unique distributional solution in H^1(μ_U) ∩ L^2_0(μ_U) to (<ref>) by the Lax–Milgram theorem.
Between the Poisson equations (<ref>) and (<ref>),
both the operator and the right-hand side differ.
We begin by rewriting
ℒ_U + εδ U = ℒ_U - ε∇ (δ U) ·∇.
Let ψ_ε = ε^-1 (ϕ_U+εδ U - ϕ_U).
It holds that
-ℒ_U + εδ Uψ_ε
= (f-I) ^U + εδ U - ^U/ε - ∇ (δ U) ·∇ϕ_U.
The right-hand side is mean zero with respect to μ_U+εδ U by construction,
and so by the Lax–Milgram theorem there exists a unique distributional solution in H^1(μ_U+εδ U) ∩ L^2_0(μ_U+εδ U) to (<ref>),
which coincides with ψ_ε up to an additive constant.
Subtracting (<ref>) from (<ref>),
we deduce that
-ℒ_U + εδ U (ψ_ε - ψ_U,δ U)
= - (ℒ_U - ℒ_U + εδ U) ψ_U,δ U
+ (f-I) (^U + εδ U - ^U/ε - ^U δ U )
=: ζ_ε.
The second term on the right-hand side converges to 0 in L^2(μ_U) in the limit as ε→ 0,
as does the first term in view of (<ref>).
By the Holley–Stroock theorem,
the probability measure μ_U+εδ U satisfies the Poincaré inequality (<ref>) with a constant R[U+εδ U]
that converges to R[U] in the limit ε→ 0.
Consequently, we deduce from the standard stability estimate (<ref>) that
∇ (ψ_ε - ψ_U,δ U)_L^2(μ_U+εδ U)≤ζ_ε_L^2(μ_U+εδ U)/R[U+εδ U].
Since the right-hand side converges to 0 in the limit ε→ 0,
so must the left-hand side,
which leads to the convergence ∇ (ψ_ε - ψ_U,δ U)_L^2(μ_U)→ 0
given the equivalence between the norms of L^2(μ_U) and L^2(μ_U+εδ U).
This concludes the proof.
One may wonder whether the statement (<ref>) can be strengthened to
ϕ_U + εδ U - ϕ_U/εψ_U,δ U in H^1(μ_U).
The answer to this question is negative.
Indeed, assume by contradiction that (<ref>) holds.
Then in particular ϕ_U + εδ U - ϕ_U → 0 in L^2(μ_U) in the limit as ε→ 0 and so
∫_^dϕ_U + εδ U - ϕ_U/ε μ̣_U = 1/ε∫_^dϕ_U + εδ Uμ̣_U
= 1/Z[U]∫_^dϕ_U+εδ U( ^-V-U - ^-V-U- εδ U/ε)
∫_^dϕ_U δ U μ̣_U,
where we used that ϕ_U and ϕ_U + εδ U are mean-zero with respect to ^-V-U and ^-V-U-εδ U,
respectively.
This is a contradiction because (<ref>) implies that
∫_^dϕ_U + εδ U - ϕ_U/εμ̣_U∫_^dψ_U,δ Uμ̣_U = 0,
and so (<ref>) does not hold.
It is, however, simple to show that ϕ_U + εδ U→ϕ_U in L^2(μ_U) in the limit as ε→ 0.
Additionally, it holds by <ref> and the Poincaré inequality that
ϕ_U + εδ U - ϕ_U/ε - ∫_^dϕ_U + εδ U - ϕ_U/ε μ̣_U ψ_U,δ U in H^1(μ_U),
but these statements are not useful for our purposes in this paper.
Since ψ_U,δ U is a weak solution to (<ref>),
it holds for every δ W ∈(^d) that
∫_^d ∇ψ_{U,δ W} ·∇ψ_{U,δ U} dμ_U
= ∫_^d ψ_{U,δ W} ( - e^{U+V} ∇·( e^{-U-V} δ U ∇ϕ_U ) ) dμ_U
= ∫_^d δ U ∇ϕ_U ·∇ψ_{U,δ W} dμ_U,
where integration by parts is justified because δ U ∈(^d).
This equality, where the roles of δ U and δ W can be reversed,
is useful in the proof of <ref> below.
In dimension d = 1,
it follows from (<ref>) that
ψ'_{U,δ U} = δ U ϕ'_U + C_[U,δ U] e^{V+U},
for some constant C_[U,δ U] such that ψ'_{U,δ U} ∈ L^2(μ_U).
When the state space is ℝ, clearly C_[U, δ U] = 0.
When the state space is 𝕋,
we obtain the value of C_ [U,δ U] by requiring periodicity,
that is
0 = ∫_ψ_U,δ U'
= ∫_δ_U ϕ_U' + C_[U, δ U] ∫_^V+U,
which leads to
C_[U, δ U]
= - ∫_δ U ϕ_U'/∫_^V+U.
We note that this formula may also be obtained by considering the differential of (<ref>) viewed as a functional of U,
an approach which reveals that C_[U, δ U] = Ạ_[U] ·δ U.
§ CONNECTION BETWEEN THE CONSTANTS A
In this section,
we discuss the links between the constants defined in (<ref>) and (<ref>).
Connection between A_ and A_.
The constant A_^* = A_ is recovered as a limit of A^*_ for an increasingly large torus.
More precisely, it holds that
A_ =
lim_L →∞∫_-L^L F ^V+U/∫_-L^L^V+U.
Indeed, for any ℓ > 0 and L > ℓ,
it holds that
∫_-L^L (F-A_) ^V+U/∫_-L^L^V+U
= ∫_[-L, -ℓ) ∪ (ℓ,L] (F-A_) ^V+U/∫_[-L, -ℓ) ∪ (ℓ,L]^V+U∫_[-L, -ℓ) ∪ (ℓ,L]^V+U/∫_[-L, L]^V+U
+
∫_[-ℓ, ℓ] (F - A_) ^V+U/∫_[-ℓ, ℓ]^V+U∫_[-ℓ, ℓ]^V+U/∫_[-L, L]^V+U.
The right-hand side is a convex combination of the averages of F- A_ restricted to the sets [-L, -ℓ) ∪ (ℓ,L], for the first term,
and [-ℓ, ℓ], for the second term.
In the proof of <ref>,
we proved that
∫_^U+V = ∞.
Therefore, since F is uniformly bounded,
the second summand on the right-hand side of (<ref>) converges to 0 in the limit as L →∞,
and so
lim sup_L →∞*∫_-L^L F ^V+U/∫_-L^L^V+U - A_
=
lim sup_L →∞*∫_[-L, -ℓ) ∪ (ℓ,L] (F - A_) ^V+U/∫_[-L, -ℓ) ∪ (ℓ,L]^V+U≤sup_x≥ℓF(x) - A_.
Since lim_x→∞ F(x) = A_ by definition of A_ in (<ref>),
the right-hand side of this equation can be made arbitrarily small by taking ℓ sufficiently large,
and so the limit (<ref>) follows.
Connection between A^*_ and A^*_.
The constant A_^* = A_ coincides with
lim_ℓ→∞sup{ A ∈ : ∫_-ℓ^ℓ (F-A) ≥ 0 }.
Indeed, since lim_|x| →∞ F(x) = A_,
there exists for any ε > 0 a constant ℓ_ε > 0 such that
∀ℓ≥ℓ_ε, ∫_-ℓ^ℓ (F - A_ + ε) > 0 and ∫_-ℓ^ℓ (F - A_ - ε) < 0.
Therefore, for all ℓ≥ℓ_ε,
the supremum in (<ref>) is contained in the interval [A_ -ε, A_ + ε].
Since ε was arbitrary, the claim is proved.
§ SECOND VARIATION OF THE ASYMPTOTIC VARIANCE
Since the method we propose in <ref> relies on a steepest descent for the asymptotic variance
viewed as a functional of U,
it is natural to wonder whether this functional is convex,
in order to provide guarantees on the convergence of the method.
We provide a partial answer to this question in <ref> and <ref> below.
Specifically, we prove that the asymptotic variance is convex when the domain is the one-dimensional real line but possibly non-convex when the domain is the one-dimensional torus 𝕋.
We have not managed to prove or rule out the convexity of the asymptotic variance in the multi-dimensional setting.
We emphasize that the convexity of the asymptotic variance in the case where the domain is ℝ
does not imply the uniqueness (up to an additive constant) of the minimizer.
The most straightforward example is that of the constant observable,
in which case the asymptotic variance is equal to 0 for any smooth biasing potential U.
Suppose that <ref> is satisfied and let ϕ_U be the solution to the Poisson equation as in <ref>.
Then, for all δ U, δ W ∈(^d),
it holds that
1/2(̣σ̣^2_f[U] ·δ U) ·δ W
= Z[U]/Z^2 ∫_^d δ U_0 δ W_0 ( |∇ϕ_U|^2 + ∫_^d |∇ϕ_U|^2 dμ_U ) e^{-V-U}
-2 Z[U]/Z^2 ∫_^d ( ∇ψ_{U,δ U_0} - δ U_0 ∇ϕ_U ) ·( ∇ψ_{U,δ W_0} - δ W_0 ∇ϕ_U ) e^{-V-U},
where, for a perturbation δ X ∈{δ U, δ W},
δ X_0 := δ X - μ_U(δ X) , μ_U(δ X) := ∫_^dδ X μ̣_U.
and ψ_U,δ X_0∈ H^1(μ_U) ∩ L^2_0(μ_U) is the solution to (<ref>) with δ U = δ X_0.
In addition, the second term in (<ref>) is zero in dimension 1 when the domain is ℝ,
and so the asymptotic variance σ^2_f[U] is a convex functional in this case.
We begin by rewriting the expression (<ref>) as
1/2σ̣^2_f[U] ·δ U
= 1/Z^2∫_^d(Uδ U - ∫_^dδ U ^-V-U) *∇ϕ_U^2 ^-V-U
= Z[U]/Z^2∫_^dδ U *∇ϕ_U^2 ^-V-U
- (∫_^dδ U ^-V-U) σ^2_f[U]/2U
=: T_1[U;δ U] + T_2[U;δ U].
Using the chain rule, we have
Ṭ_1[U;δ U] ·δ W
= -1/Z^2∫_^dδ W ^-V-U∫_^dδ U*∇ϕ_U^2 ^-V-U
+ lim_ε→ 0U/ε Z^2∫_^dδ U (*∇ϕ_U + εδ W^2 - *∇ϕ_U^2) ^-V-U
- U/Z^2∫_^dδ U δ W *∇ϕ_U^2 ^-V-U.
Similarly, for the second term we obtain
Ṭ_2[U;δ U] ·δ W
= 1/Z^2∫_^dδ U δ W ^-V-U∫_^d*∇ϕ_U^2 ^-V-U
- ∫_^dδ U ^-V-U(σ^2_f[U]/2U) ·δ W.
The functional derivative in the last term on the right-hand side is calculated as in the proof of <ref>;
specifically,
(σ^2_f[U]/2U) ·δ W
= (1/Z^2∫_^dϕ_U (f-I) ^-V) ·δ W
= 1/Z^2∫_^dδ W |ϕ_U |^2 ^-V-U.
By <ref> and the fact that δ U ∈(^d),
we have that
1/ε∫_^dδ U (*∇ϕ_U + εδ W^2 - *∇ϕ_U^2) ^-V-U
=
∫_^dδ U
(∇ϕ_U + εδ W - ∇ϕ_U/ε)
·(∇ϕ_U + εδ W + ∇ϕ_U)
^-V-U
2∫_^dδ U ∇ψ_U,δ W·∇ϕ_U ^-V-U.
Collecting all the terms,
we obtain
1/2(̣σ̣^2_f[U] ·δ U) ·δ W
= -1/Z^2∫_^dδ U ^-V-U∫_^dδ W*∇ϕ_U^2 ^-V-U
-1/Z^2∫_^dδ W ^-V-U∫_^dδ U*∇ϕ_U^2 ^-V-U
+ 2U/Z^2∫_^dδ U ∇ψ_U,δ W·∇ϕ_U ^-V-U
+ U/Z^2∫_^dδ U δ W ( - *∇ϕ_U^2 + ∫_^d*∇ϕ_U^2 μ̣_U) ^-V-U.
By rewriting the last term on the right-hand side as
U/Z^2∫_^dδ U δ W ( *∇ϕ_U^2 + ∫_^d*∇ϕ_U^2 μ̣_U) ^-V-U
- 2U/Z^2∫_^dδ U δ W *∇ϕ_U^2 ^-V-U,
and substituting δ U δ W = δ U_0 δ W_0 + δ U μ_U(δ W) + δ W μ_U(δ U) - μ_U(δ U) μ_U(δ W) in the first term of the latter expression,
the second variation may be further simplified to
1/2(̣σ̣^2_f[U] ·δ U) ·δ W
= U/Z^2∫_^dδ U_0 δ W_0 ( *∇ϕ_U^2 + ∫_^d*∇ϕ_U^2 μ̣_U) ^-V-U
+ 2 U/Z^2∫_^dδ U ∇ϕ_U · (∇ψ_U,δ W - δ W ∇ϕ_U) ^-V-U.
Using (<ref>),
both for ψ_U,δ U and ψ_U,δ W,
we obtain
∫_^dδ U ∇ϕ_U ·(∇ψ_U,δ W - δ W ∇ϕ_U) ^-V-U
= - ∫_^d( ∇ψ_U,δ U - δ U ∇ϕ_U ) ·( ∇ψ_U,δ W - δ W ∇ϕ_U ) ^-V-U.
From (<ref>),
it is simple to see that ∇ψ_U,δ U = ∇ψ_U,δ U_0 + ∇ψ_U, μ_U(δ U) = ∇ψ_U,δ U_0 + μ_U(δ U) ∇ϕ_U.
Similarly, ∇ψ_U,δ W = ∇ψ_U,δ W_0 + μ_U(δ W) ∇ϕ_U.
Substituting these expressions in (<ref>) leads to the claimed result (<ref>).
One-dimensional setting.
In dimension 1 when the domain is ℝ,
it holds that ψ_U,δ W' = δ W ϕ_U' by (<ref>),
and so the second term in (<ref>) cancels out,
which proves the last part of the statement.
Since all the terms on the right-hand side of (<ref>) depend only on δ U_0,
the second variation is invariant under vertical shift of δ U,
in the sense that, formally,
∀ C ∈, (σ̣^2_f[U] · (δ U + C)) · (δ U + C) = (̣σ̣^2_f[U] ·δ U) ·δ U.
This property had to hold a priori because σ^2_f[U] is itself invariant under addition of constants to U,
and so we could have assumed that μ_U(δ U) = 0 from the beginning of the proof without loss of generality.
The optimal biasing potential is known explicitly by <ref> in the one-dimensional setting,
so <ref> is of little direct importance in this case.
Nonetheless, the result provides understanding for the numerical experiments using the formula of the directional derivative.
The asymptotic variance is not a convex functional when the domain is 𝕋 and d = 1.
Indeed, we construct in this remark a potential V, a smooth function ϕ,
and a direction δ U such that
the second variation of the asymptotic variance σ^2_f for the observable f = -ℒϕ
(with ℒ the generator ℒ_U given in (<ref>) with U = 0)
in the direction δ U is negative when evaluated at the biasing potential U = 0.
In the setting we consider, since ϕ is the solution to (<ref>) when U = 0,
it holds by <ref> that
ψ'_{U=0,δ U_0} = δ U_0 ϕ'
- ( ∫_𝕋 δ U_0 ϕ' / ∫_𝕋 e^{V} ) e^{V}.
Here δ U_0 := δ U - μ(δ U).
Therefore, by substitution in (<ref>) we have that
1/2(σ̣^2_f[0] ·δ U) ·δ U
= 1/Z( ∫_δ U_0^2 ( *ϕ'^2 + ∫_*ϕ'^2μ̣) ^-V
- 2 (∫_δ U_0 ϕ' )^2/∫_^V),
The right-hand side of this equation is not always positive.
In order to show this,
consider the case where
δ U_0 = ϕ' ^V.
Note that δ U_0 indeed has average 0 with respect to μ
since
∫_δ U_0 ^-V = ∫_ϕ' = 0.
Then, we have
Z/2(σ̣^2_f[0] ·δ U) ·δ U
= ∫_ϕ'^4 ^V + ∫_*ϕ'^2 ^-V/∫_^-V∫_*ϕ'^2 ^V
- 2 (∫_ϕ'^2 ^V)^2/∫_^V.
Assume that ϕ = ϱ_ε⋆ h + C is a regularization of a hat function h→ given on the interval [-π, π],
which we identify with its image under the quotient map →,
by
h(x) :=
1 - x, x < 1,
0 otherwise,
with ϱ_ε the standard mollifier (<ref>) and C∈ the constant such that ϕ has average 0 with respect to μ.
Then, letting ν denote the probability measure with Lebesgue density proportional to ^V
and ℐ = [-1, 1],
we obtain that, in the limit as ε→ 0,
∫_ϕ'^4 ν̣→ν(ℐ),
∫_*ϕ'^2 μ̣→μ (ℐ),
∫_*ϕ'^2 ν̣→ν (ℐ).
Therefore, it holds in this limit that
Z/2(̣σ̣^2_f[0] ·δ U) ·δ U
→( ν(ℐ) + μ(ℐ) ν(ℐ) - 2 ν(ℐ)^2 ) ∫_^V .
Now let V(x) = K cos(x) for all x.
In the limit as K →∞, it holds that μ(ℐ) → 0 and ν(ℐ) → 1.
We conclude that, for sufficiently large K and sufficiently small ε,
the second variation of the asymptotic variance in direction δ U is negative.
§ NUMERICAL DISCRETIZATION OF THE POISSON EQUATION
We consider here the case where the domain is 𝕋^2 for simplicity,
noting that the method may be generalized to any spatial dimension.
In order to numerically solve the Poisson equation (<ref>),
we use a finite difference approach on a grid of size N × N.
For a given δ>0, the discretization nodes are arranged linearly according to
x_ℓ := ( - π + i δ, - π + j δ ) ∈ 𝕋^2,
j = ⌊ (ℓ - 1)/N ⌋, i = ℓ - 1 - j N,
δ = 2π/N,
for ℓ∈{1, …, N^2}.
Note that the indices i and j each run from 0 to N-1;
the largest value of either coordinate over the set of discretization nodes is π - δ,
which is sufficient given that -π and π coincide under the quotient map ℝ^2 → 𝕋^2.
Before we present the method,
we introduce additional notation.
We denote by Π_N the discretization operator which associates to a function its values at the grid points (<ref>),
and for a function h^2 →,
we write h = Π_N h ∈^N^2.
The notation exp.( h) refers the vector obtained by applying the exponential function element-wise to h,
and ( h) refers to the diagonal matrix with diagonal entries given by h.
The notation 1 ∈^N^2 refers to a column vector containing only ones.
We also introduce the one-dimensional backward and forward difference operators,
which act on vectors in ^N:
D_ B =
1/[ 1 -1; -1 1; -1 1; ⋱ ⋱; -1 1; ],
D_ F =
1/[ -1 1; -1 1; ⋱ ⋱; -1 1; 1 -1; ].
From these operators,
we construct difference operators along the x and y directions by taking Kronecker products with the ^N× N identity matrix _N:
D_ B^x = _N ⊗ D_ B,
D_ B^y = D_ B⊗_N,
D_ F^x = _N ⊗ D_ F,
D_ F^y = D_ F⊗_N.
We recall that, for two matrices A, B ∈^N × N,
the Kronecker product A ⊗ B is defined as
A ⊗ B = [ a_11 B ⋯ a_1N B; ⋮ ⋱ ⋮; a_N1 B ⋯ a_NN B ].
We denote by ∇_ F h the N^2 × 2 matrix
∇_ F h =
[ D_ F^x h D_ F^y h ].
For a weight function w^2 →,
we introduce the weighted inner product ,_w^N^2×^N^2→ given for g, h∈^N^2 by
g, h_w
= ^2 g^( w) h
= ^2 ∑_ℓ=1^N^2 g_ℓ h_ℓ w( x_ℓ),
with corresponding norm _w.
We include the factor ^2 in this definition so that,
if g and h contain the values taken by continuous functions g and h when evaluated at the discretization points
and w is continuous,
then
g, h_w ∫_^2 g(x) h(x) w(x) x̣.
Finally, let ∇_ F h_w^2 = D_ F^x h_w^2 + D_ F^y h_w^2
and let ∇_ F h^2 denote the N^2 × 1 column vector obtained by taking the squared Euclidean norm of each row of ∇_ F h.
In the remainder of this section,
the notation (<ref>) and corresponding norm are usually employed with the weight function w = ^-V-U and so,
in order to simplify notation, we omit the subscript in this case.
We are now ready to write the discrete formulation of the Poisson equation (<ref>).
For V,U,f^2 →, there exists a unique solution (ϕ_N, I_N) ∈^N^2× to
- L[ ϕ_N; I_N ]
:=
[ - L exp.( U); ^2exp.(- V - U)^ 0 ][ ϕ_N; I_N ]
=
[ (exp.( U)) f; 0 ],
where
L = (exp.( V + U)) D_ B^x (exp.(- V - U)) D_ F^x
+(exp.( V + U)) D_ B^y (exp.(- V - U)) D_ F^y.
The first N^2 rows in (<ref>) may be rewritten as
- L ϕ_N = (exp.( U)) ( f - I_N 1),
which resembles the Poisson equation (<ref>).
The last row in (<ref>) may be rewritten as
δ^2 exp.(- V - U)^⊤ ϕ_N = ⟨ 1, ϕ_N ⟩ = 0.
It expresses the requirement that the vector ϕ_N should be mean-zero with respect to the discrete measure exp.(- V - U).
We use the notation I_N for the scalar unknown in (<ref>)
because solving (<ref>) yields
both an approximate solution to the Poisson equation and an approximation of I=μ(f).
In order to prove the statement,
it is sufficient to show that the homogeneous equation
[ - L exp.( U); ^2exp.(- V - U)^ 0 ][ γ; σ ]
=
[ 0; 0 ]
admits only the trivial solution = ( 0, 0).
We assume by contradiction that (γ, σ) is a nonzero solution.
Then
- L γ + σexp.( U) = 0,
implying that
- L γ, 1_ + σexp.( U), 1_ = 0.
The linear operator on ^N^2 × N^2 induced by L is self-adjoint for the inner product ,_,
because
- g, L h_ = - ^2 g^( D_ B^x (exp.(- V - U)) D_ F^x + D_ B^y (exp.(- V - U)) D_ F^y ) h
= D_ F^x g, D_ F^x h_ + D_ F^y g, D_ F^y h_,
where we used the relation D_ B^ = - D_ F.
Therefore, going back to (<ref>),
we deduce that
0 = - γ, L 1_ + σexp.( U), 1_
= σexp.( U), 1_
= ^2σ 1^exp.(- V).
Therefore σ = 0,
but then L γ = 0 by
(<ref>) and so γ, L γ_ = 0.
By the relation (<ref>),
this implies that D_ F^x γ = D_ F^y γ = 0,
so the vector γ is constant.
The last equation in (<ref>) then implies that γ = 0.
Note that (<ref>) implies that
(L^) = {exp.(- V- U) },
which will be useful in the proof of <ref>.
It is possible to prove the convergence of the solution to (<ref>)
to the exact solution of the Poisson equation (<ref>) in the limit as N →∞.
To this end,
we begin by showing the following Poincaré-like inequality.
Assume that V+U^2 → is uniformly bounded.
Then there exists a constant R_ disc[U] > 0 independent of N such that
∀ g ∈{ h ∈^N^2 : h^exp.(- V - U) = 0 }, ∇_ F g_^2 ≥ R_ disc[U] g_^2.
It is sufficient to show (<ref>) for V+U = 0.
Indeed, assuming that the inequality holds in this particular case and denoting by C a constant
which depends only on V + U and is allowed to change from line to line,
we have that
∀ g ∈^N^2, D_ F^x g, D_ F^x g_ + D_ F^y g, D_ F^y g_
≥ C ( D_ F^x g, D_ F^x g_1 + D_ F^y g, D_ F^y g_1 )
≥ C g - g, g - g_1,
g = g, 1_1/ 1, 1_1 1,
where we employed the equivalence between , _1,
which is given by (<ref>) in the particular case where V + U = 0,
and , _^-V-U,
noting that both constants in this equivalence can be fixed independently of N.
Using this equivalence in the other direction,
we obtain
g - g, g - g_1
≥ C g - g, g - g_≥ C inf_s ∈ g - s 1, g - s 1_
= C g - g, g - g_,
g = g, 1_/ 1, 1_ 1.
Finally, equation (<ref>) when V + U = 0 follows from its one-dimensional counterpart by using a standard tensorization argument (as for the proof of <cit.> for instance).
It only remains to show the one-dimensional inequality
∀ g ∈^N, D_ F g, D_ F g_1
≥ R_ disc[U] g - g, g - g_1,
g = g, 1_1/ 1, 1_1 1,
for a constant R_ disc[U] independent of N for sufficiently large N.
To this end,
we notice that
[] D_ F g, D_ F g_1
= [] D_ B D_ F ( g - g), g - g_1.
The matrix D_ B D_ F is given by
D_ B D_ F =
1/^2[ 2 -1 -1; -1 2 -1; -1 ⋱ ⋱; ⋱ ⋱ -1; -1 -1 2 ].
This is a circulant matrix <cit.> with explicit eigenvalues given by
λ_k = 4/δ^2sin^2 (π k/N) = N^2/π^2sin^2 (π k/N), k= 0, 1, …, N-1.
The minimum eigenvalue of this matrix is λ_0 = 0,
and the associated eigenvector is 1,
to which g - g is orthogonal.
Therefore, equation (<ref>) implies that
∀ g ∈^N, [] D_ F g, D_ F g_1
≥λ_1 [] g - g, g - g_1,
which implies that, for fixed N,
equation (<ref>) holds with constant R_ disc(N) = (N/π)^2 sin^2 (π/N),
which converges to 1 in the limit N →∞.
See also <cit.> for a Poincaré inequality for discrete functions on a bounded interval that are zero at the endpoints.
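As a quick numerical sanity check of the eigenvalue formula invoked in this proof, one may assemble the periodic second-difference matrix displayed above and compare its spectrum with (4/δ^2) sin^2(πk/N); the grid size below is an arbitrary choice.

import numpy as np

N = 32                                   # arbitrary grid size
delta = 2.0 * np.pi / N
# Periodic second-difference matrix displayed above: 2 on the diagonal, -1 on the
# (cyclically wrapped) off-diagonals, scaled by 1/delta^2.
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
A[0, N - 1] = A[N - 1, 0] = -1.0
A /= delta**2

numerical = np.sort(np.linalg.eigvalsh(A))
formula = np.sort(4.0 / delta**2 * np.sin(np.pi * np.arange(N) / N) ** 2)
print(np.max(np.abs(numerical - formula)))   # agreement up to round-off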
It would be interesting to show that (<ref>) is satisfied for a constant R_ disc[U] = R_ disc[U](N)
which converges to the Poincaré constant of μ_U in the limit as N →∞.
Our approach does not enable us to prove such a statement,
and we do not further investigate this question.
<Ref> implies that L is invertible.
Using <ref>,
we show that L^-1 does not diverge in the limit as N →∞,
in an appropriate norm.
Assume that V,U^2 → are continuous.
Then the matrix L^-1 is bounded uniformly in N ∈ℕ,
for the operator norm induced by the following norm on ^N^2×:
(γ, σ) ↦γ_^-U-V + σ.
The strategy of proof is similar to that used in <cit.>.
Since by the Fredholm alternative ( L) = ( L^)^⊥,
the range of L is given by { h ∈^N^2 : h^exp.(- V - U) = 0 }.
It then follows by <ref> that the following inequality holds for all g ∈( L):
L g g≥
- L g, g
= D_ F^x g^2 + D_ F^y g^2
≥ R_ disc[U] g^2,
and so we deduce that ‖ L^{-1} ‖ ≤ 1/R_ disc[U] over ( L).
Denote by (γ, σ) the solution to
[ - L exp.( U); ^2exp.(- V - U)^ 0 ][ γ; σ ]
=
[ g; s ].
This solution satisfies
- L γ, 1 + σexp.( U), 1 = g, 1,
and since L γ, 1 = γ, L 1 = 0,
this implies
|σ| = | g, 1/exp.( U), 1|≤ g1/exp.( U), 1.
We then deduce that
γ - γ = - L^-1( g - σexp.( U) ),
γ = γ, 1/ 1, 1 1,
and then the last equation in (<ref>) gives γ = s 1 / 1^2.
This leads to the bound
γ ≤γ - γ + γ
≤1/R_ disc[U]( g + σexp.( U))
+ s 1/ 1^2≤ g/R_ disc[U](1 + exp.( U)1/exp.( U), 1) + s 1/ 1^2.
In the limit N →∞, it holds that
1→√(∫_^2^-V-U),
exp.( U)→√(∫_^2^-V+U) ,
exp.( U), 1→∫_^2^-V,
which concludes the proof.
We are now ready to prove the convergence of the solution of the discretized Poisson equation (<ref>) in the limit N →∞.
Suppose that <ref> is satisfied.
Let ϕ denote the exact solution to (<ref>) and let I = μ(f).
Let also (ϕ_N, I_N) denote the solution to the discretized equation (<ref>).
Then it holds that
‖ ϕ_N - Π_N ϕ ‖ ⟶ 0,
I_N ⟶ I.
The proof is an application of the standard Lax equivalence theorem.
We denote the matrix of the linear system (<ref>) by L_N to emphasize its dependence on N.
Convergence follows from the usual argument:
‖ [ Π_N ϕ - ϕ_N; I - I_N ] ‖ ≤
C
‖ L_N [ Π_N ϕ - ϕ_N; I - I_N ] ‖
=
‖ L_N [ Π_N ϕ; I ]
-
[ Π_N(f e^U); 0 ] ‖ ⟶ 0,
where the norm in this equation is that defined in (<ref>).
The first inequality follows from the stability statement of <ref>,
while the limit follows from the consistency of the discretization,
which is simple to check given that ϕ is a smooth function under <ref>,
and relying on the presence of the factor ^2 in the definition (<ref>).
The main interest of the discretization (<ref>) lies in the following statement,
which may be viewed as a result on the commutation of the discretization and derivative operators.
In order to be more precise,
we denote by (ϕ_N, I_N) the solution to (<ref>) and let
σ^2_f,N[U] = (2 Z_N[U]/Z_N^2) ‖∇_F ϕ_N‖^2,
where
Z_N := δ^2 1^⊤ exp.(- V),
Z_N[U] := δ^2 1^⊤ exp.(- V - U).
The following statement shows that the functional derivative of σ^2_f[ U]
has a structure similar to that of σ^2_f[U] given in (<ref>);
it may be viewed as a discretization thereof.
Suppose that V, U ^2 → are uniformly bounded.
The functional derivative with respect to U of σ^2_f,N is given by
1/2 σ̣^2_f,N[U] ·δ U
= (Z_N[U]/Z_N^2) ⟨ δ U, ∇_F ϕ_N^2 - \overline{∇_F ϕ_N^2} ⟩,
where \overline{∇_F ϕ_N^2} := ‖∇_F ϕ_N‖^2 / Z_N[U].
Notice that (<ref>) is very similar to the formula (<ref>) for the functional derivative of σ^2_f[U].
The proof mirrors that of <ref>.
In view of (<ref>), we first rewrite
σ^2_f,N[ U]
= 2 U/Z_N^2∇_ Fϕ_N_^2
= - 2 U/Z_N^2ϕ_N, L ϕ_N_
= 2 U/Z_N^2ϕ_N, (exp.( U)) ( f - I_N 1)_
= 2 U/Z_N^2ϕ_N, f - I_N 1_^-V.
Let (ϕ_N^ε, I_N^ε) denote the solution to (<ref>) with U + εδ U in place of U everywhere
and L^ε the corresponding matrix (<ref>).
It is simple to check, using a reasoning similar to (<ref>) as well as the equation L^εϕ_N^ε, 1 = 0,
that the scalar term I^ε_N = I_N is in fact independent of the potential U.
Therefore,
we obtain that
- L^εϕ_N^ε
= (exp.( U + εδ U)) ( f - I_N 1).
By definition of the functional derivative,
we then have
1/2σ̣^2_f,N[ U] ·δ U = U·δ U/Z_N^2∇_ Fϕ_N_^2
+ lim_ε→ 0 U/ε Z_N^2ϕ^ε_N - ϕ_N, f - I_N 1_^-V.
The functional derivative in the first term is given by
U·δ U = - δ U, 1_.
For the second term in (<ref>),
we obtain
ϕ^ε_N - ϕ_N, f - I_N 1_^-V = *ϕ^ε_N - ϕ_N, (exp.( U) )( f - I_N 1)_^-V-U
= - ϕ^ε_N - ϕ_N, L ϕ_N_^-V-U
= - L(ϕ^ε_N - ϕ_N), ϕ_N_^-V-U
= - L^εϕ^ε_N, ϕ_N_^-V-U -εδ U
+ L ϕ_N, ϕ_N_^-V-U
- L ϕ^ε_N, ϕ_N_^-V-U
+ L^εϕ^ε_N, ϕ_N_^-V-U -εδ U.
The first two terms in the last expression cancel out,
and after substituting the expressions of L and L^ε
given in (<ref>) in the other two terms,
we obtain
ϕ^ε_N - ϕ_N, f - I_N 1_^-V =
- D_ B^x ( M - M^ε) D_ F^x ϕ^ε_N, ϕ_N_1
- D_ B^y ( M - M^ε) D_ F^y ϕ^ε_N, ϕ_N_1,
where
M - M^ε :=
(exp.(- V - U))
- (exp.(- V - U - εδ U)).
Noting that
lim_ε→ 0 M - M^ε/ε = (exp.(- V - U)) (δ U),
we deduce that
lim_ε→ 0 U/ε Z_N^2ϕ^ε_N - ϕ_N, f - I_N 1_^-V
= U/Z_N^2*δ U, ∇_ Fϕ_N^2_^-V-U.
Combining this equation with (<ref>) and (<ref>),
we deduce (<ref>).
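To illustrate how this functional derivative can drive the steepest-descent strategy discussed in the conclusion, the following self-contained one-dimensional sketch repeatedly solves the discretized Poisson equation on the torus and updates U in the direction opposite to ∇_F ϕ_N^2 minus its weighted average; the potential, observable, step size and number of iterations are our own choices, and this is not the authors' implementation.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 256
delta = 2.0 * np.pi / N
x = -np.pi + delta * np.arange(N)
V = 2.0 * np.cos(x)                          # assumed potential on the 1D torus
f = np.sin(x)                                # assumed observable

D_F = sp.diags([-np.ones(N), np.ones(N - 1)], [0, 1], format="lil")
D_F[N - 1, 0] = 1.0
D_F = D_F.tocsr() / delta
D_B = -D_F.T.tocsr()

def variance_and_direction(U):
    # Solve the 1D analogue of the bordered system, then return the discretized
    # asymptotic variance and the gradient direction read off from the formula above.
    w = np.exp(-V - U)
    L = sp.diags(1.0 / w) @ (D_B @ sp.diags(w) @ D_F)
    A = sp.bmat([[-L, np.exp(U)[:, None]],
                 [delta * w[None, :], None]], format="csc")
    sol = spla.spsolve(A, np.concatenate([np.exp(U) * f, [0.0]]))
    dphi = D_F @ sol[:-1]
    Z_N, Z_NU = delta * np.sum(np.exp(-V)), delta * np.sum(w)
    sigma2 = 2.0 * Z_NU / Z_N**2 * delta * np.sum(dphi**2 * w)
    direction = dphi**2 - delta * np.sum(dphi**2 * w) / Z_NU
    return sigma2, direction

U = np.zeros(N)
print("variance with U = 0   :", variance_and_direction(U)[0])
for _ in range(400):
    _, direction = variance_and_direction(U)
    U -= 0.1 * direction                     # plain gradient step; the step may need tuning
print("variance after descent:", variance_and_direction(U)[0])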
Acknowledgements.
We are grateful to Andrew Duncan and Grigorios Pavliotis for useful discussions and for sharing with us their preliminary calculations on this problem.
The work of MC was supported by the Agence Nationale de la Recherche under grant ANR-20-CE40-0022 (SWIDIMS).
The work of TL, GS and UV was partially funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 810367),
and by the Agence Nationale de la Recherche (ANR) under grants ANR-21-CE40-0006 (SINEQ) and ANR-19-CE40-0010 (QuAMProcs).
|
http://arxiv.org/abs/2307.10190v1 | 20230708141246 | Summary of the 3rd BINA Workshop | [
"Eugene Semenko",
"Manfred Cuntz"
] | astro-ph.IM | [
"astro-ph.IM",
"astro-ph.SR"
] |
1]Eugene Semenko
2]Manfred Cuntz
[1]National Astronomical Research Institute of Thailand (Public Organization)
260 Moo 4, T. Donkaew, A. Maerim, Chiangmai, 50180 Thailand
[2]Department of Physics, University of Texas at Arlington, Arlington, TX 76019, USA
Summary of the BINA Workshop
============================
BINA-3 has been the third workshop of this series involving scientists from India and Belgium aimed at fostering future joint research in the view of cutting-edge observatories and advances in theory. BINA-3 was held at the Graphic Era Hill University, 22-24 March 2023 at Bhimtal (near Nainital), Uttarakhand, India. A major event was the inauguration of the International Liquid-Mirror Telescope (ILMT), the first liquid mirror telescope devoted exclusively to astronomy. BINA-3 provided impressive highlights encompassing topics of both general astrophysics and solar physics. Research results and future projects have been featured through invited and contributed talks, and poster presentations.
§ INDO-BELGIAN COLLABORATION IN SPACE AND TIME
Without comprehensive international collaborations, it is difficult to imagine sustainable scientific progress in the modern age. In astronomy and astrophysics, such collaborations enabled the operation of observational facilities in the best places on the ground and in space. In big international cooperations like the European Southern Observatory, we can see how the technology exchange and mobility of human resources promote research on all levels, from universities to international institutions. Especially promising collaborations pertain to India, the world's most populous country according to the United Nations <cit.>, with exceptionally rapid economic growth.
The Belgo-Indian Network for Astronomy and Astrophysics, or BINA, was initialized in 2014 to foster the existing contacts between Indian and Belgian researchers, mostly from the Aryabhatta Research Institute of Observational Sciences (ARIES) and the Royal Observatory of Brussels (ROB), and to expand this collaboration on the nation-wide scale in both countries. The third BINA workshop, which we have the pleasure of summarizing, marks the end of this project. Two previous workshops were held in 2016 in Nainital (India) and 2018 in Brussels (Belgium). We believe that our summary would not be complete without a brief comparison of the third workshop with the two preceding ones. This will help us to better understand BINA's importance and outcome.
The first workshop (BINA-1) took place in Nainital on 15–18 November 2016. According to available statistics <cit.>, 107 astronomers from eight countries participated in the meeting, giving 36 oral talks and presenting 42 posters. Eighty-eight people from twelve partner institutes represented the Indian astronomical community, whereas six Belgian institutions sent ten representatives. The meetings' agenda focused primarily on the instrumentation of the newly commissioned 3.6-m Devastal Optical Telescope (DOT) and on the future of the 4-m International Liquid-Mirror Telescope (ILMT). The scientific talks covered a wide range of subjects, from solar system studies to individual stars, stellar clusters, exoplanets and extragalactic astronomy.
The second BINA workshop (BINA-2) was held two years later, in 2018, in Brussels; it was aimed to further expand the existing collaborations. Despite the significantly smaller number of participants (i.e., 69 registered researchers from seven countries), the conference's scientific programme was rich in oral talks, totalling 44. Furthermore, there were eight poster presentations <cit.>. The scientific programme of the second workshop largely mirrored the agenda of the first meeting, accentuating the scientific application of the Belgo-Indian telescopes. A highly notable aspect of the second workshop's scientific programme was the presence of the review talks.
In terms of participation and the number of oral talks, BINA-3, the final workshop, resembles the previous events, although, fortunately, a significant increase in participation and contributions occurred. Nearly one hundred fifty scientists from eleven countries participated in BINA-3, with the lion's share from India and Belgium. A total of 37 talks (10: invited, 27: contributory) talks were given in the main programme, and 21 contributory talks were given in the solar physics sessions. There have been 81 poster presentations; many of those were led by graduate and undergraduate students.
There is significant progress hiding behind the numbers. Since 2016, the Belgo-Indian network has grown to involve new institutes from both partner countries. The members published numerous scientific papers with results obtained on the Belgo-Indian telescopes. Many of these were based on PhD theses pursued within BINA. The content of these proceedings, during 2016–2023, also reveals that many young researchers changed their affiliation, moving to new places and thus expanding the network of research contacts. Progress in instrumentation and scientific collaboration within BINA and with external institutes worldwide gave new impulses to solar and general physics studies. In general, we can count the significantly increased number of telescopes and instruments as the major indicator of progress achieved within the BINA project. The list of available instruments has been highly influential on BINA-3. In the following sections, we briefly summarize its scientific programme.
§ OBSERVATIONAL TECHNIQUES AND INSTRUMENTATION
Telescopes and their instruments were in the spotlight of all BINA workshops. The ILMT has become the central theme of the current meeting. From a number of oral talks and poster presentations, one could get a comprehensive view of such telescopes' operation principles. It was particularly interesting to find out about the data reduction, calibration and access to the processed images obtained with the ILMT. Numerous results of the first observations with the ILMT, shown mostly in the poster presentations, have demonstrated a wide range of possible scientific applications of zenith telescopes with liquid mirrors. Given the short time that has passed since the beginning of the operation and obtained results, we can confirm that the ILMT has proven its scientific concept and significantly strengthened the observational facilities for the current and future Indo-Belgian projects.
The Indo-Belgian 3.6-m Devastal Optical Telescope (DOT) remains Asia's largest so far fully steerable optical telescope, which has been in operation since 2016. Yet, accurately by the time of BINA-3, a park of Indian telescopes received strengthening with the commissioning of the 2.5-m telescope, which was built by the Advanced Mechanical and Optical Systems (AMOS) in Belgium for the Physical Research Laboratory (PRL) in Ahmedabad and installed at Mt Abu, Rajasthan, India.
The development of new instruments and the upgrade of existing facilities was the central theme of the instrumentation section of the current conference. Notably, by 2028, the TIFR-ARIES Multi-Object Optical to Near-infrared Spectrograph (TA-MOONS) will bring new capabilities useful for the studies of stars in star formation regions, open clusters, and extended sources with DOT. Also, for this telescope, adding the polarimetric mode to the Aries-Devasthal Faint Object Spectrograph & Camera (ADFOSC), the existing device for observations of faint objects, will enable both linear and circular polarimetry. This new regime is of critical importance to the study of processes in star-forming regions, interacting stellar systems, supernovae, active galactic nuclei, and beyond.
A spectropolarimetric mode might be a case to think of for the creators of the PRL Advanced Radial Velocity Abu Sky Search-2 (PARAS-2), a high-resolution spectrograph at the 2.5-m PRL telescope at Mt Abu. This highly stable device has been developed for precise measurements of radial velocities while providing very high spectral resolution. Due to the geographical location of Mt Abu, PARAS-2 can play a critical role in the continuous monitoring of radial velocities for a wide variety of relatively bright objects; however, with a spectropolarimetric mode being implemented (like HARPSpol at the High Accuracy Radial velocity Planet Searcher (HARPS); ), PARAS-2 can take its niche in observations of hot magnetic stars, either within Indo-Belgian collaboration or in third-party projects like MOBSTER <cit.>. (MOBSTER is an acronym for Magnetic OB[A] Stars with TESS: probing their Evolutionary and Rotational properties; it is a collaboration of more than 60 scientists from over the world.) With the completion of a High-Resolution Spectrograph for the 3.6-m Devastal Optical Telescope (DOT-HRS), the astronomical community of ARIES will possess the ability to independently carry out studies in the fields of asteroseismology and stellar abundances. Again, like in the case of PARAS-2, spectropolarimetry with DOT-HRS is expected to increase the list of potential applications of this device and could further expand the ongoing Nainital-Cape survey of pulsating early-type stars <cit.>.
The rising number of telescopes in India poses questions about the most adequate time allocation policies and the optimal distribution of observational proposals between existing astronomical facilities. We found that the analysis of the time allocation for the 3.6-m DOT regarding the last six observational cycles, as presented at the workshop, indicated that it was particularly useful and appropriate for all facilities of ARIES — especially considering that the ILMT has started its operation and the upcoming arrival of the next-generation instruments for the 3.6-m DOT. From our perspective, in addition to the proposed improvements, we would also recommend the organisation of regular (e.g., on a yearly basis) conferences of the telescope's users under the auspices of the Time Allocation Committee (TAC), where the existing and potential applicants would be able to present their proposals or give feedback on the approved or running programmes. Such mini-conferences could be held online, speeding up communication between the TAC and the astronomical community. Naturally, this experience could be applied to other instruments in India and beyond as well.
The theme of small telescopes has been raised in several talks. The Belgium-made High-Efficiency and high-Resolution Mercator Echelle Spectrograph (HERMES), operated at the 1.25-m Mercator telescope in La Palma (Spain), proved its effectiveness in studies of the chemical composition of single and multiple stars. This spectrograph is used for existing bilateral projects. Complimentary opportunities for high-resolution spectroscopy with the 1-m-class telescopes and the perspectives of affordable implementation of adaptive optics on small and moderate-size telescopes have been considered in BINA-3. The interest in these problems highlights the importance of small, properly equipped telescopes for big programmes complementary to missions like the Transiting Exoplanet Survey Satellite (TESS).
§ MAIN PROGRAMME SESSION
BINA provides access to a wide variety of observational facilities located worldwide <cit.>. The observational component mostly determined the agenda of the BINA-3.
Comets, planets, asteroids, and orbital debris were in the third BINA workshop's spotlight, though other topics such as stars, including stellar multiplicity, and compact objects have been discussed. The selection of objects is largely determined by the areas where optical spectroscopy and photometry are most effective with small and medium-sized telescopes. The exception is the study of planetary atmospheres using the method of stellar occultations. Similar techniques require bigger apertures, and being implemented in a 3–6-m class of telescopes can be very beneficial. The 3.6-m DOT is among those few instruments on the planet which have regularly been used for observation of such events <cit.>.
Various instruments available within the Indo-Belgian collaboration promote the comprehensive study of processes occurring in star formation regions and during the ongoing evolution of stars. The efficiency of multi-wavelength observations was demonstrated in the example of the study of the star formation H ii region Sh 2-305. However, this is not a unique case where the Indian telescopes exploring the Universe in optical, radio, and X-ray domains were successfully combined. We cannot pass by the numerous results of the study of massive binary stars, stars with discs and circumstellar envelopes, introduced in the BINA-3 workshop.
Stellar multiplicity runs like a golden thread through many talks given in Bhimtal during the workshop. As companions significantly influence stellar lives at all stages of evolution, proper accounting and evaluation of the companions' properties are crucial. In this regard, work with the catalogues of binary stars or their extensive study within the ongoing or future Indo-Belgian projects must receive high priority. In such programmes, high-resolution optical spectroscopy of binary and multiple stars must take a special place.
Another problem passing through the scientific content of BINA-3 is stellar magnetism. As pointed out in the workshop, magnetic fields are ubiquitous on and beyond the main sequence, with their strengths varying substantially. Magnetic fields are responsible for different kinds of stellar activity and can impact stellar evolution. Besides the theoretical aspects pertaining to the physics of these processes, we would like to attract attention to the lack of observational facilities in the Asian region suitable to direct observations of stellar magnetic fields and processes. The worldwide selection of medium-sized and big telescopes equipped with sensitive spectropolarimetric devices is very limited, and Indian telescopes could fill this gap.
Through the study of chemical composition, one can explore the evolution of individual stars, groups of stars, and the Galaxy at large. The last is the central task of galactic archaeology. Pursuing this task depends on the availability of spectra and proper modelling. Despite the various observational results presented in BINA-3, we find a lack of interactions between the BINA members and groups working, e.g., in the U.S., Sweden or Germany, on the theoretical aspects of abundance analysis. We believe tighter cooperation with the institutes outside of BINA would take the research of stellar abundances to a qualitatively new level.
In contrast to the previous workshops, asteroseismology, a powerful tool for probing stellar interiors and validating stellar parameters, appears underrepresented in BINA-3. (On a lighter note, a superb cultural show successfully compensated for the lack of “music of the stars” in the conference programme.) This fact looks surprising to us as the Belgian groups in Brussels and Leuven are famous for their proficiency in this field.
Apart from galactic archaeology, which deals with the evolution of chemical composition, probing the Galactic structure is another important direction of work within BINA. Even now, after decades of extensive exploration of the Galaxy using different methods, our knowledge of its structure is incomplete. Optical polarimetry helps to reveal the detailed fine structure of dust clouds in the star formation regions or in the areas of young open clusters. Indian astronomers are experienced in this kind of work, and their results, both published <cit.> and presented during BINA-3, deserve special attention. We look forward to further expanding this direction of galactic studies on a new technical level.
§ SOLAR PHYSICS SESSION
The main theme of the solar physics programme has been the study of small-scale structure, waves, flares as well as coronal mass ejections (CMEs). Science opportunities are often directly associated with instruments such as the Extreme Ultraviolet Imager (EUI) onboard of the Solar Orbiter. The EUI provides a crucial link between the solar surface, on the one hand, and the corona and solar wind, on the other hand, that ultimately shapes the structure and dynamics of the interplanetary medium. Several contributions focused on wave propagation, including their relevance to small-scale structures of the solar chromosphere, transition region and corona, such as flares, spicules and loop systems.
This kind of research considered both observations and theoretical work, such as ab-initio simulations for standing waves and slow magneto-acoustic waves. Studies of the outer solar atmosphere also utilized the Interface Region Imaging Spectrograph (IRIS) and the Atmospheric Imaging Assembly (AIA), both onboard of the Solar Dynamics Observatory (SDO). In alignment with previous studies given in the literature, the potential of spectral lines, including line asymmetries, for the identification of solar atmospheric heating processes has been pointed out and carefully examined. Clearly, this approach is relevant to both solar physics and studies of solar-type stars of different ages and activity levels; it allows to embed solar studies into a broader context.
Regarding CMEs, a major driver of space weather and geomagnetic storms, attention has been paid to the EUropean Heliosphere FORcasting Information Asset (EUHFORIA), which is relevant for MHD modelling and the study of the evolution of CMEs in the heliosphere. In this regard, a pivotal aspect is the study of thermodynamic and magnetic properties of CMEs as well as CME forward-modeling, aimed at predicting CME breakouts as well as CME topologies and magnitudes. Relevant spectral line features include Fe XIV and Fe XI data, obtained with existing instruments or available in the archive. Another notable item has been the presentation of long-term variations of solar differential rotation and the solar cycle; the latter still poses a large set of unanswered scientific questions.
§ RETROSPECTIVE AND RECOMMENDATIONS
A key element of BINA-3 is the future availability of the ILMT. The science goals of ILMT include cosmological research such as the statistical determination of key cosmological parameters through surveying quasars and supernovae as well as photometric variability studies of stars, transiting extra-solar planets and various types of transient events. Another aspect consists in the search for faint extended objects like low-surface brightness and star-forming galaxies. The pronounced use of ILMT, typically in conjunction with other available facilities, requires the ongoing pursuit of international collaborations; this activity is pivotal for future success. Another key aspect is the significance of theoretical studies.
Regarding solar physics research, previous work encompasses the study of MHD waves and small-scale transients, with a focus on the solar chromosphere, transition region and corona. Some of this work made extensive use of the EUI onboard of the Solar Orbiter. The study of outer solar atmosphere fine structure utilized the IRIS and the AIA, both onboard of the SDO. Time-dependent coronal studies, especially CMEs, are of great significance for the Earth, such as the onset of geomagnetic storms and the safety of equipment, including those associated with satellite communication[See <https://www.swpc.noaa.gov> for further information.]. Further advances in this field are expected to benefit from additional observational studies as well as advances in theory, particularly the interface of those two. Regarding theoretical work, ongoing and future efforts should continue to focus on 3-D magneto-hydrodynamics studies in conjunction with the adequate inclusion of radiative transfer and statistical phenomena, as well as aspects of chaos theory.
There are other items with the potential for future successful developments. Asteroseismology has been underrepresented in BINA-3. This is a powerful tool in the context of stellar evolution studies and the validation and improvement of stellar parameters; the latter is also relevant in the context of extrasolar planet investigations. Further important aspects concern the study of stellar magnetism and activity. Besides elementary stellar studies, these topics are also of critical importance regarding circumstellar habitability and astrobiology at large <cit.>. Moreover, studies of AGNs and GRBs are cardinal topics beyond solar and stellar physics; they have gained considerable steam within the scientific community.
Processes in the extragalactic objects are characterized by high energy and rich spectra. Among the variety of works presented during BINA-3, studies of active galactic nuclei (AGN) and different transients like gamma-ray bursts (GRB) continue to deserve special attention. The members of BINA have an exhaustive set of instruments available for multi-wavelength observations of these extragalactic sources, yet there is still room for improvement. Considerable advances are attainable both in instrumentation and in techniques of analysis. In the study of intra-night variability of blazars presented in the workshop's programme <cit.>, we noted the lack of international contributors, although these types of objects are in the spotlight of groups working, e.g., at the 6-m telescope of the Special Astrophysical Observatory, located in the North Caucasus region of Russia <cit.>. Given the absence of polarimetric devices for observation with the 3.6-m DOT at the moment, such cooperation could open new opportunities. Connections established on the personal level between the member institutions of BINA and observatories operating big telescopes would facilitate future studies in extragalactic astronomy where the aperture matters.
Similarly, we would recommend establishing collaborations with the institutes operating robotic telescopes for the observation of transients. However, a more radical future step might be an expansion of Indian observational facilities towards other continents, especially South America. A small network of medium-sized fully-robotic telescopes could provide easy access to observations and be used for educational purposes. It would reduce the dependence on astronomical monitoring occurring in South Asia — in consideration of possible drawbacks due to the regional climates.
Last but not least, in the field of data analysis, the leitmotif now is the use of machine learning (ML) and artificial intelligence (AI). This theme was raised several times during the workshop, but we believe that it could find broader applications in projects related to the classification of light curves and spectra. At the same time, we would recommend researchers using ML and AI in their work not to ignore advances in theory, as without proper constraints and background information, these methods might lead to impractical results, especially if based on small samples.
§.§.§ Acknowledgments
The authors are grateful to the scientific and local organizing committees of BINA-3 for inviting them to summarize the workshop and for further assistance in preparing these proceedings.
§.§.§ ORCID identifiers of the authors
0000-0002-1912-1342Eugene Semenko
0000-0002-8883-2930Manfred Cuntz
§.§.§ Author contributions
Both authors equally contributed to this publication.
§.§.§ Conflicts of interest
The authors declare no conflict of interest.
apalike
|
http://arxiv.org/abs/2308.01916v1 | 20230709040958 | Semi Supervised Meta Learning for Spatiotemporal Learning | [
"Faraz Waseem",
"Pratyush Muthukumar"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Semi Supervised Meta Learning for Spatiotemporal Learning
Faraz Waseem, Pratyush Muthukumar
=========================================================
Labeled data is hard to come by in the real world. Moreover, a majority of available data comes in the form of video and visual media.
Recent advancements in representation learning have shown great successes in learning rich representations from a variety of inputs including text, images, and videos.
However, these state-of-the-art architectures are data-intensive, whereas meta learning architectures possess unique capabilities of learning new tasks from diverse training tasks and corresponding labels in the few-shot regime.
We apply semi-supervised meta learning to video data for learning spatiotemporal patterns.
We extend work on Masked Autoencoders (MAEs) utilizing the Vision Transformer (ViT) architecture for scalable self-supervised learning in the spatiotemporal domain.
We approached the goal of applying meta-learning to self-supervised masked autoencoders for spatiotemporal learning in three steps.
Broadly, we seek to understand the impact of applying meta-learning to existing state-of-the-art representation learning architectures.
Thus, we test spatiotemporal learning through: a meta-learning architecture only, a representation learning architecture only, and an architecture applying representation learning alongside a meta learning architecture. We utilize the Memory Augmented Neural Network (MANN) architecture to apply meta-learning to our framework.
Specifically, we first experiment with applying a pre-trained MAE and fine-tuning on our small-scale spatiotemporal dataset for video reconstruction tasks.
Next, we experiment with training an MAE encoder and applying a classification head for action classification tasks.
Finally, we experiment with applying a pre-trained MAE and fine-tuning with an MANN backbone for action classification tasks.
To execute our experiments, we generate a custom small-scale video dataset of 518 human-action classes consisting of 24927 video clips and human-generated annotations sourced from the MiniKinetics-200 and TinyVIRAT datasets. We also modify the ViT backbone in existing MAE architectures for small-scale datasets by applying Shifted Patch Tokenization (SPT) to combat the lack of locality inductive bias available in small-scale datasets.
Our experimental results show that fine-tuning on our custom small-scale video dataset outperforms existing pre-trained MAE architectures on video reconstruction tasks. Further, we find that training an MAE encoder with a small-scale ViT backbone on our small-scale video dataset for action classification tasks converges steadily. Finally, we find that applying a pre-trained MAE and fine-tuning with an MANN backbone for action classification tasks is effective on our small-scale video dataset test tasks.
§ INTRODUCTION
Recent advancements in deep learning including the Transformer architecture have shown great success in both vision and language domains learning rich representations from a variety of inputs including text, images, and videos (https://arxiv.org/abs/1706.03762ref: attention is all you need). Models such as BERT have shown success in the semi-supervised regime in denoising messy data and extracting high level embeddings from partially labeled datasets (https://arxiv.org/abs/1810.04805ref: bert). However, real-world labeled data in the format of videos is scarce and unstructured. State-of-the-art representation learning architectures have shown great success in the vision domain in extracting high-level features from images for reconstruction or classification tasks, however, these models require massive amounts of annotated vision data.
The field of meta-learning has shown promise in learning high-level features from data in the few-shot regime. Moreover, applying meta-learning to existing supervised learning architectures has been shown to allow for more data-efficient models while preserving generalizability to unseen tasks and datasets (ref: Model-agnostic meta-learning for fast adaptation of deep networks, https://arxiv.org/abs/1703.03400). We propose applying semi-supervised meta-learning to video data for learning spatiotemporal patterns. We believe that wrapping existing state-of-the-art self-supervised representation learning architectures within a meta-learning framework will allow our architecture to both improve sample efficiency and generalize well to unseen data, particularly in the application of spatiotemporal learning on video datasets. Specifically, we perform experiments in the style of an ablation study to compare the performances of existing representation learning architectures for video data alone, existing self-supervised meta-learning frameworks for video data alone, and our formulation of applying meta-learning to representation learning architectures for video data classification tasks.
In addition to considering the effectiveness of applying meta-learning to existing representation learning architectures, we modify these architectures so that our experiments fit within the scope of this project. That is, we scale down the vision transformer (ViT) backbone within the existing representation learning architecture for training on our custom small-scale video dataset. We generate this dataset, consisting of video clips describing human-object interactions as well as corresponding human-generated annotations.
In this project, we make the following contributions:
* We collect a custom small-scale human-object video dataset built as a composite of existing human-object video sources, which we preprocess.
* We apply the meta-learning framework to existing self-supervised representation learning architectures and apply our model to downstream tasks including video reconstruction and action classification
* We perform an ablation study to understand the impact of applying meta-learning to existing self-supervised representation learning architectures on action classification accuracy and video reconstruction loss
§ RELATED WORKS
Prior work in the field of representation learning has shown success in learning rich representations from vision and language domains. In particular, autoencoder architectures have proven effective in extracting representations from text and images. (ref: Masked autoencoders are scalable vision learners, https://arxiv.org/abs/2111.06377) proposed applying masked autoencoders (MAEs) for self-supervised learning for vision. By masking random patches of the input images and pre-training an autoencoder to reconstruct the missing pixels, they found that the architecture was able to perform well on the ImageNet dataset compared to similar self-supervised models. Moreover, their architecture was more efficient and scalable for larger models such that transfer performance in downstream tasks outperformed supervised pre-training models. They noted that a masking ratio larger than 75% of the pixels in an image poses a non-trivial task for current state-of-the-art vision models.
(ref: Masked autoencoders as spatiotemporal learners, https://arxiv.org/abs/2205.09113) builds off of this work by applying masked autoencoders to video data to learn spatiotemporal patterns. The masking process follows similarly from above, but random spacetime patches of videos are masked out rather than pixels during the pre-training step. Their results showed that a masked autoencoder with a masking ratio of 90% outperforms supervised pre-training approaches by a wide margin on both benchmark datasets and real-world video data.
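To make the masking step concrete, the snippet below sketches random spacetime patch masking in the spirit of the cited MAE work; the tensor shapes and the 90% default ratio are illustrative assumptions rather than the authors' exact implementation.

import torch

def random_patch_mask(tokens: torch.Tensor, mask_ratio: float = 0.9):
    """Randomly keep a (1 - mask_ratio) fraction of spacetime patch tokens.

    tokens: (batch, num_patches, embed_dim) patch embeddings of a video clip.
    Returns the visible tokens plus the indices needed to restore order later.
    """
    batch, num_patches, _ = tokens.shape
    num_keep = int(num_patches * (1.0 - mask_ratio))

    # Independent random permutation of patch indices for each clip in the batch.
    noise = torch.rand(batch, num_patches, device=tokens.device)
    shuffle_idx = torch.argsort(noise, dim=1)
    keep_idx = shuffle_idx[:, :num_keep]

    # Gather the visible (unmasked) tokens that the encoder will see.
    visible = torch.gather(
        tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
    )
    return visible, keep_idx, shuffle_idx

# Example: 8 frames x (8 x 8) spatial patches = 512 tokens per clip, 90% masked.
tokens = torch.randn(4, 512, 192)
visible, keep_idx, _ = random_patch_mask(tokens, mask_ratio=0.9)
print(visible.shape)  # torch.Size([4, 51, 192])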
Meta-learning has shown effectiveness in generalizing well to unseen data with sample-efficient architectures in the few-shot regime. One such implementation of meta-learning is the Memory Augmented Neural Network (MANN) architecture proposed by (ref: Meta-learning with memory-augmented neural networks, https://dl.acm.org/doi/10.5555/3045390.3045585). The authors propose a black-box meta-learning framework with a two-part architecture: a controller implemented as a sequence model – an LSTM in their implementation – and an external memory module with reading and writing heads implemented with a Neural Turing Machine (NTM) (ref: Neural Turing Machines, https://arxiv.org/abs/1410.5401). The LSTM sequence model helps the architecture learn quickly from a small number of examples.
In our review of this space, we have not found existing work applying meta-learning alone to self-supervised spatiotemporal learning. However, prior research has applied self-supervised meta-learning to natural language classification tasks (ref: Self-supervised meta-learning for few-shot natural language classification tasks, https://aclanthology.org/2020.emnlp-main.38/).
Current vision models have become increasingly powerful since the widespread application of the Transformer architecture. The Vision Transformer (ViT) family of architectures, extended to video by ViViT (ref: ViViT: a video vision transformer, https://arxiv.org/abs/2103.15691), builds upon the self-attention mechanism proposed in (ref: Attention is all you need, https://arxiv.org/abs/1706.03762) for learning complex, high-dimensional representations from image datasets. This family of architectures relies on large amounts of image data, typically on the scale of hundreds of gigabytes of labeled images, to train large architectures with hundreds of millions of parameters.
Some work has been done on scaling down these large-scale ViT architectures while preserving the learned high-level representations. (ref: Vision transformer for small-size datasets, https://arxiv.org/abs/2112.13492) proposes Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA) as methods to combat the lack of locality inductive bias available in small-scale datasets.
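As an illustration of how SPT differs from plain patch embedding, the sketch below shifts a frame by half a patch in the four diagonal directions and concatenates the copies channel-wise before projection. torch.roll is used here as a simplified stand-in for the crop-and-pad shifting of the original SPT formulation, and the dimensions are assumptions.

import torch
import torch.nn as nn

class ShiftedPatchTokenizer(nn.Module):
    """Minimal sketch of Shifted Patch Tokenization (SPT) for small datasets.

    The frame is shifted by half a patch in the four diagonal directions and
    concatenated channel-wise with the original before patch embedding, so each
    token sees a wider receptive field than a plain ViT patch.
    """

    def __init__(self, in_channels=3, patch_size=8, embed_dim=192):
        super().__init__()
        self.patch_size = patch_size
        # 1 original + 4 shifted copies are concatenated along the channel axis.
        self.proj = nn.Conv2d(
            in_channels * 5, embed_dim, kernel_size=patch_size, stride=patch_size
        )
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):                      # x: (batch, C, H, W)
        s = self.patch_size // 2
        shifts = [(-s, -s), (-s, s), (s, -s), (s, s)]
        shifted = [torch.roll(x, shifts=sh, dims=(2, 3)) for sh in shifts]
        x = torch.cat([x] + shifted, dim=1)    # (batch, 5C, H, W)
        tokens = self.proj(x)                  # (batch, D, H/ps, W/ps)
        tokens = tokens.flatten(2).transpose(1, 2)  # (batch, num_patches, D)
        return self.norm(tokens)

# Example: one 64x64 RGB frame -> 8x8 = 64 tokens of width 192.
frames = torch.randn(2, 3, 64, 64)
print(ShiftedPatchTokenizer()(frames).shape)   # torch.Size([2, 64, 192])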
Existing work applying representation learning architectures such as MAEs with ViT backbones shows strong performance on video classification and video reconstruction tasks, but is limited in real-world applications due to the data requirements of these sample-inefficient architectures. Current small-scale ViT architectures perform well on image classification tasks, but have yet to be extended to video data or applied in the regime of self-supervised learning.
§ METHODS
We approach the goal of applying meta-learning to self-supervised masked autoencoders for spatiotemporal learning using MANNs (memory-augmented neural networks), in a fashion similar to that proposed by (ref: Meta-learning with memory-augmented neural networks, https://dl.acm.org/doi/10.5555/3045390.3045585). In our case, we utilize the masked autoencoder (MAE) approach for initial pre-training and then fine-tune using the MANN approach, with the MAE encoder serving as a backbone to the sequence model. In our implementation, we utilize a scaled-down ViT sequence model trained on our small-scale video dataset. We scale down the ViT backbone within the MAE encoder and decoder following the method proposed by (ref: Vision transformer for small-size datasets, https://arxiv.org/abs/2112.13492); however, their implementation focuses on image data.
We consider the MAE method proposed by (ref: Masked autoencoders as spatiotemporal learners, https://arxiv.org/abs/2205.09113) as a baseline for testing the performance of a state-of-the-art classification algorithm that does not use meta-learning. We then train the MANN architecture with the ViT backbone end-to-end to evaluate the performance of a solely meta-learning-based approach. Finally, we test our proposed combination of MAE with MANN fine-tuning to determine whether the MAE architecture in combination with meta-learning approaches is more effective in learning spatiotemporal patterns.
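The sketch below illustrates the third configuration at a high level. It is a simplified stand-in rather than our exact implementation: the external NTM memory of a full MANN is omitted and an LSTM controller takes its place, and mae_encoder is assumed to be any module that maps a batch of clips to a (batch, num_tokens, embed_dim) sequence of token embeddings.

import torch
import torch.nn as nn

class MannFineTuner(nn.Module):
    """Simplified stand-in for MAE-encoder + MANN fine-tuning (configuration 3)."""

    def __init__(self, mae_encoder, embed_dim, num_classes, hidden=256):
        super().__init__()
        self.encoder = mae_encoder
        self.controller = nn.LSTM(embed_dim + num_classes, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)
        self.num_classes = num_classes

    def forward(self, clips, labels):
        # clips: (episodes, seq_len, C, T, H, W); labels: (episodes, seq_len)
        b, s = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)).mean(dim=1)     # pool tokens
        feats = feats.view(b, s, -1)
        # MANN-style label shifting: each clip's label is presented one step
        # later, so the controller must predict a class before seeing its label.
        onehot = nn.functional.one_hot(labels, self.num_classes).float()
        shifted = torch.cat([torch.zeros_like(onehot[:, :1]), onehot[:, :-1]], dim=1)
        out, _ = self.controller(torch.cat([feats, shifted], dim=-1))
        return self.head(out)                                     # per-step logits

# Usage with a dummy encoder that mimics the MAE encoder's output shape.
dummy_encoder = lambda x: torch.randn(x.shape[0], 64, 192)
model = MannFineTuner(dummy_encoder, embed_dim=192, num_classes=5)
clips = torch.randn(2, 10, 3, 16, 64, 64)      # 2 episodes of 10 clips each
labels = torch.randint(0, 5, (2, 10))
print(model(clips, labels).shape)              # torch.Size([2, 10, 5])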
One benefit of applying meta-learning in this domain is that, if we assume videos of humans interacting with objects share some high-level structure, we can combine video clips from various human-object interaction datasets, allowing us to pre-train on more data. These combinations of benchmarks allow us to pinpoint whether applying meta-learning with MAE is effective for spatiotemporal learning, as well as the individual contribution of each.
To summarize, we devised a three-stage approach to reaching our proposed goals:
* Apply pre-trained MAE and fine-tune for video reconstruction downstream task
* Train MANN with MAE encoder on small-scale dataset and apply classification head for action classification downstream task
* Apply pre-trained MAE and fine-tune with MANN backbone for action classification downstream task
Figure <ref> visualizes the model architectures for each of the three approaches we implement.
§ EXPERIMENTS
For the first approach in our technical method, we fine-tune the pre-trained MAE on our small-scale dataset and evaluate against the baseline video MAE model pre-trained on Kinetics-400. We utilize a pre-trained MAE architecture sourced from the authors of the video MAE architecture, trained with the ViT-Large backbone on Kinetics-400 with a masking ratio of 90% and 1600 effective epochs (ref: Masked autoencoders as spatiotemporal learners, https://arxiv.org/abs/2205.09113).
For the second approach in our technical method, we train the MAE autoencoder with our small-scale ViT and fine-tune with a classification head on our small-scale composite dataset. We ran experiments training the full video MAE as well as training the video MAE outfitted with a classification head. Additionally, we evaluate training the video autoencoder with and without masking to analyze the difference in training loss and classification accuracy. Note that the autoencoders used for training in this set of experiments utilize our small-scale ViT backbone, which implements Shifted Patch Tokenization (SPT) to preserve locality-specific representations typically lost with small-scale datasets. Further, since the original work proposing small-scale ViT architectures implemented a small-scale ViT for image classification rather than video classification, we extend their work by adding spacetime attention to the small-scale ViT architecture in order to support 3D video data in the format of a time-indexed series of 2D images.
§.§ Datasets
For our experiments, we seek to perform spatiotemporal learning on video datasets. Initially, we started by utilizing the Kinetics-400 video dataset, consisting of 400 human-action classes each with at least 400 video clips (ref: The Kinetics human action video dataset, https://arxiv.org/abs/1705.06950). In total, the dataset consists of 306,245 video clips, each around 10 seconds in length with a resolution of 224 x 224 pixels. However, the size of this dataset is over 300 GB, and while it can be effectively used for the ViT-Base backbone with 84,943,656 parameters within the MAE encoder of the existing state-of-the-art representation learning architecture for video learning, it was not a feasible dataset within the scope of our project. Instead, we developed a small-scale ViT backbone within the MAE encoder architecture which has 3,109,008 parameters. Correspondingly, we sought to scale down the video dataset used for training our small-scale ViT backbone.
One aspect we considered while building our dataset was that, since we apply the MANN meta-learning framework for self-supervised spatiotemporal learning, we can combine multiple datasets of varying action-class distributions into a composite dataset in which each unique action class can be considered a new task during black-box adaptation with the MANN architecture. As a result, we were not limited to a single data source when constructing our small-scale dataset; instead, we utilized human-action video clips and annotations from a variety of input sources to generate our small-scale video dataset. In a semi-supervised dataset, labels are sparse; hence we hypothesize that a meta-learning-based approach that learns quickly from a small number of examples can excel where standard fine-tuning may not be sufficient.
Our composite small-scale video dataset was sourced from the Kinetics-400, MiniKinetics-200, and TinyVIRAT datasets. MiniKinetics-200 is a subset of the Kinetics dataset consisting of the 200 human-action classes with the most training examples and TinyVIRAT is a video dataset containing real-life tiny actions in videos collected from low resolution video cameras consisting of 12829 video clips. Our small-scale video dataset contains 24,927 video clips amongst 518 human-action classes. Each video clip in our dataset consists of 100 frames at a temporal resolution of 10 FPS, meaning that each clip is around 10 seconds in length. We scale all clips in our dataset to a resolution of 64x64 pixels to perform efficient training and achieve our project goals with the computational resources available to us. All spatial and temporal resolution downscaling was performed using the OpenCV Python package.
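The following sketch illustrates the kind of OpenCV-based preprocessing used to build the dataset; the exact frame-sampling and padding policy of our pipeline is an assumption here.

import cv2
import numpy as np

def preprocess_clip(path, num_frames=100, fps=10, size=(64, 64)):
    """Resample a source video to `num_frames` frames at `fps` and `size` pixels."""
    cap = cv2.VideoCapture(path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or fps
    step = max(int(round(src_fps / fps)), 1)   # keep every `step`-th frame
    frames, i = [], 0
    while len(frames) < num_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append(cv2.resize(frame, size, interpolation=cv2.INTER_AREA))
        i += 1
    cap.release()
    # Pad short clips by repeating the last frame so every clip has 100 frames.
    while frames and len(frames) < num_frames:
        frames.append(frames[-1])
    return np.stack(frames) if frames else None   # (100, 64, 64, 3), uint8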
We split our dataset into training and testing splits such that we reserve 18406 videos over 414 action classes for training and 6521 videos over 104 action classes for testing. For our implementation utilizing meta-learning for self-supervised spatiotemporal learning, each human-action class can be formulated as a distinct task, where our task training-testing split is roughly an 80-20 split. Kinetics-400, Mini-Kinetics200, and TinyVIRAT all include human-generated annotations of the video clips, which define the locations of individual video clips within the action classes.
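Since each action class is treated as a distinct task, episode construction reduces to sampling classes and clips, as sketched below; the episode sizes (n_way, k_shot, n_query) are illustrative rather than the exact values used in our runs.

import random
from collections import defaultdict

def sample_episode(clip_labels, n_way=5, k_shot=1, n_query=1):
    """Sample one N-way, K-shot episode by treating each action class as a task.

    clip_labels: dict mapping clip paths to action-class names.
    Returns (support, query) lists of (path, episode_label) pairs.
    """
    by_class = defaultdict(list)
    for path, cls in clip_labels.items():
        by_class[cls].append(path)
    classes = random.sample([c for c, v in by_class.items()
                             if len(v) >= k_shot + n_query], n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        clips = random.sample(by_class[cls], k_shot + n_query)
        support += [(p, label) for p in clips[:k_shot]]
        query += [(p, label) for p in clips[k_shot:]]
    return support, query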
§ RESULTS
For the first approach in our technical outline, we provide cross entropy loss results of a pre-trained MAE fine-tuned on our small-scale video dataset against a pre-trained baseline MAE architecture trained on Kinetics-400. For the sake of brevity, we provide experimental results for every 20th frame in the 100 frame video samples of our small-scale video dataset. We evaluate the pre-trained MAE baseline against our fine-tuned MAE model on the testing set of our small-scale video dataset consisting of 6521 100-frame video clips of 64x64 pixel resolution over 104 human-action classes. Table <ref> describes the averaged cross entropy loss for every 20th frame in the 100-frame video clips across the test set for our fine-tuned model compared against the pre-trained MAE baseline. The overall averaged cross entropy loss for all 100-frames across the test set in our pre-trained model was 0.1776, whereas the pre-trained MAE baseline was 0.1781.
We also provide a video reconstruction visualization for a single video in the testing split of our small-scale video dataset. Since we cannot show all 100 frames of this video reconstruction, we show a visualization of every 20th video frame reconstructed by our fine-tuned model in Figure <ref>.
For the second approach in our technical outline, we evaluate training our modified video MAE architecture with a small-scale ViT backbone end-to-end as well as training with a classification head attached for action classification tasks. These experiments were conducted on the TinyVIRAT dataset with 26 action classes, so we can formulate the experimental setting as a 26-way multi-class classification task. The end-to-end video MAE architecture with a small-scale ViT backbone contains 3.1 million parameters, while the video MAE architecture with the classification head contains 2.7 million parameters.
The top-1 accuracy for the end-to-end video MAE architecture with a small-scale ViT backbone was 37% and the top-5 accuracy was 75%. Figures <ref> and <ref> describe the training and validation curves of this end-to-end model. Note that since we do not normalize the loss value with the number of examples in the batch, the magnitude of the loss is not necessarily indicative of the model performance.
Additionally, we evaluate training the video autoencoder outfitted with a classification head, with and without masking, for our 26-way multi-class classification task. We consider a masking ratio of 80% when implementing masking. We find that the top-5 performance on the TinyVIRAT dataset is 76% with masking and 74.5% without masking. Figures <ref> and <ref> describe the training and validation curves for the video autoencoder with a classification head with masking implemented. Figures <ref> and <ref> describe the training and validation curves for the video autoencoder with a classification head trained without masking. Figures <ref> and <ref> describe the validation-split accuracy curve over training for the masked autoencoder and the autoencoder without masking, respectively.
When using a video autoencoder with shifted patch tokenization and a reduced number of parameters, in only 10 epochs of pre-training and 10 epochs of fine-tuning we get 46.8% top-1 accuracy, which is significantly higher than the previous methods we tested, indicating the importance of using shifted patch tokenization and of not masking during the fine-tuning phase.
§ CONCLUSION
To summarize, we apply self-supervised meta-learning for spatiotemporal learning on video data. We extend existing representation learning architectures for vision and video data and apply meta-learning through the black-box Memory Augmented Neural Network (MANN) architecture. We evaluate the effectiveness of applying MANN alongside Masked Auto Encoders (MAE) by tackling our goals for this project in a three stage approach.
Firstly, we experiment with fine-tuning a pre-trained MAE architecture on our custom small-scale video dataset. This small-scale video dataset is built by combining multiple human-action video datasets such as the TinyVIRAT, Kinetics-400, and MiniKinetics-200 datasets. Our experimental comparison of our fine-tuned model against a pre-trained MAE baseline shows that our model outperforms the pre-trained MAE architecture in terms of averaged cross entropy loss across all frames of the testing-split videos in our small-scale dataset, with a value of 0.1776 compared to the baseline's averaged cross entropy loss of 0.1781. However, since the difference between these two values is negligible – our fine-tuned model outperforms the baseline by 0.3% – we note that there is not a significant enough improvement from fine-tuning a pre-trained MAE architecture on our small-scale video dataset alone. We anticipated these results and hypothesize that, because the pre-trained model is very large and trained on hundreds of gigabytes of Kinetics-400 data whereas we fine-tune on our small-scale dataset of fewer than 25,000 video clips, fine-tuning this architecture directly does not have a noticeable impact on predictive power. Nevertheless, our fine-tuned model slightly outperforms the baseline pre-trained MAE architecture; however, there are not enough results, nor is the difference large enough, to suggest a trend.
Next, we experiment with training an end-to-end video MAE architecture with a modified small-scale ViT backbone. We evaluated this architecture on the TinyVIRAT dataset and formulated the problem as a 26-way multi-class video classification problem. The top-1 accuracy score was 37% and the top-5 accuracy score was 75%. We believe this is a significant accomplishment because the majority of existing benchmarks for the TinyVIRAT challenge utilize very large encoder architectures with hundreds of millions of parameters. However, we are able to achieve competent results on the TinyVIRAT dataset with a small-scale ViT backbone with just 3 million parameters.
Finally, we experiment with training a video autoencoder architecture with a classification head and evaluating the effect of masking. We similarly evaluated both the masked and non-masked architectures on the TinyVIRAT 26-way multi-class video classification task and find that the top-5 performance for the masked autoencoder architecture with an 80% masking ratio was 76%, while for the autoencoder without masking it was 74.5%. Comparatively, this shows that applying masking to the architecture improves action-class classification performance. However, with just 50 epochs used for training, we would need to continue running experiments and tune the masking ratio hyperparameter to confirm this trend.
§ FUTURE WORK
In the future, we want to experiment with fine-tuning the MANN architecture with and without a pre-trained video MAE. Another test we want to try is to replace MANN with other meta-learning implementations such as Model-Agnostic Meta-Learning (MAML), proposed by (ref: Model-agnostic meta-learning for fast adaptation of deep networks, https://arxiv.org/abs/1703.03400). We can also experiment with integrating text signals, such as utilizing pre-trained BERT embeddings generated on descriptions of videos in the action-class classification setting. We have made significant contributions to the TinyVIRAT codebase and could consider contributing to open-source implementations by providing our codebase for small-scale video MAE and meta-learning capabilities. Additionally, we have introduced a hook to export latent video frame representations, which can be used for future work by us and others. We believe we have created useful building blocks for building more advanced vision transformers for the spatiotemporal learning domain.
§ REFERENCES
* Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
* Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
* Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126–1135. PMLR, 2017.
* Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., Viola, F., Green, T., Back, T., Natsev, P., et al. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
* Lee, S. H., Lee, S., and Song, B. C. Vision transformer for small-size datasets. arXiv preprint arXiv:2112.13492, 2021.
* Xie, S., Sun, C., Huang, J., Tu, Z., and Murphy, K. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 305–321, 2018.
* Demir, U., Rawat, Y. S., and Shah, M. TinyVIRAT: Low-resolution video action recognition. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 7387–7394. IEEE, 2021.
* He, K., Chen, X., Xie, S., Li, Y., Dollár, P., and Girshick, R. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000–16009, 2022.
* Arnab, A., Dehghani, M., Heigold, G., Sun, C., Lučić, M., and Schmid, C. ViViT: A video vision transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6836–6846, 2021.
* Bansal, T., Jha, R., Munkhdalai, T., and McCallum, A. Self-supervised meta-learning for few-shot natural language classification tasks. arXiv preprint arXiv:2009.08445, 2020.
* Feichtenhofer, C., Fan, H., Li, Y., and He, K. Masked autoencoders as spatiotemporal learners. arXiv preprint arXiv:2205.09113, 2022.
* Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D., and Lillicrap, T. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, pages 1842–1850. PMLR, 2016.
* Graves, A., Wayne, G., and Danihelka, I. Neural Turing Machines. arXiv preprint arXiv:1410.5401, 2014.
|
http://arxiv.org/abs/2307.05615v1 | 20230711014020 | Laser light scattering (LLS) to observe plasma impact on the adhesion of micrometer-sized particles to a surface | [
"D. Shefer",
"A. Nikipelov",
"M. van de Kerkhof",
"V. Banine",
"J. Beckers"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
Eindhoven University of Technology, Department of Applied Physics, Eindhoven, 5600 MB, The Netherlands
[email protected]
ASML, Veldhoven, 5504 DR, The Netherlands
Eindhoven University of Technology, Department of Applied Physics, Eindhoven, 5600 MB, The Netherlands
ASML, Veldhoven, 5504 DR, The Netherlands
Eindhoven University of Technology, Department of Applied Physics, Eindhoven, 5600 MB, The Netherlands
ASML, Veldhoven, 5504 DR, The Netherlands
Eindhoven University of Technology, Department of Applied Physics, Eindhoven, 5600 MB, The Netherlands
The Laser Light Scattering (LLS) method, combined with a long-distance microscope, was utilized to detect micrometer-sized particles on a smooth substrate. LLS was capable of detecting individual particle release, shrinkage, or fragmentation during exposure to a plasma or a gas jet. In-situ monitoring of hundreds of particles was carried out to investigate the effect of hydrogen plasma exposure on particle adhesion, morphology, and composition. LLS was calibrated with monodisperse melamine resin spheres with known sizes of 2.14 μm, 2.94 μm, and 5.26 μm in diameter. The lowest achievable noise level of approximately 3% was demonstrated for counting 5.26 µm spherical melamine particles. The accuracy of melamine particle size measurements ranged from 50% for 2.14 μm particles to 10% for 5.26 μm particles. This scatter was taken as the imprecision of the method. The size distribution for polydisperse particles with known refractive index was obtained by interpolating to the effective scattering cross-section of a sphere using Mie theory. While the Abbe diffraction limit was about 2 μm in our system, the detection limit for Si particles in LLS according to the Mie approximation was assessed to be about 3 μm, given the limitations of the laser flux, microscope resolution, camera noise, and particle composition. Additionally, the gradual changes in forward scattering cross-sections for Si particles during the exposure to the hydrogen plasma were consistent with Si etching reported in the literature.
Laser light scattering (LLS) to observe plasma impact on the adhesion of micrometer-sized particles to a surface
J Beckers
October 2023
§ INTRODUCTION
Under some conditions, plasma exposure is known to cause the release of nanometer and micrometer-sized particles from surfaces.<cit.> Technologies sensitive to plasma-induced particle release are of special interest. For example, NASA’s study of the lunar and Mars surfaces confirmed suspended dust without settling.<cit.> This effect is attributed to UV or plasma charging and may have a negative impact. For example, the mobility of micrometer-sized particles in plasma presents a challenge to solar panel longevity. In another example, a reticle (integrated circuit photo-mask), used in Extreme Ultraviolet (EUV) lithography is highly sensitive to contamination with particles of 20 nm and larger.<cit.> Such particles may deposit on reticles even in the extremely clean environments of an EUV scanner in the presence of EUV-induced plasma.<cit.> Finally, in nuclear fusion plasma vessels (e.g. in ITER), plasma-facing walls releasing particles may deteriorate the gas mix. Because of tritium gas held in wall materials, dust generation in ITER is a serious concern, both from an erosion aspect and due to possible impurity release into the plasma.<cit.> With respect to all these applications, the study of the behavior of micrometer-sized particles attached to a surface and interacting with plasma is important. To enable further studies, the development of new in-situ diagnostic tools is highly relevant.
Traditionally used in the semiconductor industry, Laser Light Scattering (LLS) detects single particles on smooth or patterned substrates by analyzing light scattered into different angles from a relatively small illuminated spot (typically, around 10 µm).<cit.> Particles bigger than 1 µm scatter most of the light in the forward direction. Hence, a reflective substrate is a convenient method to improve such particle visibility.
With respect to the system of a particle attached to a surface, the particle adheres due to the combination of electrical, van der Waals (vdW), and capillary forces, as well as due to the particle’s chemical interaction with the surface. Adhesion depends on the particle’s size, composition, and morphology. A change in one of these parameters also affects the forward-scattered light intensity; hence, this can be used as a diagnostic method. In our work, we apply the LLS method, combined with long-distance microscopy, to image micrometer-sized particles. It will be demonstrated that the LLS method can be adapted in order to in-situ observe micrometer-sized particles on a surface placed in plasma or in other stressed conditions such as those caused by a gas jet. The advantage of the LLS method over traditional SEM measurement used in morphological diagnostics is the non-invasive in-situ manner of measuring which directly shows the impact of plasma treatment on particles during exposure.
§ APPARATUS AND DESIGN
Particles were deposited on the metallic side of the substrates; substrates used in all experiments were 1 inch in diameter polished sapphire wafers with 100 nm chromium coating. The mirror-finished wafers enable LLS to be operated in the dark field mode. The chromium coating is known to be robust against hydrogen embrittlement<cit.> and electrically conductive. The latter is necessary for SEM imaging before or after plasma exposure. Silicon (Si) particles were chosen in this work for the demonstration of the method because of the abundance of scientific literature on silicon including its etching by hydrogen plasma.<cit.> Melamine particles were selected because of their narrow standard deviation in size (when purchased commercially from Sigma Aldrich) and matte surface. Properties of the particles used in the experiments are listed in table <ref>.
The chromium substrates were contaminated with micrometer-sized particles using a Branson sonifier SFX 150 (40 kHz actuated tip). The sonifier disaggregated large clusters of particles by bringing its tip in contact with the edge of contaminated wafers. The average distance between the particles significantly exceeded their size (see Fig. <ref>), which suppressed the effects of interference and simplified imaging, sizing of particles, and analysis of the interaction with plasma.
A schematic overview of the used setup is depicted in Figure <ref>. The setup comprised two vacuum chambers (a main chamber for the plasma and gas jet exposures and a load-lock chamber) separated by a VAT gate valve that remained closed during experiments. The main chamber was a 20x20x20 cm^3 cube with one of the flanges used for connection to the plasma source and the gas supply. A second flange of this chamber had an integrated window with an anti-reflective coating for LLS imaging. A third flange of this chamber was equipped with Philips vacuum gauges (HPT 200 Pirani/Bayard-Alpert and PPT 200 AR) which were both hydrogen calibrated. The flange with the plasma head also held a stainless steel wafer holder and allowed the swapping of wafers via the load-lock. The ultimate pressure in the vacuum chamber, achieved by a turbo-molecular pump (Pfeiffer THU 200 MP) and a scroll dry pre-pump (Edwards XDS10), was 10^-4 Pa.
During the experiments with plasma exposures, hydrogen was supplied to the main chamber at 30 sccm, resulting in a steady state pressure in the range of 1-10 Pa (mostly 5 Pa) without throttling the turbo-pump. The hydrogen plasma was driven by an Electron Cyclotron Resonance (ECR) plasma source (Aura-Wave, Sairem) at 100 W of RF power providing T_e ≃ 5 eV, E_i ≃ 15 eV, and ion flux toward the wafer of about F ≃ 1 A/m^2 according to Shirai et al<cit.>. Under these conditions, the induced hydrogen radical (H^*) flux is expected to be 10 to 100 times higher than the H^+ flux due to a ∼10% chance of H^* association at the stainless steel walls of the main vacuum chamber compared to the 100% chance of H^+ ion neutralization at the walls.<cit.> Moreover, recombination of H_3^+ ions results in the generation of ∼2 radicals per event.<cit.> The selected conditions in this study featured a hundredfold more intense flux and approximately 5 times higher energy of ions compared to EUV-induced plasma<cit.>. Hence, the exhibited results may be considered as the exposure to EUV plasma afterglow, accelerated at around 100 times.<cit.>
For typical experiments, a sample with particles was brought through the load-lock chamber to the middle of the main chamber (using a manipulator) and mounted vertically, facing the window with an anti-reflective coating. A pulsed laser (EverGreen EVG00200, 70-200 mJ and 10 ns long pulses at 532 nm with 100-1000x attenuation by a grey filter) illuminated the wafer with a repetition rate of 0.71 Hz (1.4 s between pulses). The laser beam, guided by mirrors, was expanded to 0.5 cm in diameter by two plano-convex lenses and entered the chamber through the window at about 10^∘, reflected from the metal surface of the wafer, exited the chamber at 10^∘, and was finally directed to a beam dump. The light scattered by particles on the surface was collected by a long-distance microscope (Distamax K2) with a working distance of 180 mm and a fully open aperture (diameter of 5 cm), with a CMOS camera (FLIR Grasshopper3) mounted to it. Pulsed laser illumination was chosen instead of illumination by a CW laser to reduce the blurriness caused by the vacuum-pump-induced vibrations transferred to the microscope.
The camera shutter was synchronized (Fig. <ref>) with the laser pulse by a signal delay generator (Model 577, BNC). Relatively short (140 µs) camera exposures helped to reduce the impact of the light from the plasma on the image background signal. The camera was configured to save 24-bit images with a resolution of 4,096 x 2,160 pixels. The pixel size was 3.45 x 3.45 µm^2, the quantum efficiency was 64%, and the dynamic range was 65.15 dB. The maximal camera noise was 40.3 dB. The CMOS matrix size in combination with magnification by Distamax K2 and the distance to the sample (around 18 cm) produced a field of view (FoV) of 3 x 2 mm. This microscope FoV with a fully opened diaphragm was aligned with the illumination laser spot and the contaminated center of the wafer. The following camera settings were used: gain 48, gamma 0, black level 0, balance ratio 1.14, digital zoom - off, picture enhancer - off, full automatic control - off, auto exposure - off, auto white balance - off, black & white compensation – off. The camera's gain had the greatest influence on the recognition of particles in post-processing steps.
The acquired images were analyzed by a self-developed Python script. This script extracted the number of particles, their coordinates, and their total integrated intensities and sizes. The way the size distribution of the particles was obtained using Mie theory is discussed below. To minimize the impact of laser beam power density fluctuations, the script applied a running average of 5 over the images, which was found to be an optimal value for the trade-off between the noise level and the time resolution achieved. The averaged total integrated scattering (TIS) of an image was computed by the script by summing the intensities of all pixels.
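A minimal sketch of the averaging and TIS computation performed by the script is given below; the threshold is treated as an input, and the actual script differs in implementation details.

import numpy as np

def total_integrated_scattering(frames, threshold):
    """Running average over 5 laser shots followed by thresholding and summing.

    frames: iterable of 2-D arrays (one per laser pulse), as read from the CMOS
    camera. The threshold is a measured camera-noise level supplied externally.
    """
    frames = np.asarray(frames, dtype=float)
    tis = []
    for i in range(len(frames) - 4):
        avg = frames[i:i + 5].mean(axis=0)        # running average of 5 images
        avg[avg < threshold] = 0.0                # suppress background / noise
        tis.append(avg.sum())                     # total integrated scattering
    return np.array(tis)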
The main chamber was also equipped with a flushing jet, which exhausted nitrogen gas pulses through a 4 mm tube placed at a 5 mm distance from the wafer and facing its center at 45^∘. This flushing could be used to remove loosely bound particles from the substrate when the shear force exceeds the vdW force with which the particles are bound to the surface. The pulsed flushing was realized through a quick valve (DVI 005 M Pfeiffer) and a calibrated orifice (1.016 mm, Swagelok) that limited the flow. The pressure in the nitrogen line was measured by a Pfeiffer gauge (CPT 200 DN). The "flushing" jet could reach up to 6 nlm at the peak of the pulse. The main chamber had a bypass line to a volume extension vessel of 100 liters, separated from the main chamber by a VAT HV gate valve. During the flushing experiments, the turbo-pump was switched off and the bypass line was open. During plasma experiments, however, the bypass line remained closed. The extended vessel had its own pre-pump (Leybold SCROLLVAC 10). The sum productivity of the two pre-pumps for flushing experiments resulted in about 5 l/s at 100 Pa. The flushing pulses of 100 ms to 20 s were limited by the pre-pump productivity: long flushing pulses increased the pressure in the main chamber at the rate of 10 Pa in 10 s.
In addition, to verify the accuracy of the LLS setup calibration for measuring the sizes of silicon particles, a similar (but not the same) sample with silicon particles was qualified using SEM. The size distribution diagram obtained by SEM in a scanned area of 3x3 mm, analyzed by self-developed software, was compared with the size distribution diagram obtained by LLS.
§ SETUP CALIBRATION
The LLS technique enables monitoring of changes in the number of attached particles (Fig. <ref>), as well as changes in the size distribution during exposure to plasma and flushing. The Figure shows the stages of image processing of Si particles before and after 6h of exposure to hydrogen plasma. The image clearly shows a change in the number of particles. In order to demonstrate the stability of the optical system, a seven-hour measurement of fixed-size particles (melamine) is used to calibrate the counting of particle numbers (see section <ref>). Furthermore, a calibration for obtaining particle size distributions is performed based on Mie theory with a correction for the refractive index (see sections <ref> and <ref>). Finally, in section <ref> the calibration of the total substrate scattering will be demonstrated.
§.§ Particle number evaluation
Evaluating the number of particles on the surface is challenging. For example, the resolution of the long-distance microscope is limited by the Abbe diffraction limit determined by the closest distance at which two separate sources of light can be distinguished from one another. This limit is expressed by<cit.>
d ≈λ/(2NA)
where d is the minimum resolvable distance between two sources of scattered light, λ is the wavelength of the laser light (532 nm) and NA is the numerical aperture (which in our configuration equals 0.137). Therefore, the resolution of our system is limited to approximately 1.9 μm.
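As a quick numerical check of these values (assuming the NA follows from the 5 cm objective aperture at the 18 cm working distance quoted above):

import numpy as np

wavelength = 532e-9                 # m
aperture_diameter = 0.05            # m, fully open objective
working_distance = 0.18             # m
na = np.sin(np.arctan(aperture_diameter / 2 / working_distance))
d = wavelength / (2 * na)
print(f"NA = {na:.3f}, Abbe limit d = {d*1e6:.2f} um")   # NA ~ 0.14, d ~ 1.9 um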
The imaging of particles is limited not only by Abbe diffraction but also by the physical vibrations of the optical system, and variations of the particle shape and composition. In our experiments, the influence of camera noise, intensity fluctuations of the laser beam, and laser multimodality were also noted. Due to the limited coverage of these effects in the literature, comparisons were not made. Experimental uncertainties can be evaluated from measurements of scattering light from a stationary sample without disturbances. To enable this evaluation, a 7-hour-long imaging experiment of highly monodisperse 5.26 µm melamine spheres (see Table <ref> with samples) was conducted. Note that in this experiment no flushing or plasma exposure was applied. The results (Fig. <ref>), demonstrate high laser stability and low counting uncertainty. In this experiment, the laser illumination and camera settings were identical to the experiments with plasma and flushing. It was shown that the dispersion of the number of detected particles was about 3% (which is the lowest achievable noise level) with no long-term trends.
§.§ Size distribution of particles in LLS
Knowing the size distribution of processed particles is important. For instance, if large particles are more susceptible to adhesion-lowering external stress factors, such as those induced by exposure to plasma or a gas jet, the size distribution could shift toward smaller sizes. In another example, if exposure to plasma were to lead to a more developed surface and, thus, to a higher reflection coefficient of the incident light, particles under the detection limit would become visible again, and particles that were already above the detection limit would shift toward larger sizes.
The determination of the particle size distribution is even more complicated than the counting of particles. As generally known, CCD and CMOS cameras can be subjected to an effect called "blooming".<cit.> This blooming means that oversaturated pixels leak excess charge to their neighboring pixels. This process propagates until it reaches the edge, visibly and virtually enlarging the particle. Imaging the entire particle requires sufficient illumination, and most of the particles under study scatter light in the flat Top-Hat regime, which means oversaturation of the pixels' capacity. Hence, the detected particle size as a number of bright pixels above the threshold is not consistent with the true particle size. A 2 μm particle occupied around 50 bright pixels (about 7 pixels in diameter) on the camera when in FoV. The only invariant in this problem is the integral of the photo-induced electrons in the camera's matrix or, in other words, the scattering efficiency of individual particles.
Additional filtering must be applied before integrating the intensities of the pixels imaging the particles. After averaging the intensities of 5 images of 5 laser shots and applying the threshold value, the script filters tiny features (below 10 bright pixels in size). There are two reasons for this filtering. The first reason is that the high camera gain (max value, 48), used for high sensitivity, produces a few hot pixels that occur even without laser illumination and do not correspond to an actual signal. These hot pixels must be removed. The second reason relates to the presence of particles with sizes close to the detection limit. Due to the fluctuating laser intensity, these detections can appear and disappear from the detection region, significantly enhancing the noise level. Thus, by removing them, we focus on the residual population of particles that can always be identified with high confidence.
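A sketch of this filtering and per-particle integration step, using connected-component labeling, is shown below; the self-developed script may differ in implementation details.

import numpy as np
from scipy import ndimage

def detect_particles(avg_image, threshold, min_pixels=10):
    """Label bright connected regions and integrate each one's intensity.

    Regions smaller than `min_pixels` are discarded, removing hot pixels and
    particles too close to the detection limit to be tracked reliably.
    """
    mask = avg_image > threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_pixels) + 1
    intensities = ndimage.sum(avg_image, labels, index=keep)    # per-particle I_m
    centroids = ndimage.center_of_mass(avg_image, labels, index=keep)
    return intensities, centroids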
The correct approach would be to look at the scattering intensity of individual particles. As is generally known, particles of several micrometers in size obey Mie scattering theory.<cit.> The algorithm processing the collected images worked as follows. First, the script averaged the intensities of 5 captured frames. Second, after applying the threshold, the intensities of images of the particles with an area larger than 10 pixels were integrated. Third, the scattering cross-section of the particle was calculated by scaling the total intensity with a constant, which is a fitting parameter of this model (see Eq. 2). Finally, an equivalent sphere with the same scattering cross-section and the known refractive index was calculated using Mie theory, from which the size of the sphere/particle was derived. Therefore, measured scattering cross-sections can be translated into actual particle sizes using the Mie model for the light scattering by an individual particle. For this, a Mie calculator<cit.> was used to evaluate the effective cross-sections of the particles for different particle sizes (from 0.1 to 7 µm). The absorption of light by the particles was not taken into account in the calculations due to a lack of available data. The results of the calculations for particles with a variety of refractive indices n from 1.87 to 4.15 and the light collected in the NA corresponding to the microscope are plotted in Figure <ref>.
In the Mie model, a spherical particle is situated in vacuum and emits light in all directions. Particles whose sizes are several times larger than the wavelength of the incident radiation predominantly scatter light forward and backward. We considered a model in which particles are positioned on a reflecting substrate, thus collecting only a portion of the forward and backward scattering into the NA of the microscope (NA = 0.137 for an objective lens with a diameter of 5 cm and a distance of 18 cm from the particles). It is worth noting that near-field effects due to reflection from the substrate were not taken into account. All calculations were performed assuming an isolated particle in vacuum with scattering confined to the chosen NA of the microscope.
This graph shows that the particle's composition (i.e. the particles' refractive index) is more important for bigger sizes. Smaller particles are more sensitive to shape alterations. Our approach is to measure the scattering efficiency for the particles of known size and composition (in our case, monodisperse melamine spheres) as calibration. After this, for any material (i.e. refractive index) of interest, the cross-section of each particle can be translated into the size using the corresponding calibration curve from Figure <ref>.
§.§ Effective scattering cross-section calibration
In order to use the curves from Figure <ref>, they have to be calibrated. The measured intensities were fitted with the Mie curve. The results of this fit can be seen in Figure <ref>. The arrows indicate the measured cross-sections. The blue dashed line indicates the I_o value and can be considered as the detection limit of this method (it is attributed to the camera noise, which is comparable to the signal of the smallest detected particles). According to the manufacturer, the particle sizes are rather monodisperse, with only a small standard deviation (see table <ref>), while the measured intensities had some uncertainty. The scattering cross-sections of the melamine particles were fitted using the formula
I_ec = (1 / α)· A · I_m + I_o
where I_ec is the effective scattering cross-section and I_m is the particle intensity measured by LLS. The constant A equals 700 and is related to the conversion of the laser intensity to the camera counts (or pixel counts). The constant α is the intensity correction factor. The applied laser intensity depended on the size of the particles and was 1x, 14x, and 20x for the 5.26, 2.94, and 2.14 µm particles, respectively. Therefore, for the purpose of laser intensity normalization, the intensity factor α was taken equal to 1, 14, and 20 for measurements on 5.26, 2.94, and 2.14 µm particles, respectively. The parameter I_o remained constant for all fits and was taken equal to 8.5 μm^2. Physically it can be attributed to the losses of higher orders of diffraction, reflections from substrate asperities, and camera noise.
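The sketch below applies this calibration and then inverts a precomputed Mie curve by interpolation to obtain equivalent-sphere diameters; the Mie curve itself is assumed to come from an external Mie calculator, and the constants are those quoted above.

import numpy as np

def particle_diameters(i_measured, mie_diameters, mie_cross_sections,
                       alpha=1.0, a=700.0, i_o=8.5):
    """Convert measured per-particle intensities to equivalent-sphere diameters.

    Applies the calibration I_ec = (1/alpha) * A * I_m + I_o from Eq. (2), then
    inverts a precomputed Mie curve (diameter vs. effective cross-section for
    the particle's refractive index) by interpolation. `mie_cross_sections` is
    assumed to be sorted in increasing order, with matching `mie_diameters`.
    """
    i_ec = (1.0 / alpha) * a * np.asarray(i_measured) + i_o     # um^2
    return np.interp(i_ec, mie_cross_sections, mie_diameters)   # um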
The uncertainty of the cross-sections (and, related to it, the uncertainty of the sizes specified by the supplier) can be considered as the error bars of the method. For example, the determination of the size of the 2.14 µm particles has an uncertainty of about ±1 µm, which is 50% of their size. This explains why the 2.14 and 2.94 µm particles appear to have the same scattering cross-sections. At the same time, the determination of the size of the 5.26 µm particles has an uncertainty of about ±0.5 µm, which is only 10% of their size.
§.§ Calibration of the total substrate scattering
In addition to the measurements of the number of particles and the particle size (distribution), another possibility is to look at the total integrated scattering from the field of view of the microscope. Technically, the summed and averaged intensity of all pixels is like an analog signal and, therefore, is more reliable as it avoids any image processing other than thresholding for noise removal.
As mentioned, particles of several micrometers in size - as is the case here - obey Mie scattering theory: the scattered intensity is proportional to the particle cross-section (or to r^2 of the particle, where r is the radius) and depends on multiple parameters such as n, k and D/λ, and the polarization of the incident and collected light.<cit.> For instance, melamine resins have n = 1.872, k = 0 (extinction coefficient is approximately zero for melamine-based materials in the visible range of wavelengths<cit.>), D/λ is equal to 4.0, 5.5, 9.9 (for 2.14, 2.94 and 5.26 μm particles respectively). The incident light in our experiments was polarised perpendicular to the plane made up by the incoming beam, the reflecting beam, and the camera. The reflected light was not measured but expected to remain unchanged for particles significantly exceeding the wavelength of the radiation. A change in one of these parameters can be diagnosed by the TIS approach.
The resolution limit of the TIS can be derived by matching it, again, with the Mie calculations for the given size, refractive index, and NA. The amount of scattering by a single particle was obtained by dividing the TIS by the number of detected particles of fixed size (melamine samples in table <ref>). The sizes of the particles were taken according to the values declared by the manufacturer. The results of this calibration (Fig. <ref>) show a perfect match with the previously calibrated scattering cross-sections, which proves that the filtering, thresholding, and image processing used in the previous subsection do not contribute significantly to the uncertainty in size determination. The good match is explained by testing monodisperse spheres with low standard deviation. When applying the TIS signal to polydisperse particles, the match will be poorer. Therefore, it can be concluded that the resolution of the TIS measurements is the same as that of the effective scattering cross-sections of individual particles.
§ RESULTS FOR LLS MEASUREMENTS OF SILICON PARTICLES EXPOSED TO FLUSHING AND PLASMA
Silicon particles were exposed to a series of external stress factors such as flushing and plasma. The sequence of flushing-1 (10 min), plasma exposure (24 h), and flushing-2 (10 min) was applied to a wafer contaminated with Si particles. The flushing power was selected as a middle ground based on the following considerations. The flow must be strong enough to remove a noticeable amount of particles (exceeding the noise level of about 3% as obtained in the calibration section). Physically, this would imply that the flushing shear force and the average adhesion force are comparable. Flushing removes particles, while adhesion keeps them in place. If a particle remains on the substrate after flushing, it means the adhesion force is equal to or greater than the removal force. The flushing (using nitrogen gas) used in the sequence consisted of 3-second long pulsed exhausts (6 nlm flow) at a frequency of 0.01 Hz (every 100 sec). Each flushing campaign lasted 10 min. Between two flushing campaigns, the samples were exposed to the hydrogen ECR plasma with the parameters described before. The quantification of the results used the calibrations described in the previous section.
The top graph in Figure <ref> shows the derived number of particles recorded over the experiment. The types of exposures (flushing or plasma) are mapped in different colors. Baselines (no exposures, only pressure changes) are shown in red, flushing campaigns are shown in green and the plasma exposure is shown in yellow. The plot shows that a significant amount of particles was flushed after the first few pulses. Further flushing appears to be ineffective, meaning that the remaining particles are attached with a force exceeding the applied shear force. The intermediate part of the experiment, during plasma exposure, clearly shows that the number of particles monotonically decays over the exposure which indicates the effect of plasma exposure on the particles' adhesion. This effect is the quantification of the impact shown in the grabbed images from the camera (Fig. <ref>). The bottom graph in Figure <ref> shows the TIS signal which correlates with the top graph and confirms that the intensity drop correlates with the number of scattering centers. The more rapid decay of the TIS signal compared to that of the number of particles during the first hour of plasma exposure needs more investigation. However, hypothetically, this effect could be explained by the presence of a native oxide shell or an adsorbed water layer around the particles that have different n and k (i.e. lower scattering), the oxide shell disappears after the first exposure to hydrogen plasma. After this phase, the scattering is proportional to the number of particles.
The interpretation of the gradual decrease of Si particles during plasma exposure can be the following. First, upon plasma impact, a particle may develop asperities across its surface which reduces the effective vdW force which, in turn, promotes the specific particles to be released.<cit.> An alternative could be the weakening of the interfacing (binding) atomic layers mechanism, e.g. removal by plasma of intermediate adsorbate layers or removal of water forming hydrogen bridges.<cit.>. Another possible explanation could be the etching of the particles' material. The silane molecule SiH_4 is a formation product of sputtered Si atoms reacting with free hydrogen radicals, and it is volatile under our conditions. If the particles - due to this etching - shrink below the detection limit, they disappear from the sub-set of particles detected by the script, and the number of particles is reduced. The second flushing campaign was not necessary due to the lack of remaining measurable particles. Overall, these measurements show that the particles with the adhesion force exceeding the shear force during the first flushing campaign became loose due to plasma exposure. The results are consistent with literature data about the etching of silicon in hydrogen plasma.<cit.>
The histograms in Figure <ref> show the comparison of the size distributions of Si particles (black bins) after deposition (on the left), after the flushing (in the middle), and after 6h of H_2 plasma exposure (on the right). In addition, the size histogram obtained from SEM measurements on a similar (but not the same) sample with virgin Si particles (scanned over an area of 3 mm x 3 mm ) is demonstrated in purple on the left "as-deposited" histogram for comparison. The particle size distribution histograms generated from in-situ laser light scattering (LLS) measurements were derived using the calibration procedure described above. The recorded intensities of Si particles were recalculated into sizes using the black curve from Figure <ref> corresponding to silicon. The uncertainty of the method for these particles is the same as for melamine particles. The blue dashed line indicates the detection limit of the system which depends on n. In fact, the detection limit is determined by the size, at which the constant I_o intersects with the Mie calculation curve. For Si particles with n = 4.15, the detection limit is around 3 µm.
The histogram of as-deposited particles demonstrates the good matching of mean values in the calibrated LLS measurement results compared to size histograms obtained using SEM. The slight deviation in sizes is explained by the fact that the SEM measurements were carried out on a similar, but not the same, sample with Si particles (to prevent carbonization of particles in the SEM and its influence on LLS measurements). It can also be seen from the plot that after the first flushing a small fraction of the detected particles was removed, with no measurable difference in size distribution. Despite the fact that the flushing force scales as d^2 and adhesion should scale as d, we did not see preferential removal of bigger particles, which can be attributed to the importance of other factors, such as size, shape, and roughness. As mentioned before, this result indicates that the remaining particles have an adhesion force to the surface that exceeds the shear force exerted by the flushing. As already shown in Figures <ref> and <ref>, the number of particles decays over the duration of hydrogen plasma exposure, while the histograms in Figure <ref> show that the particle size distribution shifts down and toward smaller sizes (together with the mean value shown as a red dotted line). As soon as a particle's size reduces to the one indicated by the blue line (i.e. the detection limit), the particle disappears from the histogram and from the visibility of the script, as it will not be detected anymore.
Therefore, the reliability of the recognition software has been tested based on three types of measurements:
* The stability of the number of particle detections was demonstrated in Figure <ref> for non-disturbed particles (without stressors like flushing or plasma) on a substrate.
* The reliability of the obtained size distribution is shown in Figure <ref>, where the LLS measurements were compared to the SEM data (black bins vs purple bins).
* The average scattering cross-section of a melamine particle using the TIS signal was compared to individually detected particles and demonstrated a good match in Figures <ref> and <ref>. The TIS was treated as an analog signal for changing the scattering efficiency of particles.
The obtained size histograms indicate that etching, which shrinks particles below the detection limit, is the dominant mechanism of Si particle interaction with H_2 plasma. As can be seen from the middle and right histograms, the largest particles showed the highest percentage reduction, and the percentage gradually decreased toward the smallest particles. There are two reasons for this: 1) bigger particles shrink and take the place of smaller particles (hence, a relatively constant number of small particles remained); 2) etching of Si by chemical sputtering of hydrogen radicals is only possible when accompanied by energetic electrons and ions from the plasma breaking Si—Si bonds.<cit.> Consequently, etching occurs where the particles interact with ions; hence, the particles are etched more from the top than from the sides (as also demonstrated in AFM measurements<cit.>). This explains why the entire histogram does not shift toward smaller sizes as a whole.
§ CONCLUSIONS
The present study demonstrates the application of LLS, combined with long-distance microscopy, to characterize in situ the response of micrometer-sized silicon particles on a smooth substrate to hydrogen plasma exposure or to a flushing gas jet. The number of particles, the particle size distribution, and the total scattering intensity (TIS) measured by laser light scattering (LLS) were calibrated with monodisperse melamine resin spheres. The results indicate that the counting accuracy was approximately 3% for 5.26 µm melamine spheres. Furthermore, the observed inconsistency in relating the count of only the bright pixels to the particle size was attributed to the blooming effect. Therefore, Mie theory was applied to convert the calibrated effective particle scattering cross-sections to equivalent sizes. The accuracy of the LLS size measurement was found to be between 50% for 2.14 µm particles and 10% for 5.26 µm particles.
Surface-deposited silicon particles were employed for LLS measurements in order to demonstrate the effectiveness of the method as an in-situ diagnostic for visualizing the effect of plasma exposure. The effect of plasma on Si particles is complex and may involve particle size and shape evolution due to chemical or physical sputtering. The in-situ measured count and size evolution show that etching of Si is dominant under H_2 plasma exposure. The etching is mostly driven by hydrogen ions. This is consistent with literature data obtained from SEM measurements. Additionally, SEM measurements conducted on virgin silicon particles showed a high degree of concordance with the size distribution calculated using LLS and Mie theory.
In conclusion, LLS can serve as a tool for in-situ measurement of the flushing, fragmentation, or etching of micrometer-sized particles during plasma exposure or gas jet flushing, providing a statistical description of adhesion for many (hundreds to thousands of) particles exposed to the same stressor.
The assistance of P. Sanders, A. B. Schrader, J. T. Kohlhepp, and P. Minten in assembling the setup, as well as ASML in financial and scientific support, is gratefully acknowledged.
|
http://arxiv.org/abs/2307.04753v1 | 20230710175625 | Redshifting galaxies from DESI to JWST CEERS: Correction of biases and uncertainties in quantifying morphology | [
"Si-Yue Yu",
"Cheng Cheng",
"Yue Pan",
"Fengwu Sun",
"Yang A. Li"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
[email protected]
Chinese Academy of Sciences South America Center for Astronomy, National Astronomical Observatories, CAS, Beijing 100101, People's Republic of China
Department of Astronomy & Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721, USA
Department of Astronomy, School of Physics, Peking University, Beijing 100871, China
Observations of high-redshift galaxies with unprecedented detail have now been rendered possible with the James Webb Space Telescope (JWST). However, accurately quantifying their morphology remains uncertain due to potential biases and uncertainties. To address this issue, we used a sample of 1816 nearby DESI galaxies, with a stellar mass range of 10^9.75 –11.25 M_⊙, to compute artificial images of galaxies of the same mass located at 0.75≤ z≤ 3 and observed at rest-frame optical wavelength in the Cosmic Evolution Early Release Science (CEERS) survey. We analyzed the effects of cosmological redshift on the measurements of Petrosian radius (R_p), half-light radius (R_50), asymmetry (A), concentration (C), axis ratio (q), and Sérsic index (n). Our results show that R_p and R_50, calculated using non-parametric methods, are slightly overestimated due to PSF smoothing, while R_50, q, and n obtained through fitting a Sérsic model do not exhibit significant biases. By incorporating a more accurate noise effect removal procedure, we improve the computation of A over existing methods, which often overestimate or underestimate the noise contribution, or lead to significant scatter. Due to PSF asymmetry, there is a minor overestimation of A for intrinsically symmetric galaxies. However, for intrinsically asymmetric galaxies, PSF smoothing dominates and results in an underestimation of A, an effect that becomes more significant with higher intrinsic A or at lower resolutions. Moreover, PSF smoothing also leads to an underestimation of C, which is notably more pronounced in galaxies with higher intrinsic C or at lower resolutions. We developed functions based on resolution level, defined as R_p/FWHM, for correcting these biases and the associated statistical uncertainties. Applying these corrections, we measured the bias-corrected morphology for the simulated CEERS images and find that the derived quantities are in good agreement with their intrinsic values – except for A, which is robust only for angularly large galaxies where R_p/ FWHM≥ 5. Our correction functions can be applied to other surveys, offering valuable tools for future studies.
Redshifting galaxies from DESI to JWST CEERS: Correction of biases and uncertainties in quantifying morphology
Si-Yue Yu<ref>Humboldt Postdoctoral Fellow,
Cheng Cheng<ref>,
Yue Pan<ref>,
Fengwu Sun<ref>,
Yang A. Li<ref>
Version of June 20, 2023
======================================================================================================================================================================================
§ INTRODUCTION
Galaxy morphology has been traditionally described in a qualitative way using the Hubble sequence <cit.>, which is widely recognized as a fundamental aspect in the study of galaxy formation and evolution. With its unprecedented sensitivity and resolution in the infrared, the James Webb Space Telescope (JWST) is making significant advances in our understanding of the origin of the Hubble sequence. Previous studies using the Hubble Space Telescope (HST) suggested that the majority of galaxies at z>2 are peculiar <cit.>. However, early JWST studies reveal a large fraction of regular disk galaxies at high redshift <cit.>. These high-redshift disk galaxies can have spiral arms and bars similar to those in Local Universe <cit.>. Galaxies with established disk and spheroidal morphologies span the full redshift range <cit.> and the Hubble Sequence was already in place as early as z≈ 6 <cit.>.
Quantifying morphology is crucial for exploring galaxy evolution and it can be achieved using both non-parametric and parametric methods <cit.>. The Cosmic Evolution Early Release Science (CEERS) survey (PI: Finkelstein, ID=1345, ; ) is an early release science program in Cycle 1 that observes the Extended Growth Strip field (EGS) of the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey <cit.>. The CEERS will observe ten pointings with the Near-Infrared Camera <cit.>, covering a total of 100 sq. arcmin. By quantifying morphology of galaxies from the first four pointings, <cit.> and <cit.> show that spheroids exhibit a higher average Sérsic index, smaller size, and rounder shape compared to disks and peculiars, although selection effects may exist. Additionally, the average Sérsic index decreases with increasing redshift <cit.>. Despite the slightly higher average concentration in spheroids and slightly higher average asymmetry in peculiars, the concentration-asymmetry diagram does not provide a clear separation of galaxies according to their morphological types <cit.>.
However, these early results may not accurately reflect the intrinsic galaxy morphology as the physical resolution and signal-to-noise ratio (S/N) of the JWST images of high-redshift galaxies are limited. Furthermore, comparing galaxy morphologies at different redshifts and/or observed by different instruments is challenging as changes in resolution, noise level, and rest-frame wavelength can alter the morphology and introduce biases and uncertainties in the quantification.
A commonly used strategy for understanding measurement biases and uncertainties caused by image degradation is to use high-quality images of low-redshift galaxies to generate simulated images of high-redshift galaxies and, subsequently, to compare the measurements before and after the image simulation. The pioneering work in this area was done by <cit.>. This strategy has been used to study spiral structure <cit.>, bar structure <cit.>, concentration-asymmetry-smoothness statistic <cit.>, Gini-M_20 statistic <cit.>, and Sérsic index and size <cit.> of high-redshift galaxies observed by HST. Despite the widespread use of this technique, their image simulation procedures did not take into account the intrinsic galaxy size evolution and cannot be directly applied to galaxies with high redshifts. It has been found that there is a strong redshift evolution in galaxy size <cit.>. By using data from CANDELS, <cit.> show that galaxies of a given stellar mass are on average smaller at higher redshifts, with fast evolution for early-type galaxies and moderate evolution for late-type galaxies.
Building on prior studies and by taking into account all relevant factors of image simulation, our goal is to use high-resolution and high-S/N nearby galaxy images to generate artificial images of galaxies located at redshift 0.75≤ z≤3 and observed at optical rest-frame wavelength in JWST CEERS. We then go on to investigate biases and uncertainties in the quantification of galaxy morphology, focusing on six key morphological quantities: Petrosian radius, half-light radius, asymmetry, concentration, Sérsic index, and axis ratio, which are commonly used to describe the typical morphology of a galaxy. We aim to derive corrections to these biases and uncertainties to improve the accuracy and robustness of future galaxy morphology studies.
This paper is organized as follows. Section <ref> outlines our sample selection of nearby galaxies and the data reduction process. Section <ref> provides a detailed description of the image simulation methodology. Section <ref> discusses the biases and uncertainties, and derive correction functions. Section <ref> validates the effectiveness of the correction functions. Finally, a summary of the main findings is presented in Sect. <ref>. Throughout this work, we use AB magnitudes and assume the following cosmological parameters: (Ω_ M, Ω_Λ, h)=(0.27, 0.73, 0.70).
§ OBSERVATIONAL MATERIAL
§.§ Sample of nearby galaxies
We restricted the redshift range of our image simulations to z=0.75, 1.0, 1.25, ..., and 3.0. We refrained from simulating images of galaxies at z>3, as the evolution of galaxy optical size at these redshifts is not yet well constrained. For each target redshift, we chose the filter that observes the rest-frame optical wavelength (λ=5000–7000 Å). Specifically, we used the F115W filter for z=0.75 and 1, the F150W filter for z=1.25, 1.5, and 1.75, and the F200W filter for z=2.0 to 3.0. To generate artificially redshifted galaxy images, nearby (z≈0) galaxy images observed in a similar rest-frame wavelength range are required. This range is covered by the g, r, and z bands (with effective wavelengths: 4796 Å, 6382 Å, and 9108 Å) provided by the Dark Energy Spectroscopic Instrument (DESI) Legacy Imaging Surveys[http://legacysurvey.org/] <cit.>.
The DESI Legacy Imaging Surveys comprise three public projects: the Beijing-Arizona Sky Survey <cit.>, the Mayall z-band Legacy Survey <cit.>, and the Dark Energy Camera Legacy Survey <cit.>. We focus on the DECaLS g-, r-, and z-band images, while the MzLS z-band nearby galaxy images are found to be significantly affected by pattern noise and are therefore excluded from our analysis, along with their corresponding BASS g- and r-band images. We defined our sample of nearby galaxies using the Siena Galaxy Atlas (SGA)[https://www.legacysurvey.org/sga/sga2020/], which is constructed based on the DESI and includes 383,620 galaxies. The SGA does not include small objects with D_25<20 arcsec, where D_25 is the B-band isophotal diameter at μ_B=25 mag arcsec^-2, as a high fraction of them are spurious sources.
We selected galaxies with available Hubble type classification and best-estimated luminosity distance (D_L) from the HyperLeda extragalactic database[http://leda.univ-lyon1.fr/] <cit.>. The best-estimated distance is a weighted average of the distance derived from the spectroscopic redshift and published redshift-independent distances, and it provides a homogeneous distance estimate over the whole redshift range <cit.>. We selected galaxies with 12.88≤ D_L ≤ 65.01 Mpc, corresponding to cosmological z= 0.003–0.015. We excluded galaxies in the Galactic plane (-20≤ b≤ 20) to avoid any possible severe photometric problems caused by crowded foreground stars. We excluded galaxies that are covered by masks of nearby bright sources provided by the SGA, as the emission from these galaxies is severely contaminated by the bright sources. This process will remove merging systems where the centers of two galaxies are close but have not yet merged into one. The SGA catalog also includes mid-infrared photometry from the Wide-field Infrared Survey Explorer <cit.>. We used the DECaLS g-, r-, and z-band flux and WISE W1 flux to estimate stellar mass (M_⋆) through SED fitting using CIGALE[https://cigale.lam.fr/; we use 2022 version] <cit.>, assuming the Chabrier stellar initial mass function <cit.>, a double exponential star formation history, the simple stellar population models of <cit.>, and the attenuation law of <cit.>. We selected galaxies with M_⋆ of 10^9.75 –11.25 M_⊙. The mass cut was applied because we use the galaxy size evolution derived by <cit.>, which is only available in this mass range for both early- and late-type galaxies. Our final sample of nearby galaxies consists of 1816 galaxies.
Figure <ref> summarizes some of the basic parameters of the sample. Most of the galaxies are nearby (median D_L=52.2 Mpc; Fig. <ref>(a)), luminous (median M_r=-20.4 mag, corrected for Galactic extinction; Fig. <ref>(b)), massive (median M_⋆=10^10.2 M_⊙; Fig. <ref>(c)), and angularly large (median D_25=1.4 arcmin; Fig. <ref>(d)). The sample spans the full range of Hubble types in the nearby Universe (Fig. <ref>(e)), comprising 600 (33%) early-type galaxies (ellipticals, S0, or S0/a), and 1216 (67%) late-type galaxies (Sa–Irr). We acquired g-, r-, and z-band mosaic images and point spread functions (PSFs) from SGA. The Galactic extinction is corrected using the map of dust reddening <cit.>. We use the galaxy center, ellipticity, and position angle provided by the SGA catalog when removing foreground stars (as described in Sect. <ref>). We set R_25=D_25/2. We measured the sky background and noise using AutoProf[https://autoprof.readthedocs.io/] <cit.>. The sky background is then subtracted from the image.
§.§ Removal of foreground stars
Contamination from sources other than the target galaxy, such as foreground stars and projected close galaxies, should be removed or minimized prior to image simulation. This process was not done in some previous studies on simulating high-redshift galaxy images observed by HST (e.g., ; , but also see ), rendering their results less robust. We first masked out the contamination. For each galaxy image, the SGA provides a catalog of sources that are identified using The Tractor[https://github.com/dstndstn/tractor/], a forward-modeling approach to performing source extraction and model fitting to the sources. We masked out each identified star, using a mask size determined by the star-centered radius at which the r-band flux starts to fluctuate due to galaxy structure or background noise. We then masked out each identified projected nearby galaxy using an ellipse with semi-major axis of 1.5 R_25. Finally, we performed a visual inspection and manually masked out any residual stars or small background galaxies that were missed in the above process. We adopted the same mask for g-, r-, and z-band images.
We removed contamination through the following steps. The masked regions outside the galaxy (R>1.5 R_25) are set to zero. For small masked regions (area < 50 arcsec^2) inside the galaxy (R≤ 1.5 R_25), we estimated the intrinsic galaxy light affected by foreground stars and/or projected close galaxies using interpolation. For each masked region, we cut out a square region containing twice as many unmasked pixels as masked pixels. The interpolation was then performed by approximating the values of the unmasked pixels by a polynomial function. We used the interpolated values to fill in the small masked regions.
For large masked regions (area ≥ 50 arcsec^2) inside the galaxy, interpolation may fail to reproduce the intrinsic galaxy light, as the complex galaxy structures may cause overfitting and lead to catastrophic results. Instead, we replaced the masked region with values from their 180 rotationally symmetric pixels if the symmetric portion did not have a large masked region. Rotationally symmetric images were originally used to highlight spiral structure <cit.>. We caution that this approach may not restore the flux in prominent three-armed structures, which are 120 rotationally symmetric, but this effect is small since the fraction of three-armed structures is small <cit.>. In cases where the 180 rotationally symmetric portion also has a large masked region or the galaxy is highly inclined (i>70), we filled in the masked region with values from their mirror-symmetric pixels reflected over the galaxy major axis. In a few instances where neither of the above criteria is satisfied, we reverted to the interpolation method and used a low-order polynomial function to perform fitting and interpolation. Finally, we added Poisson noise to the cleaned regions to simulate real observations.
The above process makes use of galaxy symmetry, but it does not significantly affect the calculation of galaxy asymmetry described in Sect. <ref>, as the masked regions are small relative to the galaxy size. We had to sacrifice a small degree of precision in computing the galaxy asymmetry in order to generate cleaned images for image simulation.
The removal of foreground stars was done for each galaxy to generate its star-cleaned g-, r-, and z-band images. The effectiveness of the cleaning is illustrated in Fig. <ref>, showing the r-band images of two galaxies, ESO 121-026 (an Sbc galaxy) and ESO 251-004 (an elliptical galaxy), before and after undergoing the removal process. Our process successfully eliminates the vast majority of contamination surrounding the galaxies, revealing clearer and more accurate images of the galaxy structures. The median PSF FWHM of DECaLS images is ∼ 1.3, 1.2, and 1.1 arcsec in the g, r, and z bands, respectively <cit.>. To facilitate the pixel-by-pixel K correction described in Sect. <ref>, we matched the g-, r-, and z-band images to a common PSF for each galaxy. We did not use Fourier transformation to find a convolution kernel, because this would introduce significant high frequency noise, as the PSF FWHMs at different bands are very close. Instead, the matching was done by searching for a Moffat-function kernel, which was convolved with the PSF of smaller FWHM to get a broadened PSF that has nearly the same best-fit Moffat parameters as the PSF of larger FWHM. The star-cleaned g-, r-, and z-band images with a common PSF were used to compute artificial high-redshift galaxy images observed in JWST CEERS.
§ ARTIFICIALLY REDSHIFTING GALAXIES
The nearby DESI galaxy images are of fairly high quality, which allows for accurate determinations of morphological measurements. Compared to DESI images, JWST CEERS images of high-redshift galaxies have lower physical resolution and lower S/N, as the galaxies become less resolved and fainter at higher redshifts. The limited data quality may bias measurements and make them more uncertain. To understand the how well we can quantify galaxy morphology using the JWST NIRCam images, we simulated nearby DESI galaxies with respect to how they would appear at various high redshifts, as observed in JWST CEERS. Section <ref> presents the derivation of the formulae used in the redshifting procedure. Subsequently, Sections <ref> and <ref> respectively discuss the evolution of galaxy size and luminosity. The redshifting procedure is outlined in Sect. <ref>.
§.§ Formulae
We begin by assuming the existence of an extended source located at redshift, z_ local. Its luminosity distance and flux density we observed are denoted as D_L(z_ local) and f_ local, respectively. We adopted erg s^-1 cm^-2 Hz^-1 as the unit of f because we use AB magnitude. The energy emitted by the source in 1 second at a narrow range of wavelength from λ-Δλ/2 to λ+Δλ/2 is given by:
E_ local = 4π·c/λ^2· f_ local· D_L^2(z_ local)·Δλ,
where c is the speed of light. The source is manually moved to a higher redshift of z_ high with the luminosity distance of D_L(z_ high). The observed flux density is denoted as f_ high. The observed wavelength becomes λ^' = λ· (1+z_ high)/(1+z_ local) and the wavelength width becomes Δλ^' = Δλ· (1+z_ high)/(1+z_ local). The energy emitted by the source in 1 second at a narrow range of wavelength from λ^'-Δλ^'/2 to λ^'+Δλ^'/2 is:
E_ high = 4π·c/λ^' 2· f_ high· D_L^2(z_ high)·Δλ^'.
We assume that the source becomes intrinsically brighter at z_ high, with the energy emitted given by:
E_ high = (1+z_ high/1+z_ local)^α· E_ local.
Combining Eq. (<ref>), (<ref>), and (<ref>), we obtain:
f_ high = D^2_L(z_ local)/D^2_L(z_ high)·1+z_ high/1+z_ local·(1+z_ high/1+z_ local)^α· f_ local.
The first term on the right-hand side of the equation is caused by the distance effect and the second term is caused by the cosmological compression of the frequency (or, equivalently, the cosmological dilation of the wavelength). The second term occurs as we consider monochromatic luminosity, while it would disappear if we considered bolometric luminosity. The third term is caused by the assumed luminosity evolution. Equation (<ref>) is used to rescale the flux in the redshifting procedure (Sect. <ref>). Hence, we can relate the absolute magnitude of the source at local and high redshift using:
M_ high = - 2.5log( 1+z_ high/1+z_ local)^α + M_ local.
We assume the source has a physical radius of R_ local at z_ local and R_ high at z_ high. The solid angle it spans at z_ local and z_ high is:
Ω_ local=π R_ local^2/D^2_ ang(z_ local)
,
and
Ω_ high=π R_ high^2/D^2_ ang(z_ high),
respectively. D_ ang(z) is the angular-diameter distance at redshift z. We assume the physical size becomes smaller at higher redshift:
R_ high = (1+z_ high/1+z_ local)^β· R_ local.
Combining Eq. (<ref>), (<ref>), and (<ref>), we obtain:
Ω_ high = D^2_ ang(z_ local) /D^2_ ang(z_ high)·( 1+z_ high/1+z_ local)^2 β·Ω_ local
.
We denote the pixel scale as p and number of pixels occupied by the source as N. We therefore have:
N_ high/N_ local = D^2_ ang(z_ local) /D^2_ ang(z_ high)·( 1+z_ high/1+z_ local)^2 β·p^2_ local/p^2_ high,
which is the binning factor we use to downscale image size in the redshifting procedure (Sect. <ref>). The flux density per unit solid angle can be written as:
f_ high/Ω_ high = ( 1+z_ high/1+z_ local)^-3·(1+z_ high/1+z_ local)^α·( 1+z_ high/1+z_ local)^-2 β·f_ local/Ω_ local
.
Denoting the surface brightness as μ, we have:
μ_ high = μ_ local + 2.5log( 1+z_ high/1+z_ local)^3
-2.5 log( 1+z_ high/1+z_ local)^α
-2.5 log( 1+z_ high/1+z_ local)^-2 β.
If we adopt z_ local=0 and z_ high=z, we have:
μ_z = μ_0 + 2.5log( 1+z )^3 - 2.5 log( 1+z )^α-2β.
The term 2.5log( 1+z )^3 is the well-known cosmological dimming, while the term 2.5 log( 1+z )^α-2β describes the evolution of surface brightness. In the redshifting procedure, we assumed that the average flux density observed in a specific filter is consistent with the flux density at the effective wavelength of the filter, so that we can ignore the difference between the bandpass widths of the DESI and JWST filters.
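For concreteness, the two scaling factors that enter the redshifting procedure, the flux rescaling factor f_ high/f_ local and the pixel binning factor N_ high/N_ local derived above, can be evaluated with a short Python sketch such as the one below. The cosmology matches the parameters adopted in this work, whereas the redshifts, pixel scales, and the values of α and β in the example call are purely illustrative placeholders.

import numpy as np
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in this work: (Omega_M, Omega_Lambda, h) = (0.27, 0.73, 0.70).
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.27)

def flux_scale_factor(z_local, z_high, alpha):
    """Factor multiplying f_local to obtain f_high: distance dimming,
    wavelength dilation, and the assumed luminosity evolution (1+z)^alpha."""
    dl_ratio = float((cosmo.luminosity_distance(z_local) /
                      cosmo.luminosity_distance(z_high)) ** 2)
    dilation = (1.0 + z_high) / (1.0 + z_local)
    evolution = ((1.0 + z_high) / (1.0 + z_local)) ** alpha
    return dl_ratio * dilation * evolution

def binning_factor(z_local, z_high, beta, p_local, p_high):
    """Ratio N_high / N_local of pixels covered by the source, including
    the intrinsic size evolution R ~ (1+z)^beta and the pixel scales p."""
    da_ratio = float((cosmo.angular_diameter_distance(z_local) /
                      cosmo.angular_diameter_distance(z_high)) ** 2)
    size_evo = ((1.0 + z_high) / (1.0 + z_local)) ** (2.0 * beta)
    return da_ratio * size_evo * (p_local / p_high) ** 2

# Illustrative call: a z=0.01 galaxy moved to z=2 with beta=-0.75 (late type);
# the pixel scales are placeholders for the input and target values.
print(flux_scale_factor(0.01, 2.0, alpha=1.0))
print(binning_factor(0.01, 2.0, beta=-0.75, p_local=0.262, p_high=0.03))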
§.§ Size evolution
Galaxies at a fixed mass are physically smaller at higher redshift <cit.>. Using structural parameters derived from CANDELS imaging <cit.>, <cit.> found a significantly different rate (β) of average size evolution for early-type and late-type galaxies. The average effective radius of early-type galaxies, calculated over a range of stellar masses, evolves rapidly, following R_ eff∝(1+z)^β=-1.48, while that of late-type galaxies evolves moderately, following R_ eff∝(1+z)^β=-0.75. Therefore, when simulating high-redshift galaxies, it is important to properly account for intrinsic size evolution; otherwise, the simulated galaxies would be larger than the true galaxies of the same mass.
The evolution rate is dependent on stellar mass, with more massive galaxies showing a higher rate (more negative β), as reported in <cit.>. The dependence of β on mass is weak for late-type galaxies, but strong for early-type galaxies, and less massive early-type galaxies evolve at a similar rate to late-type galaxies of the same mass. We adopted the β values measured by <cit.>, as given in their Table 2. We focused on galaxies with a stellar mass of 10^9.75 –11.25 M_⊙, so that β is available for both late-type and early-type galaxies. To estimate β for our nearby DESI galaxies, we fit two polynomials to β as a function of stellar mass for late-type and early-type galaxies, respectively. These best-fit functions were used to compute β using stellar mass and galaxy type as input. The estimated β was used for generating artificially redshifted galaxy images (see Sect. <ref>). Additionally, galaxies tend to be larger at bluer wavelengths than at redder ones <cit.>. We performed a pixel-by-pixel K correction in Sect. <ref> to correct for this effect.
The rate of size evolution may differ when using different definitions of galaxy size. As shown by <cit.>, the outer radius of galaxies evolves at a faster rate than the inner radius, suggesting inside-out growth. While we do not consider the effect of inside-out growth in this study, we aim to compare our simulated images with real observations to investigate this effect in the future.
§.§ Monochromatic luminosity evolution
The intrinsic galaxy surface brightness has been found to brighten on average with increasing redshift <cit.>. By definition, the evolution of surface brightness is partly attributed to intrinsic size evolution and partly attributed to monochromatic luminosity evolution. While the intrinsic size evolution is discussed in Sect. <ref>, in
this section we study the monochromatic luminosity evolution by assuming a function form of L_λ∝ (1+z)^α, where L_λ is the monochromatic luminosity at rest-frame λ filter with the effect of cosmological dimming corrected.
To stay consistent with <cit.>, we used the rest-frame flux, magnitude, color index, and redshift from the 3D-HST catalog <cit.>. Following the strategy in <cit.>, we selected CANDELS galaxies with F160W apparent magnitude brighter than 25.5, a flag indicating good model fits, and stellar mass above the mass of completeness limit at each redshift range. We classified the selected CANDELS galaxies into early-type and late-type galaxies using the demarcation lines proposed by <cit.> in the diagram of U-V versus V-J color index. We separated the early-type and late-type galaxies into various bins of redshift (Δ z=0.5) and stellar mass (Δlog M_*=0.5 dex), as used in <cit.>. Then we calculated the median rest-frame absolute magnitude M_λ, where λ denotes U, V, B, R, I, or J filter. We fit the following function:
M_λ=M_λ,0 - 2.5 log(1+z)^α,
to the M_λ as a function of z for each bin of stellar mass viewed at each filter to determine α. The results are shown in Figs. <ref> and <ref>. We then fit two 2D polynomials to the measured α as a function of filter effective wavelength and stellar mass for early-type and late-type galaxies, respectively. Figures <ref> and <ref> show the best-fit 2D functions. The color bar encodes best-fit α for a given wavelength and mass. Both early-type and late-type galaxies get brighter on average at higher redshifts across all wavebands. The rate of evolution is more rapid in bluer wavebands than in redder ones, owing to more intense star formation in the past <cit.>. Interestingly, the low-mass late-type galaxies have particularly high α at blue wavelength. Our results are consistent with the evolution of the characteristic magnitude of the luminosity function, where the characteristic magnitude becomes brighter at higher redshifts and evolves faster in the UV than in the V band <cit.>. We used the rest-frame wavelength, galaxy type, and stellar mass as inputs to estimate α for our nearby DESI galaxies using the best-fit polynomials.
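As an illustration of this fitting step, the short sketch below fits the luminosity-evolution law M_λ = M_λ,0 - 2.5 log(1+z)^α to the median rest-frame magnitudes of a single (stellar mass, filter) bin using scipy. The input arrays are placeholders standing in for the binned 3D-HST medians, not actual measurements.

import numpy as np
from scipy.optimize import curve_fit

def mag_evolution(z, m0, alpha):
    """M_lambda(z) = M_lambda,0 - 2.5 * log10((1+z)^alpha)."""
    return m0 - 2.5 * alpha * np.log10(1.0 + z)

# Placeholder inputs: bin-center redshifts and median rest-frame absolute
# magnitudes of one (stellar mass, filter) bin.
z_bins = np.array([0.75, 1.25, 1.75, 2.25, 2.75])
med_mag = np.array([-21.1, -21.3, -21.6, -21.8, -22.0])

popt, _ = curve_fit(mag_evolution, z_bins, med_mag, p0=[-21.0, 1.0])
m0_fit, alpha_fit = popt
print(f"M_0 = {m0_fit:.2f}, alpha = {alpha_fit:.2f}")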
§.§ Redshifting procedure
Our redshifting procedure is based on the method of <cit.>, which we updated by incorporating the K correction, galaxy size evolution, and luminosity evolution. This approach ensures that the artificially redshifted galaxy images closely match the size and brightness of true galaxies viewed at high redshifts. We compute artificial images of galaxies at z=0.75, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, and 3.0, using the F115W filter for z=0.75 and 1, the F150W filter for z=1.25, 1.5, and 1.75, as well as the F200W filter for z=2.0 to 3.0. To determine the typical noise level for the simulated images, we use the science data, error map, and source mask from the CEERS Data Release Version 0.5 provided by <cit.>. In order to simulate background noise at each filter, we cut out a patch with a few sources from the CEERS science data, and then replaced these sources with background regions next to the sources to get a clean fragment of a real background image. We selected 80 galaxies, consisting of the top 20 brightest galaxies from each of the four pointings, to calculate a median ratio of galaxy flux variance to galaxy flux, used to estimate the noise from galaxy flux. This quantity depends on filter, exposure time, sky brightness, and system throughput, and varies only slightly from location to location in CEERS. We adopted a pixel scale of 0.03 arcsec/pixel, the same as in <cit.>. We generated two-time oversampling PSFs at the F115W, F150W, and F200W filters using WebbPSF <cit.>. The two-time oversampling PSFs were used because the F115W and F150W PSFs on the pixel scale of 0.03 arcsec/pixel were undersampled.
Our Python algorithm for generating artificially redshifted galaxy images is summarized in the following steps:
(1) We downscaled the size of g-, r-, and z-band DESI images so that the PSF FWHM occupies two pixels, while preserving the total flux. This is meant to reduce the computation time of step 2, while retaining Nyquist sampling;
(2) For each pixel with S/N ≥3, we performed a K correction using the python code developed by <cit.>[https://kcorrect.readthedocs.io/; version 5.0.0 is used.] to calculate the expected flux at the rest-frame wavelength through interpolation. Later, a median ratio of the rest-frame flux derived above to the DESI image flux was calculated and multiplied with the value of pixels with S/N <3 to obtain a K-corrected DESI image;
(3) We rescaled the flux using three factors (Eq. <ref>). The first one is a dimming factor, caused by the longer luminosity distance at higher redshift. The second one is a brightening factor, caused by the cosmological dilation of wavelength. The first two factors lead to the well-known cosmological dimming. The third factor comes from the monochromatic luminosity evolution, such that the monochromatic luminosity is scaled with (1+z)^α;
(4) We downscaled the image size, while preserving the total flux, to match half of the target pixel size and the galaxy angular size as it would appear at high redshift. In addition to the change of angular size due to the longer angular-diameter distance, we took into account the intrinsic galaxy size evolution, in which the physical size scales as (1+z)^β;
(5) We used photutils <cit.> to calculate, by Fourier transformation, a kernel that transforms the input PSF into the two-time oversampling JWST PSF. Since the input PSF is much smaller, no high-frequency noise occurs. The image was convolved with the kernel and then downscaled in size by 50% to match the target pixel scale of 0.03 arcsec/pixel, yielding a resolution-matched image;
(6) We calculated the variance map by multiplying the galaxy flux with the median ratio of variance to flux, generated noise using the map, and added the resulting flux noise to the image. Next, we overlaid the resolution-matched image on top of the clean real background image to produce the final simulated CEERS image.
We performed the six steps above for each galaxy to generate its simulated CEERS images at high redshifts. As outlined in step 2, we obtained rest-frame images by interpolating multi-band data at the pixel level, avoiding extrapolation to minimize the risk of introducing significant errors in output flux. To evaluate the uncertainty of this process, we performed a test by using 100 bootstrap resamplings for one galaxy, which resulted in 100 more simulated images. We found that the resulting uncertainty in the output flux is small, considerably smaller than the typical background noise and galaxy flux noise. Therefore, this source of uncertainty has negligible impact on our results <cit.>. Figure <ref> illustrates artificially redshifted galaxy images of three example galaxies. The first column shows the DESI r-band image, while the second to fourth columns illustrate the artificial JWST CEERS images at z=1, z=2, and z=3. The final column plots real images of three CEERS galaxies, each with stellar masses comparable to the galaxies in the same row. Although some small-scale structures are suppressed by noise or PSF smoothing effects, the galactic-scale structures such as strong bars and prominent spirals are still present. Overall, the galaxies remain visible across all redshifts, although they become noisy and blurry. Compared to the simulated CEERS images, real CEERS images of disks look clumpier. A detailed and rigorous comparison between them is crucial to study the evolution of structure in the future.
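To make the noise-injection step (step 6) explicit, a schematic Python sketch is given below. The function name and the numerical values in the example call are illustrative; in practice the variance-to-flux ratio is the median ratio measured from bright CEERS galaxies, and the background patch is a source-free cutout of the real CEERS mosaic.

import numpy as np

rng = np.random.default_rng(42)

def add_ceers_noise(resolution_matched, var_to_flux, background_patch):
    """Schematic version of step 6: draw flux noise from a variance map
    obtained by scaling the galaxy flux with the median variance-to-flux
    ratio, then place the noisy galaxy on a source-free CEERS background."""
    variance = np.clip(resolution_matched, 0.0, None) * var_to_flux
    flux_noise = rng.normal(0.0, np.sqrt(variance))
    return resolution_matched + flux_noise + background_patch

# Illustrative call with synthetic arrays standing in for the real inputs.
galaxy = np.full((64, 64), 0.05)
background = rng.normal(0.0, 0.01, size=(64, 64))
simulated = add_ceers_noise(galaxy, var_to_flux=0.002, background_patch=background)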
§ CORRECTION OF BIASES AND UNCERTAINTIES
In this section, we quantify the measurement biases and uncertainties for six commonly used morphological quantities: Petrosian radius, half-light radius, concentration, asymmetry, Sérsic index, and axis ratio. The high-quality, K-corrected DESI images enable us to accurately determine the intrinsic morphological measurements. We used the z=2.0 high-quality K-corrected DESI images as the training set to derive functions for correcting the biases and uncertainties arising from the resolution effects present in the measurements obtained from degraded images. Using K-corrected images at other redshifts yields nearly identical results, as the only difference lies in the slight variation in observed rest-frame wavelengths. We reduced their image resolution so that the intrinsic Petrosian radius is N times the PSF FWHM. We then performed measurements and compared the resulting values with their intrinsic counterparts. We adopted exponentially-growing values of N, which are 1.98, 3, 4.55, 6.89, 10.45, 15.83, and 24. The FWHM values for the F115W, F150W, and F200W PSFs are 0.037, 0.049, and 0.064 arcsec, respectively. These images are denoted as N-FWHM images. The typical CEERS noise predominantly impacts the computation of asymmetry. We used the z=3 simulated CEERS images, which are the noisiest in our dataset of simulated galaxies, to improve the method for removing noise contribution from the computation of asymmetry.
§.§ Galaxy size
We estimated the flux-weighted center and apparent projection parameters for each image, and measured the Petrosian radius (R_p) <cit.>, defined as the radius at which the surface brightness is 20% of the average surface brightness within R_p. We re-measured the center by minimizing galaxy asymmetry (see Section <ref>) and used it to re-measure R_p as our final measurement; R_p encompasses at least 99% of the light within a given galaxy <cit.>.
The half-light radius is another indicator of galaxy size used to study galaxy evolution <cit.>. By measuring the total flux within an elliptical aperture with a radius of 1.5 R_p, we derived the fraction of enclosed light as a function of radius, known as the curve of growth (cog). We then determined R_20, R_50^ cog, and R_80, the radii containing 20%, 50%, and 80% of the total galaxy light, respectively. In addition to the non-parametric approach, we fit a Sérsic model to the galaxy to obtain the half-light radius, denoted as R_50^ fit (see Sect. <ref>).
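A minimal sketch of these non-parametric size measurements is given below, using elliptical apertures from photutils. It is a simplified version of the actual procedure (it assumes a centered, contamination-free galaxy and uses a fixed set of trial radii), and the function and variable names are ours rather than those of any published package.

import numpy as np
from photutils.aperture import (EllipticalAperture, EllipticalAnnulus,
                                aperture_photometry)

def petrosian_radius(image, center, q, theta, eta=0.2):
    """Radius at which the surface brightness in a thin elliptical annulus
    drops to eta (=0.2) of the mean surface brightness within that radius;
    q is the axis ratio and theta the position angle in radians."""
    for r in np.arange(2.0, image.shape[0] / 2.0, 1.0):
        annulus = EllipticalAnnulus(center, a_in=0.9 * r, a_out=1.1 * r,
                                    b_out=1.1 * r * q, theta=theta)
        aperture = EllipticalAperture(center, a=r, b=r * q, theta=theta)
        mu_local = aperture_photometry(image, annulus)['aperture_sum'][0] / annulus.area
        mu_mean = aperture_photometry(image, aperture)['aperture_sum'][0] / aperture.area
        if mu_local < eta * mu_mean:
            return r
    return np.nan

def cog_radii(image, center, q, theta, r_p, fractions=(0.2, 0.5, 0.8)):
    """R_20, R_50, and R_80 from the curve of growth within 1.5 R_p."""
    radii = np.linspace(0.5, 1.5 * r_p, 100)
    flux = np.array([aperture_photometry(
        image, EllipticalAperture(center, a=r, b=r * q, theta=theta)
    )['aperture_sum'][0] for r in radii])
    return [np.interp(f * flux[-1], flux, radii) for f in fractions]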
We defined the level of image resolution as R_p, True/FWHM. We measured R_p, R_50^ cog, and R_50^ fit on the F200W N-FWHM images and plot the differences between them and their intrinsic values (R_p, True, R_50, True^ cog, and R_50, True^ fit) as a function of R_p, True/FWHM in the first row of Fig. <ref>. The mean value of (R_p - R_p, True)/FWHM, denoted by Δ_p, is 0.770, and that of (R_50^ cog - R_ 50, True^ cog)/FWHM, denoted by Δ^ cog_50, is 0.352. Thus, R_p and R_50^ cog are systematically slightly overestimated due to the lower resolution, and the biases should be corrected. Interestingly, the mean difference does not significantly depend on R_p, True/FWHM. The detected bias in measuring R_p is consistent with <cit.>, who show that the measured R_p before and after image blurring correlate well with a slope of ∼ 1 and with a systematic offset. In contrast, the mean value of (R_50^ fit - R_ 50, True^ fit)/FWHM, denoted by Δ^ fit_50, is very small (-0.084), indicating that the Sérsic fitting can extract the half-light radius without statistical bias, and no correction is needed.
The measurements of R_p and R_50^ cog are corrected for the biases using the functions:
R_p, / FWHM = R_p/ FWHM - Δ_p,
and
R^ cog_50, / FWHM = R^ cog_50/ FWHM - Δ_50,
respectively. In Table <ref>, we list Δ_p and Δ_50 for all three filters. The second row of Fig. <ref> displays the correlation between the bias-corrected sizes and the intrinsic sizes. No correction was done for R_50^ fit. The data points lie closely around the one-to-one relation, marked by a dashed line, indicating that no obvious residual biases exist.
Fractional uncertainty of the size measurement would be more meaningful than the absolute uncertainty, as the measured size is larger in larger galaxies. We calculated the fractional uncertainty as the standard deviation (σ) of the difference between measured and intrinsic values divided by the intrinsic values. We plot the fractional uncertainty as a function of R_p, True/FWHM in the third row of Fig. <ref>. The size measurement becomes more uncertain at lower resolutions. The following exponential functions, respectively, were fitted to the data:
δ_R_p/R_p = ϝ_1^p·exp( ϝ_2^p· x), where x = R_p, True/ FWHM,
δ_R^ cog_50/R^ cog_50 = ϝ_1^ 50, cog·exp( ϝ_2^ 50, cog· x), where x = R_p, True/ FWHM,
and
δ_R^ fit_50/R^ fit_50 = ϝ_1^50, fit·exp( ϝ_2^50, fit· x), where x = R_p, True/ FWHM.
The solid curve in each bottom panel marks the best-fit function. The best-fit parameters for the results based on F115W, F150W, and F200W PSF are listed in Table <ref>.
The functions used to fit the data, such as the above functions and those in the rest of this paper, are chosen so that they fit the data without obvious residuals and simultaneously meet our expectations regarding the y-axis values when R_p is sufficiently small or large. These functions are empirical. Our primary objective is not to develop a deep understanding of the underlying physics, but rather to describe and characterize the data itself. The empirical functions are used to create smooth curves that highlight trends in the data and are then used to correct biases and uncertainties, even if the curves themselves do not directly represent any physical mechanism.
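As an example of how these corrections are meant to be applied, the sketch below corrects a measured Petrosian radius and evaluates its fractional uncertainty using the functional forms above. The value Δ_p=0.770 is the F200W offset quoted in the text, whereas the exponential coefficients are placeholders; the calibrated values for each filter are those listed in the corresponding table.

import numpy as np

DELTA_P = 0.770            # mean (R_p - R_p,True)/FWHM for F200W (this section)
F1_P, F2_P = 0.05, -0.10   # placeholder coefficients of the exponential fit

def correct_petrosian(r_p, fwhm, delta_p=DELTA_P):
    """Bias-corrected Petrosian radius: R_p,cor = R_p - Delta_p * FWHM."""
    return r_p - delta_p * fwhm

def sigma_rp_fractional(r_p_true, fwhm, f1=F1_P, f2=F2_P):
    """Fractional uncertainty of R_p: f1 * exp(f2 * R_p,True/FWHM)."""
    return f1 * np.exp(f2 * r_p_true / fwhm)

r_p_cor = correct_petrosian(r_p=0.45, fwhm=0.064)            # arcsec, F200W
frac_err = sigma_rp_fractional(r_p_true=r_p_cor, fwhm=0.064)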
Although our primary focus is on studying the optical morphology of simulated galaxies at z≤3 observed with filters in the 1.15–2.0 μ m range, our data set can also shed light on the optical morphology of galaxies at higher redshifts when observed with redder filters, such as F277W, F356W, and F444W. We used a pixel scale of 0.06 arcsec, created PSFs using WebbPSF, and generated N-FWHM images to carry out the same analysis to comprehend biases and uncertainties involved in quantifying morphology observed through these redder filters. The FWHM values for the F277W, F356W, and F444W PSFs are 0.088, 0.114, and 0.140 arcsec, respectively. However, we did not perform a Sérsic fitting, as the parameters derived from the fitting exhibited no biases. The parameters for deriving the correction functions are shown in Table <ref>.
§.§ Concentration
The concentration (C) measures the degree to which a galaxy's light distribution is centrally concentrated. Following the definition in <cit.>, we compute C as
C=5·log_10(R_80/R_20).
Higher values indicate more centralized light distributions. We denote intrinsic concentration measured in the K-corrected DESI image as C_ True. In the first row of Fig. <ref>, we plot C measured from the N-FWHM images against C_ True. The first, second, third, and final columns display the results of image resolution levels R_p, True/ FWHM=1.98, 4.55, 10.45, and 24, respectively. The results show that C is systematically underestimated, with a more significant underestimation for galaxies with higher C_ True. This effect is more pronounced in angularly smaller galaxies with lower resolution (lower R_p, True/FWHM). The correlations between C and C_ True are nearly linear and, hence, we fit the data with a linear function:
C = k · C_ True + b
.
The best-fit parameters are listed at the top of each panel in the first row and are plotted as a function of R_p, True/FWHM in the bottom row. Toward lower resolutions, the best-fit slope k decreases, while the best-fit intercept b increases. The underestimation of measured C is attributed to the PSF smoothing effect, which overestimates R_20 more than R_80. Our findings are consistent with <cit.> and <cit.>, who showed that C values are more underestimated at higher redshift, where galaxies are smaller and resolutions are lower. Nevertheless, the literature still lacks an accurate correction function to address the issue, which will be further explored in the following discussion.
We use the best-fit straight lines to correct for the bias and uncertainty in measuring C. The correction function is given by:
C_ cor = (C - b)/k.
The correlations between C_ cor and C_ True are shown in the middle row of Fig. <ref>. The mean value (Δ_C) and standard deviation (σ_C) of the difference between C_ cor and C_ True are presented at the top of each panel in the middle row. Also, Δ_C is zero at all examined image resolution levels. σ_C serves as the statistical measurement uncertainty, which increases with lower resolution. The large uncertainty at low resolution stems from the flattened relationship between C and C_ True or, in other words, the low k value. σ_C as a function of R_p, True/FWHM is plotted in the bottom row.
In practical applications, we observe galaxies of various sizes, which necessitates the development of functions to derive the k and b for a given resolution level. To achieve this goal, we fit k versus R_p, True/FWHM using the following function:
k = -2·(x+1)/1+(x+1)^g + 1, where x = R_p, True/ FWHM.
This function converges to 1 when R_p, True is sufficiently large. We fit b versus R_p, True/FWHM using the following function:
b = ξ_1^ b·(x+ξ_2^ b)^ ξ_3^ b, where x = R_p, True/ FWHM.
This function converges to 0 when R_p, True is sufficiently large.
Therefore, for a given R_p, True, the correction function can be derived using Eq. (<ref>), (<ref>), and (<ref>).
To estimate the statistical uncertainty for a given resolution level, we fit σ_C versus R_p, True/FWHM using the following function:
σ_C = η_1^C · x^ η_2^C, where x = R_p, True/ FWHM.
The best-fit functions are plotted as solid curves. The best-fit parameters g, ξ_1^ b, ξ_2^ b, ξ_3^ b, η_1^C, and η_2^C for the results based on F115W, F150W, and F200W PSF are listed in Table <ref>. Those based on F277W, F356W, and F444W are provided in Table <ref>.
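A short sketch of the concentration correction is given below: it evaluates k and b from the fitted functional forms at a given resolution level x = R_p, True/FWHM and applies C_cor = (C - b)/k. The parameter values in the example call are illustrative placeholders, not the calibrated values listed in the tables.

import numpy as np

def concentration(r20, r80):
    """C = 5 * log10(R80 / R20)."""
    return 5.0 * np.log10(r80 / r20)

def slope_k(x, g):
    """Best-fit slope as a function of x = R_p,True/FWHM; tends to 1 for large x."""
    return -2.0 * (x + 1.0) / (1.0 + (x + 1.0) ** g) + 1.0

def intercept_b(x, xi1, xi2, xi3):
    """Best-fit intercept as a function of x; tends to 0 for large x."""
    return xi1 * (x + xi2) ** xi3

def corrected_concentration(c_meas, x, g, xi1, xi2, xi3):
    """Apply C_cor = (C - b) / k at resolution level x."""
    return (c_meas - intercept_b(x, xi1, xi2, xi3)) / slope_k(x, g)

# Illustrative placeholder parameters (not the calibrated table values).
c_cor = corrected_concentration(c_meas=3.1, x=4.55,
                                g=2.0, xi1=2.0, xi2=1.0, xi3=-1.0)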
§.§ Asymmetry
§.§.§ Definition
The asymmetry (A) measures the degree to which a galaxy's light distribution is 180rotationally symmetric. Originally, A is determined by rotating the galaxy image by 180 about the galaxy center, set as the pixel location of the maximum value, and subtracting the rotated image from the galaxy image to obtain a difference image. Then, A is calculated as 0.5 times the ratio of the summation of the absolute pixel values in the difference image to the summation of the absolute pixel values in the original image <cit.>. The background asymmetry is calculated in the same vein for a portion of sky and subtracted. To solve the problem of substantial variation in A due to uncertainty in the center determination, the algorithm is improved by including a process for searching for a new galaxy center to minimize the asymmetry; background asymmetry is also minimized in the same vein and then subtracted <cit.>. In addition, the factor of 0.5 in the original definition is dropped. This is the most conventional method to calculate galaxy asymmetry. The calculation is given by:
A_ C00 = min(∑ |I_0-I_180|)/∑ |I_0| - min(∑ |B_0-B_180|)/∑ |I_0|
= A_ noisy - A_ bkg, min,
where I represents the galaxy image, and B represents the sky background. The subscript of A_ C00 marks <cit.>. The summation is done over all pixels within a 1.5 R_p elliptical aperture, which has the apparent projection parameters determined in Sect. <ref>, centered on the galaxy.
Although the minimization of background asymmetry was proposed two decades ago, some studies still stick to the background asymmetry without minimization, calculated as follows:
A_ bkg, no min = ∑ |B_0-B_180|/∑ |I_0|.
For example, <cit.> developed the PYTHON package statmorph to quantify galaxy morphology and calculate background asymmetry without minimization within a sky box located beside the galaxy segmentation. Similarly, <cit.> constructed a 10-pixel by 10-pixel grid over the image area outside the galaxy segmentation to perform the calculation.
However, it has already been shown that A_ bkg, min overestimates the contribution of noise to the calculation of galaxy asymmetry <cit.>, and A_ bkg, no min overestimates the noise contribution even more severely. <cit.> showed that asymmetry values of galaxies in the field of shallower observations are systematically lower than those of the same galaxies in the field of deeper observations. These authors proposed measuring the distribution of noise asymmetries in randomly selected regions surrounding the target galaxy and calculating the 15% probability low-end tail as the final background asymmetry measurement. Although this method can remove the bias on average, significant scatter still persists. <cit.> studied asymmetries of galaxies before and after making them noisy and proposed a new noise correction, which is given by:
A = min(∑|I_0-I_180|) - F_2·min(∑|B_0-B_180|) /∑|I_0| - F_1·∑|B_0|,
where
F_1 = N_I_0 < f_1·σ_ bkg/N_ all
,
and
F_2 = N_|I_0-I_180| < f_2·σ_ bkg/N_ all,
where N_ all represents the number of pixels encloses by the 1.5 R_p elliptical aperture. N_I_0 < f_1·σ_ bkg is the number of pixels dominated by noise in the galaxy image, selected as those with values less than f_1·σ_ bkg; N_|I_0-I_180| < f_2·σ_ bkg is the number of pixels dominated by noise in the difference image, selected as those with values less than f_2·σ_ bkg. This correction is based on the fact that only noisy pixels are affected. <cit.> suggested using f_1=1 and f_2=√(2), which, however, are not the optimal choices, as discussed below.
§.§.§ Improved noise correction
To understand the noise contribution, we measured the asymmetry for the resolution-matched images (simulated noise not added yet), which are obtained from step 5 of the redshifting procedure and have such high S/N that the noise contribution is almost negligible. We denote the result as A_ noise-free. We note that A_ noise-free is still biased due to resolution effects; this bias will be addressed later. We then measured the asymmetry for the simulated CEERS images using statmorph, in which no minimization of the background asymmetry was done, and we denote the result as A_ noisy-A_ bkg, no min. We performed the measurement with minimization using Eq. (<ref>) and denote the result as A_ noisy-A_ bkg, min. We performed the measurement using Eq. (<ref>), adopting f_1=1 and f_2=√(2), and denote the result as A_ WZ16. We plot these as a function of A_ noise-free in Fig. <ref>.
In a perfect noise correction, A_ noise-free should be accurately reproduced. We first confirm in panel (a) that A_ bkg, no min severely overestimates the noise contribution, leading to an underestimation of asymmetry and even physically meaningless negative values. As shown in panel (b), A_ bkg, min works better than A_ bkg, no min, but the overestimation of the noise contribution is still significant, especially at higher A_ noise-free values. The mean difference between A_ noisy-A_ bkg, min and A_ noise-free is -0.032. This correlation is sublinear. Panel (c) shows that A_ WZ16 underestimates the noise contribution by 0.037, while the main improvement is that the correlation between A_ WZ16 and A_ noise-free is brought close to linear.
The problem related to the underestimation of noise contribution in using Eq. (<ref>) may be solved if a larger fraction of pixels are defined as noisy pixels. We searched for an optimal solution by testing different values of f_1 and f_2. We calculated the asymmetry using Eq. (<ref>) by adopting f_1=0.9, 0.95, 1.0, ..., 3.0 and f_2=0.9, 0.95, 1.0, ..., 3.0. For each pair of f_1 and f_2, we calculated the Pearson correlation coefficient, slope, and scatter between the resulting asymmetry and A_ noise-free. The coefficient, |slope-1|, and scatter as a function of f_1 and f_2 are plotted in Fig. <ref>.
The results show that the coefficient and scatter reach their maximum and minimum values at f_2≈2.3, while the slope reaches 1 at f_2≈1.7. The values suggested by <cit.> are marked by a red triangle, which would yield a nearly linear but dispersive relation. Constructing the tightest relation will make the relation sublinear. It is thus not feasible to use Eq. (<ref>) to obtain corrected asymmetry that has a relationship with A_ noise-free that is very tight and linear simultaneously. As a compromise, we derive the optimal f_1 and f_2 by requiring that the slope is greater than 0.9 and less than 1.1, and the scatter achieves the minimum value, resulting in f_1=2.25 and f_2=2.1. Minor changes in the criterion do not significantly impact our results. The optimal choice is marked by a black circle in Fig. <ref>. These values are used to derive noise-removed asymmetry, denoted as A_ Improve, which is plotted against A_ noise-free in panel (d) of Fig. <ref>. It is shown that the correlation between A_ Improve and A_ noise-free is tight (ρ=0.93), almost linear (slope =0.9), and has a small residual bias (Δ=0.011) and small scatter (σ=0.025). The scatter is also smaller than the result obtained by the noise correction proposed by <cit.>, which yields 0.044 (see their Fig. 15). Our improved noise correction exhibits enhanced performance in correcting for noise effects and reproducing the noise-free asymmetry. Nevertheless, we would like to point out that there is still a minor underestimation of noise contribution present, that is, an overestimation of asymmetry, when A_ noise-free≲ 0.1.
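For reference, the sketch below implements the noise-corrected asymmetry with the optimal thresholds f_1=2.25 and f_2=2.1. It is a simplified version of the full calculation: the input images are assumed to be already shifted to the asymmetry-minimizing center, and the minimization over centers and over background regions is omitted.

import numpy as np

def asymmetry_noise_corrected(img, bkg, sigma_bkg, mask, f1=2.25, f2=2.1):
    """Noise-corrected asymmetry with the optimal thresholds f1 and f2.
    img and bkg are galaxy and background cutouts already shifted to the
    asymmetry-minimizing center; mask selects pixels inside the 1.5 R_p
    elliptical aperture. The minimization over centers is omitted here."""
    i0 = img[mask]
    d_img = (img - np.rot90(img, 2))[mask]
    d_bkg = (bkg - np.rot90(bkg, 2))[mask]
    n_all = i0.size
    frac1 = np.sum(i0 < f1 * sigma_bkg) / n_all              # fraction of noisy pixels in the galaxy image
    frac2 = np.sum(np.abs(d_img) < f2 * sigma_bkg) / n_all   # fraction of noisy pixels in the difference image
    numerator = np.sum(np.abs(d_img)) - frac2 * np.sum(np.abs(d_bkg))
    denominator = np.sum(np.abs(i0)) - frac1 * np.sum(np.abs(bkg[mask]))
    return numerator / denominator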
§.§.§ Correction of resolution effects
In addition to the noise contribution, the resolution effect also plays a significant role in affecting asymmetry measurements. This effect becomes particularly crucial when studying high-redshift galaxies, which are less spatially resolved. Focusing on changes in the apparent galaxy size relative to a fixed pixel size, <cit.> demonstrated that measured galaxy asymmetry is increasingly reduced, as the apparent size of a galaxy decreases. We measured galaxy asymmetries using re-binned images without PSF convolution, and found that the binning effect is negligible compared to the PSF effect. It has also been found that galaxy asymmetries are underestimated at higher redshifts <cit.>; however, this should be partly attributed to the overestimation of noise contribution using the conventional method, as discussed in Sect. <ref>, and partly to resolution effects. An accurate correction method for biases caused by resolution effects has not been developed yet.
To account for resolution effects, we first measured intrinsic asymmetries from the high-quality, K-corrected DESI images and denote them as A_ True. We then measured A from the N-FWHM images and present A as a function of A_ True in the top row of Fig. <ref>. The first, second, third, and final columns display results obtained at the image resolution levels R_p, True/ FWHM=1.98, 4.55, 10.45, and 24, respectively. The results show that when galaxies are the most intrinsically symmetric (A_ True≲0.05) and are at low resolution (R_p, True/ FWHM≲4.55), the measured A slightly overestimates A_ True. This overestimation is attributed to the asymmetry of the JWST PSF. Figure <ref> displays the two-time oversampling F200W PSF on the left and the difference between the PSF and its 180-rotational image on the right. The PSF asymmetry causes the convolution to make intrinsically symmetric galaxies appear asymmetric. In contrast, when galaxies are intrinsically asymmetric (A_ True≳ 0.05), the measured A values are underestimated, with a greater extent at higher A_ True. This underestimation is due to PSF convolution smoothing out asymmetric structures, particularly the most asymmetric ones. The smoothing effect is more efficient than the effect caused by PSF asymmetry in asymmetric galaxies.
The A_ True–A relations are nonlinear, especially at low resolutions. We fit the data at each resolution level with the following function:
A = A_ True· (A_ True/D)^T + A_0,
where D and T control the flatness of the function and A_0 is the y-intercept representing the asymmetry of a fully symmetric galaxy after convolving a asymmetric PSF. The best-fit functions are plotted as black solid curves. We reverse Eq. (<ref>) to estimate the intrinsic value for a given measured A value. However, this operation is only available for A≥ A_0. To estimate the intrinsic value for A< A_0, we replace A with 2A-A_0, which is the symmetric value of A with respect to y=A_0, to do the estimation. The correction function is as follows:
A_ cor = D·(Δ A/D)^1/1+T, where Δ A =
A - A_0, if A ≥ A_0
2A_0 - A, if A< A_0
.
The second row in Fig. <ref> plots the correlations between A_ cor and A_ True. The mean difference (Δ_A) and scatter (σ_A) between them are presented at the top of each panel.
At high resolution, Δ_A is small (|Δ_A| < 0.01). At low resolution, however, a non-negligible Δ_A value (∼ 0.04) is observed, indicating that a small residual bias remains when the resolution is sufficiently low. Meanwhile, σ_A is significant (∼ 0.1) at R_p, True/ FWHM<4.55, which stems from the flatness of the A_ True–A relation.
To obtain the correction function for a given resolution level, we study the parameters T, D, and A_0 as a function of R_p, True/FWHM in the bottom row of Fig. <ref>. We fit the dependence of T, D, and A_0 on R_p, True/FWHM using the functions:
T = ξ_1^T·(x+ξ_2^T)^ ξ_3^T, where x = R_p, True/ FWHM,
D = τ_1 ·exp(τ_2· x)+τ_3 ·ln (x+τ_4), where x = R_p, True/ FWHM,
and
A_0 = ξ_1^A_0·(x+ξ_2^A_0)^ ξ_3^A_0, where x = R_p, True/ FWHM.
The best-fit functions are marked with black solid curves. The correction function Eq. (<ref>) can therefore be derived for a given galaxy size through Eq. (<ref>)–(<ref>). To estimate the statistical uncertainty for a given resolution level, we plot σ_A against R_p, True/FWHM in Fig. <ref> and fit them using the following function:
σ_A = η_1^A · x^ η_2^A, where x = R_p, True/ FWHM.
The best-fit function is plotted as a black solid curve. The best-fit parameters ξ_1^T, ξ_2^T, ξ_3^T, τ_1, τ_2, τ_3, τ_4, ξ_1^A_0, ξ_2^A_0, ξ_3^A_0, η_1^A, and η_2^A for the results based on F115W, F150W, and F200W PSF are listed in Table <ref>. Those based on F277W, F356W, and F444W are shown in Table <ref>.
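For illustration, the functional forms above can be evaluated for a given resolution level as in the sketch below; the dictionary keys are our own labels for the best-fit coefficients listed in the tables, and no published values are hard-coded.

    import numpy as np

    def correction_parameters(x, p):
        # x = R_p,True / FWHM; p holds the best-fit coefficients for the chosen filter
        T = p["xi1_T"] * (x + p["xi2_T"]) ** p["xi3_T"]
        D = p["tau1"] * np.exp(p["tau2"] * x) + p["tau3"] * np.log(x + p["tau4"])
        A0 = p["xi1_A0"] * (x + p["xi2_A0"]) ** p["xi3_A0"]
        sigma_A = p["eta1_A"] * x ** p["eta2_A"]
        return T, D, A0, sigma_A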
§.§ Sérsic index and axis ratio
We use IMFIT <cit.> to carry out two-dimensional (2D) fits of a single Sérsic model <cit.> to the images. This allows us to determine the half-light radius (R_50^ fit), Sérsic index (n), and axis ratio (q). We use twice-oversampled PSFs in our analysis. IMFIT finds the optimal model by adjusting the 2D function parameters through nonlinear minimization of the total χ^2. The Levenberg-Marquardt algorithm is used for the χ^2 minimization. We set the lower and upper bounds of n for the fitting to 0.5 and 6, respectively. The bias and uncertainty in measuring R_50^ fit have been discussed in Sect. <ref>.
We denote the intrinsic n and q values measured from the high-quality, K-corrected DESI images as n_ True and q_ True, respectively. We plot the difference between n measured from the N-FWHM image and n_ True, that is n-n_ True, as a function of R_p, True/FWHM in the first row of Fig. <ref>. The mean difference, Δ_n=-0.11, indicates that the impact of resolution degradation is small, probably because IMFIT already accounts for PSF effects. As we go on to show in Sect. <ref>, there is no obvious difference on average between intrinsic values and those measured on the simulated CEERS images. Our findings are in line with previous studies based on model galaxies (<cit.>; but see <cit.>). We thus refrain from developing a correction function for n. The second row plots the profile of measurement uncertainty σ_n, which we fit with the function:
σ_n = η_1^n · x^ η_2^n, where x = R_p, True/ FWHM.
The black curve represents the best-fit function. The measurement of n becomes more uncertain with lower resolutions.
Similarly, we calculate q-q_ True and plot it as a function of R_p, True/FWHM in the third row of Fig. <ref>. The mean difference is very small (Δ_q=-0.005), indicating little or no measurement bias. The bottom panel presents the profile of statistical uncertainty σ_q, which we fit with the function:
σ_q = η_1^q · x^ η_2^q, where x = R_p, True/ FWHM.
The black curve marks the best-fit function. The measurement of q becomes more uncertain at lower resolutions, yet these uncertainties remain negligible when compared to the broad dynamical range of q. The best-fit parameters η_1^n, η_2^n, η_1^q, and η_2^q for the results based on F115W, F150W, and F200W PSFs are provided in Table <ref>.
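The power-law forms used for σ_n and σ_q (and σ_A above) can be fitted with a standard least-squares routine, as sketched below; the arrays of measured scatters and the initial guesses are assumed to be supplied by the user.

    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(x, eta1, eta2):
        return eta1 * x ** eta2

    # x: R_p,True/FWHM of the degraded images; sigma_n, sigma_q: measured scatters
    # (eta1_n, eta2_n), _ = curve_fit(power_law, x, sigma_n, p0=(1.0, -1.0))
    # (eta1_q, eta2_q), _ = curve_fit(power_law, x, sigma_q, p0=(0.1, -1.0))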
§ APPLICATION TO SIMULATED CEERS IMAGES
In this section, we describe how we apply the correction functions to the morphological quantities measured from the simulated CEERS images to understand the effectiveness of these functions. We start by measuring R_p, R^ cog_50, R^ fit_50, C, A_ C00, n, and q on the simulated CEERS images. The corrections are summarized as follows. We correct size measurements using Eq. (<ref>) and (<ref>) to obtain R_p, cor and R^ cog_50, cor, respectively. The resolution level is calculated as R_p, cor/ FWHM. We obtain the parameters k and b through Eq. (<ref>) and Eq. (<ref>), respectively, and then calculate the bias-corrected concentration (C_ cor) using the correction function Eq. (<ref>). We calculate the noise-removed asymmetry using Eq. (<ref>), with f_1=2.25 and f_2=2.1, obtain the parameters T, D, and A_0 through Eq. (<ref>), (<ref>), and (<ref>), respectively, and, finally, we calculate the bias-corrected asymmetry (A_ cor) using the correction function Eq. (<ref>). There is no correction for the parameters (R^ fit_50, n, and q) measured through Sérsic fitting using IMFIT. The statistical uncertainties are computed using the uncertainty function for each morphological measurement, but the derived uncertainties are not plotted in the figures of the following discussion for the sake of clarity.
In Fig. <ref>, we plot R_p and R_p, cor as a function of R_p, True in the top two rows, R^ cog_50 and R^ cog_50, cor as a function of R^ cog_50, True in the middle two rows, and R^ fit_50 as a function of R^ fit_50, True in the bottom row. Results for z=0.75, 1.5, 2.25, and 3.0 are presented in columns one through four, respectively, and the difference and scatter between y-axis and x-axis values are indicated at the top of each panel (Figs. <ref> and <ref> follow the same strategy). Values of R_p and R^ cog_50 slightly overestimate their intrinsic values by 0.06 arcsec and 0.03 arcsec, respectively. After bias correction, these overestimations are reduced to 0.01 arcsec, which is negligibly small. The correlations between R^ fit_50 and R^ fit_50, True exhibit a very small average offset, but they display a slightly larger scatter compared with the non-parametric measures of galaxy sizes.
The top two rows of Fig. <ref> present the plots of C and C_ cor as a function of C_ True. CEERS galaxies with higher C_ True tend to have their measured C underestimated, and since the galaxies are angularly smaller at higher redshifts, the underestimation becomes more severe. The scatter is especially large at high redshifts. As early-type galaxies are more centrally concentrated than late-type galaxies, the C values of early-type galaxies will be more underestimated without correction. After bias correction, C_ cor correlates well with C_ True, exhibiting a small average offset of approximately -0.02 and an average scatter of around 0.22.
The bottom two rows of Fig. <ref> present the plots of A_ C00 and A_ cor as a function of A_ True. Our results show that A_ C00 underestimates A_ True on average, especially at high A_ True values or at high redshifts, due to the overestimation of the noise contribution and the effects of PSF smoothing. Since late-type galaxies have higher intrinsic asymmetry than early-type galaxies <cit.>, the measured asymmetry for late-type galaxies (if not corrected) will be more severely underestimated compared to that of early-type galaxies. Our findings suggest that employing the conventional algorithm for calculating galaxy asymmetry, as proposed in <cit.>, is inadequate for measuring the intrinsic asymmetry of high-redshift galaxies observed in CEERS. In a recent study, <cit.> visually classified the morphological type of 850 galaxies at z>3 observed in JWST CEERS and used Statmorph to perform morphological measurements. They used images from the filter corresponding to the rest-frame optical emission at the redshift of the galaxy. Their results reveal that the Concentration–Asymmetry diagram does not clearly differentiate between morphological types, although peculiar galaxies exhibit slightly higher asymmetry, on average. Our findings provide a possible explanation for the results in <cit.>: the CEERS PSF and noise introduce significant bias and uncertainty to the measurements of concentration and asymmetry, causing the resulting concentration–asymmetry diagram to be degenerate across galaxy types.
The A_ True–A_ cor relations demonstrate that our improved method can accurately reproduce the intrinsic galaxy asymmetry when z≲1.5. However, when z ≳ 1.5, the relations become increasingly dispersive with higher redshifts. We divide the simulated CEERS galaxies into two groups: angularly small galaxies with R_p, True/ FWHM<5, marked with red circles, and angularly large galaxies with R_p, cor/ FWHM≥ 5, marked with blue crosses. The results show that, for angularly large galaxies, A_ cor reproduces A_ True well with a mean difference of 0.01 across all redshifts, and a scatter increasing from 0.03 at z=0.75 to 0.06 at z=3.0. In contrast, for angularly small galaxies, A_ cor still overestimates A_ True, with a mean difference of approximately 0.1 across all redshifts and with scatter increasing from 0.06 at z=0.75 to 0.09 at z=3.0. The latter is caused by the incomplete removal of the noise contribution for the most symmetric galaxies, even when using our improved noise correction (Fig. <ref>), and by the flatness of the correction function (Fig. <ref>). Therefore, our asymmetry correction function is only effective for angularly large galaxies (R_p, cor/ FWHM≥ 5) observed in JWST CEERS.
In Fig. <ref>, we plot n against n_ True in the top row and q against q_ True in the bottom row. The average difference between n and n_ True is small, approximately -0.03, with an average scatter of around 0.5. This suggests that the Sérsic fitting can extract the index without significant bias for galaxies observed in JWST CEERS, but the statistical uncertainty should be appropriately considered. The q–q_ True relations are quite tight with negligible offset and scatter, indicating that the Sérsic fitting can robustly extract the axis ratio.
§ SUMMARY AND CONCLUSIONS
Early JWST studies have shown that galaxies with established disk and spheroidal morphologies span the full redshift range, revealing that the Hubble sequence was already in place in the early universe <cit.>. However, potential biases and uncertainties in characterizing the morphologies of these high-redshift galaxies may significantly impact the results. To address this issue, we defined a sample of nearby galaxies based on the DESI survey, consisting of 1816 galaxies with stellar masses ranging from 10^9.75 M_⊙ to 10^11.25 M_⊙. High-quality DESI images of these nearby galaxies allow for an accurate determination of the galaxy morphology. We removed the contamination from foreground stars or background galaxies in the DESI images and used the resulting cleaned images of the g, r, and z bands as input to compute artificial images of galaxies located at 0.75≤ z≤ 3 and observed at rest-frame optical wavelengths in CEERS. We used the F115W filter for z=0.75 and 1, the F150W filter for z=1.25, 1.5, and 1.75, and the F200W filter for z=2.0 to 3.0. In the artificial redshifting process, we take into account angular size changes due to distance and intrinsic evolution, flux changes due to cosmological dimming and intrinsic evolution, spectral changes, and changes in resolution and noise level. The rest-frame flux is obtained by pixel-by-pixel K correction. The simulated images are binned onto the pixel scale of 0.03 arcsec/pixel and convolved to match the JWST PSF. A patch of real CEERS background and Poisson noise from the galaxy light are then added.
We focus on quantifying the biases and uncertainties for six widely used morphological quantities: Petrosian radius (R_p), half-light radius (R_50), asymmetry (A), concentration (C), axis ratio (q), and Sérsic index (n). The resolution of CEERS images emerges as the primary factor influencing these measurements, while the typical CEERS noise predominantly impacts the computation of A. We improve the method for removing the contribution of noise from the computation of A. To correct for the resolution effects, we reduce the DESI image resolution, conduct each measurement, and compare these re-measured values with their intrinsic values to understand resolution effects and to derive formulae as a function of resolution level, defined as R_p/ FWHM, for correcting the biases and uncertainties. Finally, we apply the correction functions to our artificially redshifted CEERS images to validate the methods. Our main results are as follows.
* R_p and R_50, measured using non-parametric approaches, are slightly overestimated due to PSF smoothing, and this overestimation does not significantly depend on the resolution level. We derived Eq. (<ref>) and Eq. (<ref>) to correct these biases. In comparison, R_50 measured using 2D image fitting proves to be unbiased. The functions of statistical uncertainties for these three parameters are provided.
* C is underestimated due to PSF smoothing, with the effect being more pronounced for higher C values or lower resolutions. The bias can be corrected using correction function of Eq. (<ref>) and the statistical uncertainty is given by Eq. (<ref>).
* By incorporating a more accurate noise removal procedure, we improved the computation of A over existing methods, which often overestimate or underestimate the noise contribution, or introduce significant scatter. We show that A of the most intrinsically symmetric galaxies is overestimated due to the PSF asymmetry. In contrast, A of intrinsically asymmetric galaxies is underestimated owing to PSF smoothing, particularly for large A values and at lower resolutions.
For angularly large CEERS galaxies where R_p/ FWHM≥ 5, the biases can be robustly corrected using Eq. (<ref>). However, for smaller galaxies, these biases cannot be completely removed. When studying asymmetry, the statistical uncertainty given by Eq. (<ref>) should be taken into account.
* The measurements of n and q through 2D image fitting have negligible biases. The statistical uncertainty for axis ratio is negligible, whereas the uncertainty for the Sérsic index is more significant and should be properly considered.
* Although our primary focus is on studying the optical morphology of simulated galaxies at z≤3 observed with F115W, F150W, and F200W filters, we also provide the correction for F277W, F356W, and F444W filters, which may be useful for studying optical morphology of galaxies at higher redshifts. The parameters of the correction functions vary across different filters and are provided in Tables <ref> and <ref>.
These tests establish a solid foundation for future quantitative statistical studies aimed at comprehending the cosmological evolution of galaxy morphology. The dataset of artificially redshifted images also holds significant value for additional studies, such as those focused on spiral arms and bars.
We are grateful to the anonymous referee for their invaluable feedback and insightful comments that greatly improved the quality of this paper.
SYY acknowledges the support by the Alexander von Humboldt Foundation.
SYY thank Luis C. Ho for inspiring him to pursue research on high-redshift galaxies.
We thank the fruitful discussion with John Moustakas. We thank Song Huang for creating the Slack workspace that enabled SYY to find collaboration with CC and FS.
This research made use of Photutils, an Astropy package for
detection and photometry of astronomical sources <cit.>.
We acknowledge the usage of the HyperLeda database (http://leda.univ-lyon1.fr).
This work is based on observations taken by the 3D-HST Treasury Program (GO 12177 and 12328) with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID #2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. ID #2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID #2016A-0453; PI: Arjun Dey). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF’s NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. Pipeline processing and analyses of the data were supported by NOIRLab and the Lawrence Berkeley National Laboratory (LBNL). The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation.
NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy.
This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l’Espai (IEEC/CSIC), the Institut de Fisica d’Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, NSF’s NOIRLab, the University of Nottingham, the Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University.
BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program “The Emergence of Cosmological Structures” Grant # XDB09000000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant # 114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant # 12120101003, # 11433005).
The Legacy Survey team makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration.
The Legacy Surveys imaging of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH1123, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO.
The Siena Galaxy Atlas was made possible by funding support from the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0020086 and from the National Science Foundation under grant AST-1616414.
97
natexlab#1#1
[Abraham et al.(1996)Abraham, van den Bergh, Glazebrook,
Ellis, Santiago, Surma, & Griffiths]Abraham1996
Abraham, R. G., van den Bergh, S., Glazebrook, K., et al. 1996, ,
107, 1
[Arnouts et al.(2005)Arnouts, Schiminovich, Ilbert,
Tresse, Milliard, Treyer, Bardelli, Budavari, Wyder, Zucca, Le
Fèvre, Martin, Vettolani, Adami, Arnaboldi, Barlow, Bianchi,
Bolzonella, Bottini, Byun, Cappi, Charlot, Contini, Donas,
Forster, Foucaud, Franzetti, Friedman, Garilli, Gavignaud,
Guzzo, Heckman, Hoopes, Iovino, Jelinsky, Le Brun, Lee,
Maccagni, Madore, Malina, Marano, Marinoni, McCracken, Mazure,
Meneux, Merighi, Morrissey, Neff, Paltani, Pellò, Picat,
Pollo, Pozzetti, Radovich, Rich, Scaramella, Scodeggio,
Seibert, Siegmund, Small, Szalay, Welsh, Xu, Zamorani, &
Zanichelli]Arnouts2005
Arnouts, S., Schiminovich, D., Ilbert, O., et al. 2005, , 619, L43
[Bagley et al.(2023)Bagley, Finkelstein, Koekemoer,
Ferguson, Arrabal Haro, Dickinson, Kartaltepe, Papovich,
Pérez-González, Pirzkal, Somerville, Willmer, Yang, Yung,
Fontana, Grazian, Grogin, Hirschmann, Kewley, Kirkpatrick,
Kocevski, Lotz, Medrano, Morales, Pentericci, Ravindranath,
Trump, Wilkins, Calabrò, Cooper, Costantin, de la Vega,
Hilbert, Hutchison, Larson, Lucas, McGrath, Ryan, Wang, &
Wuyts]Bagley2023
Bagley, M. B., Finkelstein, S. L., Koekemoer, A. M., et al. 2023,
, 946, L12
[Barden et al.(2008)Barden, Jahnke, &
Häußler]Barden2008
Barden, M., Jahnke, K., & Häußler, B. 2008, , 175, 105
[Barden et al.(2005)Barden, Rix, Somerville, Bell,
Häußler, Peng, Borch, Beckwith, Caldwell, Heymans,
Jahnke, Jogee, McIntosh, Meisenheimer, Sánchez, Wisotzki, &
Wolf]Barden2005
Barden, M., Rix, H.-W., Somerville, R. S., et al. 2005, , 635, 959
[Bershady et al.(2000)Bershady, Jangren, &
Conselice]Bershady2000
Bershady, M. A., Jangren, A., & Conselice, C. J. 2000, , 119, 2645
[Blanton & Roweis(2007)]Blanton2007
Blanton, M. R. & Roweis, S. 2007, , 133, 734
[Block et al.(2001)Block, Puerari, Takamiya, Abraham,
Stockton, Robson, & Holland]Block2001
Block, D. L., Puerari, I., Takamiya, M., et al. 2001, , 371, 393
[Blum et al.(2016)Blum, Burleigh, Dey, Schlegel,
Meisner, Levi, Myers, Lang, Moustakas, Patej, Valdes, Kneib,
Huanyuan, Nord, Olsen, Delubac, Saha, James, Walker, & DECaLS
Team]Blum2016
Blum, R. D., Burleigh, K., Dey, A., et al. 2016, in American
Astronomical Society Meeting Abstracts, Vol. 228, American Astronomical
Society Meeting Abstracts #228, 317.01
[Boquien et al.(2019)Boquien, Burgarella, Roehlly, Buat,
Ciesla, Corre, Inoue, & Salas]Boquien2019
Boquien, M., Burgarella, D., Roehlly, Y., et al. 2019, , 622, A103
[Bouwens et al.(2004)Bouwens, Illingworth, Blakeslee,
Broadhurst, & Franx]Bouwens2004
Bouwens, R. J., Illingworth, G. D., Blakeslee, J. P., Broadhurst,
T. J., & Franx, M. 2004, , 611, L1
[Bradley et al.(2022)Bradley, Sipőcz, Robitaille, Tollerud,
Vinícius, Deil, Barbary, Wilson, Busko, Donath, Günther, Cara, Lim,
Meßlinger, Conseil, Bostroem, Droettboom, Bray, Bratholm, Barentsen, Craig,
Rathi, Pascual, Perren, Georgiev, de Val-Borro, Kerzendorf, Bach, Quint, &
Souchereau]Bradley2022
Bradley, L., Sipőcz, B., Robitaille, T., et al. 2022, astropy/photutils:
1.5.0
[Brammer et al.(2012)Brammer, van Dokkum, Franx,
Fumagalli, Patel, Rix, Skelton, Kriek, Nelson, Schmidt,
Bezanson, da Cunha, Erb, Fan, Förster Schreiber, Illingworth,
Labbé, Leja, Lundgren, Magee, Marchesini, McCarthy,
Momcheva, Muzzin, Quadri, Steidel, Tal, Wake, Whitaker, &
Williams]Brammer2012
Brammer, G. B., van Dokkum, P. G., Franx, M., et al. 2012, , 200,
13
[Bruzual & Charlot(2003)]bc03
Bruzual, G. & Charlot, S. 2003, , 344, 1000
[Buitrago et al.(2008)Buitrago, Trujillo, Conselice,
Bouwens, Dickinson, & Yan]Buitrago2008
Buitrago, F., Trujillo, I., Conselice, C. J., et al. 2008, , 687,
L61
[Calzetti et al.(2000)Calzetti, Armus, Bohlin, Kinney,
Koornneef, & Storchi-Bergmann]Calzetti2000
Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, , 533, 682
[Chabrier(2003)]Chabrier2003
Chabrier, G. 2003, , 115, 763
[Chen et al.(2022)Chen, Gao, Hsu, Liao, Ling, Lo,
Smail, Wang, & Wang]Chen2022
Chen, C.-C., Gao, Z.-K., Hsu, Q.-N., et al. 2022, , 939, L7
[Cheng et al.(2023)Cheng, Huang, Smail, Yan, Cohen,
Jansen, Windhorst, Ma, Koekemoer, Willmer, Willner, Diego,
Frye, Conselice, Ferreira, Petric, Yun, Gim, Polletta,
Duncan, Holwerda, Röttgering, Honor, Hathi, Kamieneski,
Adams, Coe, Broadhurst, Summers, Tompkins, Driver, Grogin,
Marshall, Pirzkal, Robotham, & Ryan]Cheng2023
Cheng, C., Huang, J.-S., Smail, I., et al. 2023, , 942, L19
[Cheng et al.(2022)Cheng, Yan, Huang, Willmer, Ma, &
Orellana-González]Cheng2022b
Cheng, C., Yan, H., Huang, J.-S., et al. 2022, , 936, L19
[Conselice(2003)]Conselice2003
Conselice, C. J. 2003, , 147, 1
[Conselice et al.(2000)Conselice, Bershady, &
Jangren]Conselice2000
Conselice, C. J., Bershady, M. A., & Jangren, A. 2000, , 529, 886
[Conselice et al.(2008)Conselice, Rajgor, &
Myers]Conselice2008
Conselice, C. J., Rajgor, S., & Myers, R. 2008, , 386, 909
[Daddi et al.(2005)Daddi, Renzini, Pirzkal, Cimatti,
Malhotra, Stiavelli, Xu, Pasquali, Rhoads, Brusa, di Serego
Alighieri, Ferguson, Koekemoer, Moustakas, Panagia, &
Windhorst]Daddi2005
Daddi, E., Renzini, A., Pirzkal, N., et al. 2005, , 626, 680
[Davari et al.(2016)Davari, Ho, & Peng]Davari2016
Davari, R., Ho, L. C., & Peng, C. Y. 2016, , 824, 112
[Dey et al.(2019)Dey, Schlegel, Lang, Blum, Burleigh,
Fan, Findlay, Finkbeiner, Herrera, Juneau, Landriau, Levi,
McGreer, Meisner, Myers, Moustakas, Nugent, Patej, Schlafly,
Walker, Valdes, Weaver, Yèche, Zou, Zhou, Abareshi,
Abbott, Abolfathi, Aguilera, Alam, Allen, Alvarez, Annis,
Ansarinejad, Aubert, Beechert, Bell, BenZvi, Beutler, Bielby,
Bolton, Briceño, Buckley-Geer, Butler, Calamida, Carlberg,
Carter, Casas, Castander, Choi, Comparat, Cukanovaite, Delubac,
DeVries, Dey, Dhungana, Dickinson, Ding, Donaldson, Duan,
Duckworth, Eftekharzadeh, Eisenstein, Etourneau, Fagrelius,
Farihi, Fitzpatrick, Font-Ribera, Fulmer, Gänsicke,
Gaztanaga, George, Gerdes, Gontcho, Gorgoni, Green, Guy,
Harmer, Hernandez, Honscheid, Huang, James, Jannuzi, Jiang,
Joyce, Karcher, Karkar, Kehoe, Kneib, Kueter-Young, Lan,
Lauer, Le Guillou, Le Van Suu, Lee, Lesser, Perreault Levasseur,
Li, Mann, Marshall, Martínez-Vázquez, Martini, du Mas des
Bourboux, McManus, Meier, Ménard, Metcalfe,
Muñoz-Gutiérrez, Najita, Napier, Narayan, Newman, Nie,
Nord, Norman, Olsen, Paat, Palanque-Delabrouille, Peng,
Poppett, Poremba, Prakash, Rabinowitz, Raichoor, Rezaie,
Robertson, Roe, Ross, Ross, Rudnick, Safonova, Saha,
Sánchez, Savary, Schweiker, Scott, Seo, Shan, Silva,
Slepian, Soto, Sprayberry, Staten, Stillman, Stupak, Summers,
Sien Tie, Tirado, Vargas-Magaña, Vivas, Wechsler, Williams,
Yang, Yang, Yapici, Zaritsky, Zenteno, Zhang, Zhang, Zhou, &
Zhou]Dey2019
Dey, A., Schlegel, D. J., Lang, D., et al. 2019, , 157, 168
[Elmegreen & Elmegreen(1985)]Elmegreen1985
Elmegreen, B. G. & Elmegreen, D. M. 1985, , 288, 438
[Elmegreen et al.(2007)Elmegreen, Elmegreen, Knapen, Buta,
Block, & Puerari]Elmegreen2007
Elmegreen, B. G., Elmegreen, D. M., Knapen, J. H., et al. 2007, ,
670, L97
[Elmegreen et al.(1992)Elmegreen, Elmegreen, &
Montenegro]Elmegreen1992
Elmegreen, B. G., Elmegreen, D. M., & Montenegro, L. 1992, , 79, 37
[Elmegreen & Elmegreen(1987)]Elmegreen1987
Elmegreen, D. M. & Elmegreen, B. G. 1987, , 314, 3
[Elmegreen et al.(2011)Elmegreen, Elmegreen, Yau,
Athanassoula, Bosma, Buta, Helou, Ho, Gadotti, Knapen,
Laurikainen, Madore, Masters, Meidt, Menéndez-Delmestre,
Regan, Salo, Sheth, Zaritsky, Aravena, Skibba, Hinz, Laine,
Gil de Paz, Muñoz-Mateos, Seibert, Mizusawa, Kim, & Erroz
Ferrer]Elmegreen2011
Elmegreen, D. M., Elmegreen, B. G., Yau, A., et al. 2011, , 737, 32
[Erwin(2015)]Erwin2015
Erwin, P. 2015, , 799, 226
[Euclid Collaboration et al.(2023)Euclid Collaboration,
Bretonnière, Kuchner, Huertas-Company, Merlin, Castellano,
Tuccillo, Buitrago, Conselice, Boucaud, Häußler,
Kümmel, Hartley, Alvarez Ayllon, Bertin, Ferrari, Ferreira,
Gavazzi, Hernández-Lang, Lucatelli, Robotham, Schefer, Wang,
Cabanac, Domínguez Sánchez, Duc, Fotopoulou, Kruk, La
Marca, Margalef-Bentabol, Marleau, Tortora, Aghanim, Amara,
Auricchio, Azzollini, Baldi, Bender, Bodendorf, Branchini,
Brescia, Brinchmann, Camera, Capobianco, Carbone, Carretero,
Castander, Cavuoti, Cimatti, Cledassou, Congedo, Conversi,
Copin, Corcione, Courbin, Cropper, Da Silva, Degaudenzi, Dinis,
Dubath, Duncan, Dupac, Dusini, Farrens, Ferriol, Frailis,
Franceschi, Fumana, Galeotta, Garilli, Gillis, Giocoli,
Grazian, Grupp, Haugan, Hoekstra, Holmes, Hormuth, Hornstrup,
Hudelot, Jahnke, Kermiche, Kiessling, Kohley, Kunz,
Kurki-Suonio, Ligori, Lilje, Lloro, Mansutti, Marggraf,
Markovic, Marulli, Massey, McCracken, Medinaceli, Melchior,
Meneghetti, Meylan, Moresco, Moscardini, Munari, Niemi,
Padilla, Paltani, Pasian, Pedersen, Percival, Pettorino,
Polenta, Poncet, Pozzetti, Raison, Rebolo, Renzi, Rhodes,
Riccio, Romelli, Rosset, Rossetti, Saglia, Sapone, Sartoris,
Schneider, Secroun, Seidel, Sirignano, Sirri, Skottfelt,
Starck, Tallada-Crespí, Taylor, Tereno, Toledo-Moreo,
Tutusaus, Valentijn, Valenziano, Vassallo, Wang, Weller,
Zamorani, Zoubian, Andreon, Bardelli, Colodro-Conde, Di
Ferdinando, Graciá-Carpio, Lindholm, Mauri, Mei, Scottez,
Zucca, Baccigalupi, Ballardini, Bernardeau, Biviano, Borgani,
Borlaff, Burigana, Cappi, Carvalho, Casas, Castignani, Cooray,
Coupon, Courtois, Davini, De Lucia, Desprez, Escartin,
Escoffier, Fabricius, Farina, Fontana, Ganga, Garcia-Bellido,
George, Gozaliasl, Hildebrandt, Hook, Ilbert, Ilić,
Joachimi, Kansal, Keihanen, Kirkpatrick, Loureiro, Macias-Perez,
Magliocchetti, Maoli, Marcin, Martinelli, Martinet, Maturi,
Monaco, Morgante, Nadathur, Nucita, Patrizii, Popa, Porciani,
Potter, Pourtsidou, Pöntinen, Reimberg, Sánchez, Sakr,
Schirmer, Sefusatti, Sereno, Stadel, Teyssier, Valiviita, van
Mierlo, Veropalumbo, Viel, Weaver, & Scott]Euclid2023XXVI
Euclid Collaboration, Bretonnière, H., Kuchner, U., et al. 2023,
, 671, A102
[Ferreira et al.(2022a)Ferreira, Adams,
Conselice, Sazonova, Austin, Caruana, Ferrari, Verma, Trussler,
Broadhurst, Diego, Frye, Pascale, Wilkins, Windhorst, &
Zitrin]Ferreira2022a
Ferreira, L., Adams, N., Conselice, C. J., et al. 2022a,
, 938, L2
[Ferreira et al.(2022b)Ferreira, Conselice,
Sazonova, Ferrari, Caruana, Tohill, Lucatelli, Adams, Irodotou,
Marshall, Roper, Lovell, Verma, Austin, Trussler, &
Wilkins]Ferreira2022b
Ferreira, L., Conselice, C. J., Sazonova, E., et al.
2022b, arXiv e-prints, arXiv:2210.01110
[Finkelstein et al.(2022)Finkelstein, Bagley, Haro,
Dickinson, Ferguson, Kartaltepe, Papovich, Burgarella, Kocevski,
Huertas-Company, Iyer, Koekemoer, Larson, Pérez-González,
Rose, Tacchella, Wilkins, Chworowsky, Medrano, Morales,
Somerville, Aaron Yung, Fontana, Giavalisco, Grazian, Grogin,
Kewley, Kirkpatrick, Kurczynski, Lotz, Pentericci, Pirzkal,
Ravindranath, Ryan, Trump, Yang, Almaini, Amorín,
Annunziatella, Backhaus, Barro, Behroozi, Bell, Bhatawdekar,
Bisigello, Bromm, Buat, Buitrago, Calabrò, Casey,
Castellano, Chávez Ortiz, Ciesla, Cleri, Cohen, Cole,
Cooke, Cooper, Cooray, Costantin, Cox, Croton, Daddi,
Davé, de La Vega, Dekel, Elbaz, Estrada-Carpenter, Faber,
Fernández, Finkelstein, Freundlich, Fujimoto,
García-Argumánez, Gardner, Gawiser, Gómez-Guijarro,
Guo, Hamblin, Hamilton, Hathi, Holwerda, Hirschmann, Hutchison,
Jaskot, Jha, Jogee, Juneau, Jung, Kassin, Le Bail, Leung,
Lucas, Magnelli, Mantha, Matharu, McGrath, McIntosh, Merlin,
Mobasher, Newman, Nicholls, Pandya, Rafelski, Ronayne, Santini,
Seillé, Shah, Shen, Simons, Snyder, Stanway, Straughn,
Teplitz, Vanderhoof, Vega-Ferrero, Wang, Weiner, Willmer,
Wuyts, Zavala, & Ceers Team]Finkelstein2022a
Finkelstein, S. L., Bagley, M. B., Haro, P. A., et al. 2022, ,
940, L55
[Fudamoto et al.(2022)Fudamoto, Inoue, &
Sugahara]Fudamoto2022
Fudamoto, Y., Inoue, A. K., & Sugahara, Y. 2022, , 938, L24
[Giavalisco et al.(1996)Giavalisco, Livio, Bohlin,
Macchetto, & Stecher]Giavalisco1996
Giavalisco, M., Livio, M., Bohlin, R. C., Macchetto, F. D., &
Stecher, T. P. 1996, , 112, 369
[Grogin et al.(2011)Grogin, Kocevski, Faber, Ferguson,
Koekemoer, Riess, Acquaviva, Alexander, Almaini, Ashby, Barden,
Bell, Bournaud, Brown, Caputi, Casertano, Cassata, Castellano,
Challis, Chary, Cheung, Cirasuolo, Conselice, Roshan Cooray,
Croton, Daddi, Dahlen, Davé, de Mello, Dekel, Dickinson,
Dolch, Donley, Dunlop, Dutton, Elbaz, Fazio, Filippenko,
Finkelstein, Fontana, Gardner, Garnavich, Gawiser, Giavalisco,
Grazian, Guo, Hathi, Häussler, Hopkins, Huang, Huang,
Jha, Kartaltepe, Kirshner, Koo, Lai, Lee, Li, Lotz, Lucas,
Madau, McCarthy, McGrath, McIntosh, McLure, Mobasher,
Moustakas, Mozena, Nandra, Newman, Niemi, Noeske, Papovich,
Pentericci, Pope, Primack, Rajan, Ravindranath, Reddy, Renzini,
Rix, Robaina, Rodney, Rosario, Rosati, Salimbeni, Scarlata,
Siana, Simard, Smidt, Somerville, Spinrad, Straughn, Strolger,
Telford, Teplitz, Trump, van der Wel, Villforth, Wechsler,
Weiner, Wiklind, Wild, Wilson, Wuyts, Yan, & Yun]Grogin2011
Grogin, N. A., Kocevski, D. D., Faber, S. M., et al. 2011, , 197,
35
[Guo et al.(2023)Guo, Jogee, Finkelstein, Chen, Wise,
Bagley, Barro, Wuyts, Kocevski, Kartaltepe, McGrath, Ferguson,
Mobasher, Giavalisco, Lucas, Zavala, Lotz, Grogin,
Huertas-Company, Vega-Ferrero, Hathi, Arrabal Haro, Dickinson,
Koekemoer, Papovich, Pirzkal, Yung, Backhaus, Bell,
Calabrò, Cleri, Coogan, Cooper, Costantin, Croton, Davis,
Dekel, Franco, Gardner, Holwerda, Hutchison, Pandya,
Pérez-González, Ravindranath, Rose, Trump, de la Vega, &
Wang]Guo2023
Guo, Y., Jogee, S., Finkelstein, S. L., et al. 2023, , 945, L10
[Hubble(1926)]Hubble1926
Hubble, E. P. 1926, , 64, 321
[Jacobs et al.(2023)Jacobs, Glazebrook, Calabrò, Treu,
Nannayakkara, Jones, Merlin, Abraham, Stevens, Vulcani, Yang,
Bonchi, Boyett, Bradač, Castellano, Fontana, Marchesini,
Malkan, Mason, Morishita, Paris, Santini, Trenti, &
Wang]Jacobs2023
Jacobs, C., Glazebrook, K., Calabrò, A., et al. 2023, , 948,
L13
[Kartaltepe et al.(2023)Kartaltepe, Rose, Vanderhoof,
McGrath, Costantin, Cox, Yung, Kocevski, Wuyts, Ferguson,
Bagley, Finkelstein, Amorín, Andrews, Haro, Backhaus,
Behroozi, Bisigello, Calabrò, Casey, Coogan, Cooper,
Croton, de la Vega, Dickinson, Fontana, Franco, Grazian,
Grogin, Hathi, Holwerda, Huertas-Company, Iyer, Jogee, Jung,
Kewley, Kirkpatrick, Koekemoer, Liu, Lotz, Lucas, Newman,
Pacifici, Pandya, Papovich, Pentericci, Pérez-González,
Petersen, Pirzkal, Rafelski, Ravindranath, Simons, Snyder,
Somerville, Stanway, Straughn, Tacchella, Trump, Vega-Ferrero,
Wilkins, Yang, & Zavala]Kartaltepe2023
Kartaltepe, J. S., Rose, C., Vanderhoof, B. N., et al. 2023, ,
946, L15
[Kelvin et al.(2012)Kelvin, Driver, Robotham, Hill,
Alpaslan, Baldry, Bamford, Bland-Hawthorn, Brough, Graham,
Häussler, Hopkins, Liske, Loveday, Norberg, Phillipps,
Popescu, Prescott, Taylor, & Tuffs]Kelvin2012
Kelvin, L. S., Driver, S. P., Robotham, A. S. G., et al. 2012, ,
421, 1007
[Koekemoer et al.(2011)Koekemoer, Faber, Ferguson, Grogin,
Kocevski, Koo, Lai, Lotz, Lucas, McGrath, Ogaz, Rajan,
Riess, Rodney, Strolger, Casertano, Castellano, Dahlen,
Dickinson, Dolch, Fontana, Giavalisco, Grazian, Guo, Hathi,
Huang, van der Wel, Yan, Acquaviva, Alexander, Almaini, Ashby,
Barden, Bell, Bournaud, Brown, Caputi, Cassata, Challis,
Chary, Cheung, Cirasuolo, Conselice, Roshan Cooray, Croton,
Daddi, Davé, de Mello, de Ravel, Dekel, Donley, Dunlop,
Dutton, Elbaz, Fazio, Filippenko, Finkelstein, Frazer, Gardner,
Garnavich, Gawiser, Gruetzbauch, Hartley, Häussler,
Herrington, Hopkins, Huang, Jha, Johnson, Kartaltepe,
Khostovan, Kirshner, Lani, Lee, Li, Madau, McCarthy,
McIntosh, McLure, McPartland, Mobasher, Moreira, Mortlock,
Moustakas, Mozena, Nandra, Newman, Nielsen, Niemi, Noeske,
Papovich, Pentericci, Pope, Primack, Ravindranath, Reddy,
Renzini, Rix, Robaina, Rosario, Rosati, Salimbeni, Scarlata,
Siana, Simard, Smidt, Snyder, Somerville, Spinrad, Straughn,
Telford, Teplitz, Trump, Vargas, Villforth, Wagner, Wandro,
Wechsler, Weiner, Wiklind, Wild, Wilson, Wuyts, &
Yun]Koekemoer2011
Koekemoer, A. M., Faber, S. M., Ferguson, H. C., et al. 2011, ,
197, 36
[Labbé et al.(2003)Labbé, Rudnick, Franx, Daddi,
van Dokkum, Förster Schreiber, Kuijken, Moorwood, Rix,
Röttgering, Trujillo, van der Wel, van der Werf, & van
Starkenburg]Labbe2003
Labbé, I., Rudnick, G., Franx, M., et al. 2003, , 591, L95
[Lee et al.(2013)Lee, Giavalisco, Williams, Guo, Lotz,
Van der Wel, Ferguson, Faber, Koekemoer, Grogin, Kocevski,
Conselice, Wuyts, Dekel, Kartaltepe, & Bell]Lee2013
Lee, B., Giavalisco, M., Williams, C. C., et al. 2013, , 774, 47
[Lilly et al.(1998)Lilly, Schade, Ellis, Le Fèvre,
Brinchmann, Tresse, Abraham, Hammer, Crampton, Colless,
Glazebrook, Mallen-Ornelas, & Broadhurst]Lilly1998
Lilly, S., Schade, D., Ellis, R., et al. 1998, , 500, 75
[Lotz et al.(2004)Lotz, Primack, & Madau]Lotz2004
Lotz, J. M., Primack, J., & Madau, P. 2004, , 128, 163
[Makarov et al.(2014)Makarov, Prugniel, Terekhova,
Courtois, & Vauglin]Makarov2014
Makarov, D., Prugniel, P., Terekhova, N., Courtois, H., & Vauglin,
I. 2014, , 570, A13
[Marchesini et al.(2012)Marchesini, Stefanon, Brammer, &
Whitaker]Marchesini2012
Marchesini, D., Stefanon, M., Brammer, G. B., & Whitaker, K. E. 2012,
, 748, 126
[Martínez-García
et al.(2014)Martínez-García, Puerari, Rosales-Ortega,
González-Lópezlira, Fuentes-Carrera, & Luna]Martin2014
Martínez-García, E. E., Puerari, I., Rosales-Ortega, F. F.,
et al. 2014, , 793, L19
[Mortlock et al.(2013)Mortlock, Conselice, Hartley,
Ownsworth, Lani, Bluck, Almaini, Duncan, van der Wel,
Koekemoer, Dekel, Davé, Ferguson, de Mello, Newman, Faber,
Grogin, Kocevski, & Lai]Mortlock2013
Mortlock, A., Conselice, C. J., Hartley, W. G., et al. 2013, ,
433, 1185
[Mosleh et al.(2012)Mosleh, Williams, Franx, Gonzalez,
Bouwens, Oesch, Labbe, Illingworth, & Trenti]Mosleh2012
Mosleh, M., Williams, R. J., Franx, M., et al. 2012, , 756, L12
[Nelson et al.(2022)Nelson, Suess, Bezanson, Price, van
Dokkum, Leja, Wang, Whitaker, Labbé, Barrufet, Brammer,
Eisenstein, Heintz, Johnson, Mathews, Miller, Oesch, Sandles,
Setton, Speagle, Tacchella, Tadaki, & Weaver]Nelson2022
Nelson, E. J., Suess, K. A., Bezanson, R., et al. 2022, arXiv e-prints,
arXiv:2208.01630
[Oesch et al.(2010)Oesch, Bouwens, Carollo, Illingworth,
Trenti, Stiavelli, Magee, Labbé, & Franx]Oesch2010
Oesch, P. A., Bouwens, R. J., Carollo, C. M., et al. 2010, , 709,
L21
[Paulino-Afonso et al.(2017)Paulino-Afonso, Sobral,
Buitrago, & Afonso]Paulino-Afonso2017
Paulino-Afonso, A., Sobral, D., Buitrago, F., & Afonso, J. 2017,
, 465, 2717
[Perrin et al.(2014)Perrin, Sivaramakrishnan, Lajoie, Elliott,
Pueyo, Ravindranath, & Albert]Perrin2014
Perrin, M. D., Sivaramakrishnan, A., Lajoie, C.-P., et al. 2014, in Space
Telescopes and Instrumentation 2014: Optical, Infrared, and Millimeter Wave,
ed. J. M. O. Jr., M. Clampin, G. G. Fazio, & H. A. MacEwen, Vol. 9143,
International Society for Optics and Photonics (SPIE), 91433X
[Petrosian(1976)]Petrosian1976
Petrosian, V. 1976, , 210, L53
[Petty et al.(2014)Petty, Armus, Charmandaris, Evans, Le
Floc'h, Bridge, Díaz-Santos, Howell, Inami, Psychogyios,
Stierwalt, & Surace]Petty2014
Petty, S. M., Armus, L., Charmandaris, V., et al. 2014, , 148, 111
[Rieke et al.(2023)Rieke, Kelly, Misselt, Stansberry,
Boyer, Beatty, Egami, Florian, Greene, Hainline, Leisenring,
Roellig, Schlawin, Sun, Tinnin, Williams, Willmer, Wilson,
Clark, Rohrbach, Brooks, Canipe, Correnti, DiFelice, Gennaro,
Girard, Hartig, Hilbert, Koekemoer, Nikolov, Pirzkal, Rest,
Robberto, Sunnquist, Telfer, Wu, Ferry, Lewis, Baum,
Beichman, Doyon, Dressler, Eisenstein, Ferrarese, Hodapp,
Horner, Jaffe, Johnstone, Krist, Martin, McCarthy, Meyer,
Rieke, Trauger, & Young]Rieke2023
Rieke, M. J., Kelly, D. M., Misselt, K., et al. 2023, , 135,
028001
[Robertson et al.(2023)Robertson, Tacchella, Johnson,
Hausen, Alabi, Boyett, Bunker, Carniani, Egami, Eisenstein,
Hainline, Helton, Ji, Kumari, Lyu, Maiolino, Nelson, Rieke,
Shivaei, Sun, Übler, Williams, Willmer, &
Witstok]Robertson2023
Robertson, B. E., Tacchella, S., Johnson, B. D., et al. 2023, ,
942, L42
[Roche et al.(1998)Roche, Ratnatunga, Griffiths, Im, &
Naim]Roche1998
Roche, N., Ratnatunga, K., Griffiths, R. E., Im, M., & Naim, A.
1998, , 293, 157
[Rodriguez-Gomez et al.(2019)Rodriguez-Gomez, Snyder, Lotz,
Nelson, Pillepich, Springel, Genel, Weinberger, Tacchella,
Pakmor, Torrey, Marinacci, Vogelsberger, Hernquist, &
Thilker]Rodriguez-Gomez2019
Rodriguez-Gomez, V., Snyder, G. F., Lotz, J. M., et al. 2019, ,
483, 4140
[Schade et al.(1995)Schade, Lilly, Crampton, Hammer, Le
Fevre, & Tresse]Schade1995
Schade, D., Lilly, S. J., Crampton, D., et al. 1995, , 451, L1
[Schade et al.(1996)Schade, Lilly, Le Fevre, Hammer, &
Crampton]Schade1996
Schade, D., Lilly, S. J., Le Fevre, O., Hammer, F., & Crampton, D.
1996, , 464, 79
[Schlafly & Finkbeiner(2011)]SF2011
Schlafly, E. F. & Finkbeiner, D. P. 2011, , 737, 103
[Schlegel et al.(1998)Schlegel, Finkbeiner, &
Davis]SFD98
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, , 500, 525
[Scoville et al.(2023)Scoville, Faisst, Weaver, Toft,
McCracken, Ilbert, Diaz-Santos, Staguhn, Koda, Casey, Sanders,
Mobasher, Chartab, Sattari, Capak, Vanden Bout, Bongiorno,
Vlahakis, Sheth, Yun, Aussel, Laigle, & Masters]Scoville2023
Scoville, N., Faisst, A., Weaver, J., et al. 2023, , 943, 82
[Sersic(1968)]Sersic1968
Sersic, J. L. 1968, Atlas de Galaxias Australes
[Sheth et al.(2008)Sheth, Elmegreen, Elmegreen, Capak,
Abraham, Athanassoula, Ellis, Mobasher, Salvato, Schinnerer,
Scoville, Spalsbury, Strubbe, Carollo, Rich, & West]Sheth2008
Sheth, K., Elmegreen, D. M., Elmegreen, B. G., et al. 2008, , 675,
1141
[Shi et al.(2009)Shi, Rieke, Lotz, &
Perez-Gonzalez]Shi2009
Shi, Y., Rieke, G., Lotz, J., & Perez-Gonzalez, P. G. 2009, , 697,
1764
[Skelton et al.(2014)Skelton, Whitaker, Momcheva, Brammer,
van Dokkum, Labbé, Franx, van der Wel, Bezanson, Da Cunha,
Fumagalli, Förster Schreiber, Kriek, Leja, Lundgren, Magee,
Marchesini, Maseda, Nelson, Oesch, Pacifici, Patel, Price,
Rix, Tal, Wake, & Wuyts]Skelton2014
Skelton, R. E., Whitaker, K. E., Momcheva, I. G., et al. 2014, ,
214, 24
[Smith et al.(2022)Smith, Giroux, & Struck]Smith2022
Smith, B. J., Giroux, M. L., & Struck, C. 2022, , 164, 146
[Sobral et al.(2013)Sobral, Smail, Best, Geach, Matsuda,
Stott, Cirasuolo, & Kurk]Sobral2013
Sobral, D., Smail, I., Best, P. N., et al. 2013, , 428, 1128
[Speagle et al.(2014)Speagle, Steinhardt, Capak, &
Silverman]Speagle2014
Speagle, J. S., Steinhardt, C. L., Capak, P. L., & Silverman, J. D.
2014, , 214, 15
[Stone et al.(2021)Stone, Arora, Courteau, &
Cuillandre]Stone2021
Stone, C. J., Arora, N., Courteau, S., & Cuillandre, J.-C. 2021,
, 508, 1870
[Tohill et al.(2021)Tohill, Ferreira, Conselice, Bamford,
& Ferrari]Tohill2021
Tohill, C., Ferreira, L., Conselice, C. J., Bamford, S. P., &
Ferrari, F. 2021, , 916, 4
[Trujillo et al.(2007)Trujillo, Conselice, Bundy, Cooper,
Eisenhardt, & Ellis]Trujillo2007
Trujillo, I., Conselice, C. J., Bundy, K., et al. 2007, , 382,
109
[van den Bergh et al.(2002)van den Bergh, Abraham, Whyte,
Merrifield, Eskridge, Frogel, & Pogge]vandenBergh2002
van den Bergh, S., Abraham, R. G., Whyte, L. F., et al. 2002, , 123,
2913
[van der Wel et al.(2014)van der Wel, Franx, van Dokkum,
Skelton, Momcheva, Whitaker, Brammer, Bell, Rix, Wuyts,
Ferguson, Holden, Barro, Koekemoer, Chang, McGrath,
Häussler, Dekel, Behroozi, Fumagalli, Leja, Lundgren,
Maseda, Nelson, Wake, Patel, Labbé, Faber, Grogin, &
Kocevski]vanderWel2014
van der Wel, A., Franx, M., van Dokkum, P. G., et al. 2014, , 788,
28
[Wen & Zheng(2016)]Wen2016
Wen, Z. Z. & Zheng, X. Z. 2016, , 832, 90
[Whitney et al.(2019)Whitney, Conselice, Bhatawdekar, &
Duncan]Whitney2019
Whitney, A., Conselice, C. J., Bhatawdekar, R., & Duncan, K. 2019,
, 887, 113
[Whitney et al.(2020)Whitney, Conselice, Duncan, &
Spitler]Whitney2020
Whitney, A., Conselice, C. J., Duncan, K., & Spitler, L. R. 2020,
, 903, 14
[Whitney et al.(2021)Whitney, Ferreira, Conselice, &
Duncan]Whitney2021
Whitney, A., Ferreira, L., Conselice, C. J., & Duncan, K. 2021, ,
919, 139
[Williams et al.(2009)Williams, Quadri, Franx, van Dokkum,
& Labbé]Williams2009
Williams, R. J., Quadri, R. F., Franx, M., van Dokkum, P., &
Labbé, I. 2009, , 691, 1879
[Wright et al.(2010)Wright, Eisenhardt, Mainzer, Ressler,
Cutri, Jarrett, Kirkpatrick, Padgett, McMillan, Skrutskie,
Stanford, Cohen, Walker, Mather, Leisawitz, Gautier, McLean,
Benford, Lonsdale, Blain, Mendez, Irace, Duval, Liu, Royer,
Heinrichsen, Howard, Shannon, Kendall, Walsh, Larsen, Cardon,
Schick, Schwalm, Abid, Fabinsky, Naes, & Tsai]Wright2010
Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, ,
140, 1868
[Wu et al.(2022)Wu, Cai, Sun, Bian, Lin, Li, Li,
Bauer, Egami, Fan, González-López, Li, Wang, Yang,
Zhang, & Zou]Wu2022
Wu, Y., Cai, Z., Sun, F., et al. 2022, arXiv e-prints, arXiv:2208.08473
[Yeom et al.(2017)Yeom, Rey, Kim, Lee, Chung, Kim, &
Lee]Yeom2017
Yeom, B.-S., Rey, S.-C., Kim, Y., et al. 2017, Journal of Astronomy and
Space Sciences, 34, 183
[Yu & Ho(2018)]YuHo2018
Yu, S.-Y. & Ho, L. C. 2018, , 869, 29
[Yu & Ho(2019)]Yu2019
Yu, S.-Y. & Ho, L. C. 2019, , 871, 194
[Yu & Ho(2020)]YuHo2020
Yu, S.-Y. & Ho, L. C. 2020, , 900, 150
[Yu et al.(2018)Yu, Ho, Barth, & Li]Yu2018
Yu, S.-Y., Ho, L. C., Barth, A. J., & Li, Z.-Y. 2018, , 862, 13
[Yu et al.(2021)Yu, Ho, & Wang]Yu2021
Yu, S.-Y., Ho, L. C., & Wang, J. 2021, , 917, 88
[Yu et al.(2022a)Yu, Kalinova, Colombo,
Bolatto, Wong, Levy, Villanueva, Sánchez, Ho, Vogel,
Teuben, & Rubio]Yu2022a
Yu, S.-Y., Kalinova, V., Colombo, D., et al. 2022a, ,
666, A175
[Yu et al.(2022b)Yu, Xu, Ho, Wang, &
Kao]Yu2022b
Yu, S.-Y., Xu, D., Ho, L. C., Wang, J., & Kao, W.-B.
2022b, , 661, A98
[Zou et al.(2017)Zou, Zhou, Fan, Zhang, Zhou, Nie,
Peng, McGreer, Jiang, Dey, Fan, He, Jiang, Lang, Lesser,
Ma, Mao, Schlegel, & Wang]zou2017
Zou, H., Zhou, X., Fan, X., et al. 2017, , 129, 064101
Understanding the Efficacy of U-Net & Vision Transformer for Groundwater Numerical Modelling

Maria Luisa Taccari, Oded Ovadia, He Wang, Adar Kahana, Xiaohui Chen, Peter K. Jimack

arXiv:2307.04010v1 [physics.flu-dyn], 8 July 2023
School of Civil Engineering, University of Leeds, Leeds, UK. Email: [email protected]
Department of Applied Mathematics, Tel-Aviv University, Tel-Aviv, Israel
School of Computing, University of Leeds, Leeds, UK
Department of Applied Mathematics, Tel-Aviv University, Tel-Aviv, Israel
School of Civil Engineering, University of Leeds, Leeds, UK
School of Computing, University of Leeds, Leeds, UK
This paper presents a comprehensive comparison of various machine learning models, namely U-Net <cit.>, U-Net integrated with Vision Transformers (ViT) <cit.>, and Fourier Neural Operator (FNO) <cit.>, for time-dependent forward modelling in groundwater systems. Through testing on synthetic datasets, it is demonstrated that U-Net and U-Net + ViT models outperform FNO in accuracy and efficiency, especially in sparse data scenarios. These findings underscore the potential of U-Net-based models for groundwater modelling in real-world applications where data scarcity is prevalent.
§ INTRODUCTION
Groundwater numerical models, such as MODFLOW <cit.>, are crucial for water resource management, although they are computationally demanding. To alleviate this, surrogate modelling through data-driven methods offers efficient approximations of these complex numerical techniques.
Neural Operators <cit.>, particularly the Fourier Neural Operator (FNO) <cit.>, have been at the forefront of recent advances, having shown potential to approximate arbitrary continuous functions.
However, the computational demand of FNO is particularly high during the training phase, and these neural operators require architectural enhancements to deliver promising results in subsurface problems <cit.>. This is evident in the work of Wen et al. <cit.>, where the integration of FNO with a U-Net architecture showed improved accuracy, speed, and data efficiency in multiphase flow problems. However, Gupta and Brandstetter's work <cit.>, showing that U-Net outperforms FNOs across various fluid mechanics problems, raises a question about the necessity of neural operators when the vanilla U-Net architecture already exhibits remarkable performance.
Recently, transformers <cit.> have seen considerable success in various fields, including physical systems <cit.>, for which the datasets are typically smaller compared to other domains. Only one study explores the use of transformers in groundwater modeling <cit.>, demonstrating that transformer models were outperformed by both GRU and LSTM models in predicting groundwater levels across various stations in France from meteorological and hydrological data.
Finally, the integration of U-Net with Transformers, as exemplified in studies like TransUNet <cit.> and ViTO <cit.>, has demonstrated its utility across a broad range of applications, particularly in medical image segmentation and operator learning for inverse PDE problems. Yet, the applicability of these combinations to time-dependent forward problems, real-world data scenarios, and situations with sparse data remains to be fully explored.
Several studies, such as the one by Brakenhoff et al. <cit.>, primarily focus on individual time series when analysing the impact of various hydrological stressors, including pumping rates, precipitation excess, and river stage variations, on groundwater levels of individual monitoring wells. While this approach provides valuable insights, it does not account for spatial correlations, thereby limiting its use to existing time series or monitoring wells. Similarly, previous comparisons have been predominantly limited to specific models like LSTM, CNNs and NARX in the context of groundwater level forecasting <cit.>, leaving room for broader explorations.
In this paper, we present a comprehensive comparison among models—specifically U-Net, U-Net integrated with Vision Transformers (U-Net+ViT), and Fourier Neural Operator (FNO)—for their efficacy in modeling time-dependent forward and inverse problems in groundwater systems. We test our model extensively on synthetic datasets, simulating conditions from the Overbetuwe region in the Netherlands, including sparse data scenarios. We show that both U-Net and U-Net+ViT are particularly well-suited to these important sparse data scenarios, with the addition of the Transformer providing enhanced predictive capability in many cases.
§ METHODOLOGY
§.§ Example of study site and data
This subsection provides context and rationale for our study via an example case study based upon the polder region of Overbetuwe in the Netherlands (Figure <ref>). This region showcases the characteristic Dutch system of water management where the area is divided into several polders in a mix of agriculture, nature, and urban environments. Alongside its sparse data and heterogeneous soil, these unique characteristics underscore the inherent complexities of water management in similar settings, making this dataset a suitable choice for our research. The subsoil is primarily composed of clay and sandy clay, with soil properties being determined via borehole and cone penetration tests. The study area features numerous observation wells for monitoring groundwater heads while well fields (indicated as groundwater usage facilities in the figure) are utilized for the extraction of drinking water. The work of Brakenhoff et al. <cit.> considers a dataset consisting of 250 head time series, with daily recordings starting from the year 1990 and drawdown attributed to the extraction from up to four well fields.
For the purposes of this study, we employ synthetic data to validate the proposed methodology, with the intention to subsequently apply the validated method to the real-world data of the Overbetuwe region. Figure <ref> represents a sample of the high-fidelity labeled dataset, which is constructed using the U.S. Geological Survey (USGS) finite-difference flow model, MODFLOW. The model is composed of a single-layer representation of a confined aquifer with a 128×128 grid.
The aquifer's heterogeneity is reflected through varying horizontal hydraulic conductivity within the bounds k ∈[0.1, 0.5] m/d. The hydraulic conductivity fields in our study are created using random fields which are then thresholded to delineate different classes.
A maximum of ten pumping wells extract water at variable rates in the range Q ∈[0, 30] m^3/d over a simulation period of T = 10 days. The pumping wells are placed at random locations that vary for each sample. The boundary conditions are delineated as Dirichlet, with the head equal to zero, mimicking a polder encircled by ditches where a stable water level is maintained through a comprehensive network of pumping stations.
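As an illustration of the kind of input fields used here, the following sketch generates a thresholded random conductivity field; the correlation length, number of classes, and class-to-value mapping are assumptions of ours, since the text does not specify them.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def conductivity_field(n=128, n_classes=3, k_min=0.1, k_max=0.5,
                           corr_len=8.0, seed=None):
        # Smooth Gaussian noise, threshold it into equally populated classes,
        # and map each class to a conductivity value in [k_min, k_max] m/d.
        rng = np.random.default_rng(seed)
        field = gaussian_filter(rng.standard_normal((n, n)), sigma=corr_len)
        edges = np.quantile(field, np.linspace(0.0, 1.0, n_classes + 1)[1:-1])
        labels = np.digitize(field, edges)          # integers in 0 .. n_classes-1
        k_values = np.linspace(k_min, k_max, n_classes)
        return k_values[labels]                     # (n, n) array of K in m/d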
The datasets consist of N_train = 5000 training instances and N_test = 1000 testing instances. To mirror the inherent sparsity of real-world data, a data selection strategy is adopted for the test dataset. The locations of the boreholes for estimating the hydraulic conductivity are chosen following a radial distribution pattern, and a helical pattern is used for the wells monitoring hydraulic head (Figure <ref>).
§.§ Architectures
The architectures of the three models under comparison in this study encompass the U-Net structure, a U-Net with an attention mechanism in the bottleneck, and the Fourier neural operator (FNO).

The U-Net architecture is designed with an encoder-decoder structure where the decoder receives the upsampled feature map, which is then concatenated with the corresponding feature map from the encoder through a skip connection. Detailed diagrams of the U-Net encoder and decoder can be found in Figures <ref> and <ref> in Appendix A. The encoder consists of three bottleneck blocks, where each block utilizes three layers of Conv2d, Instance Normalization, and GELU activation to extract spatial features. These blocks increase the number of channels by a factor of 2 and perform downsampling with a stride of 2. The decoder is composed of a series of upsampling blocks, where each block consists of a bilinear upsampling operation (Upsample) followed by a double convolution operation. Each convolution within the decoder is followed by Instance Normalization and a GELU activation function. The bottleneck consists of a single convolutional layer. In the time-dependent scenario, the time series of historical pumping rates is processed through two feed-forward neural network (FNN) layers before being concatenated to the input of the latent space representation (Figure <ref>).
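A minimal PyTorch sketch of the encoder and decoder blocks just described is given below; the three-convolution layout, instance normalization, GELU activations, stride-2 downsampling, and bilinear upsampling with skip concatenation follow the description above, while the kernel size and padding are assumptions.

    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out, stride=1):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1),
            nn.InstanceNorm2d(c_out),
            nn.GELU(),
        )

    class EncoderBlock(nn.Module):
        # Three convolutional layers; doubles the channels and downsamples by 2.
        def __init__(self, c_in):
            super().__init__()
            self.net = nn.Sequential(
                conv_block(c_in, 2 * c_in, stride=2),
                conv_block(2 * c_in, 2 * c_in),
                conv_block(2 * c_in, 2 * c_in),
            )

        def forward(self, x):
            return self.net(x)

    class DecoderBlock(nn.Module):
        # Bilinear upsampling followed by a double convolution applied to the
        # concatenation of the upsampled map and the encoder skip connection.
        def __init__(self, c_in, c_skip, c_out):
            super().__init__()
            self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
            self.conv = nn.Sequential(conv_block(c_in + c_skip, c_out),
                                      conv_block(c_out, c_out))

        def forward(self, x, skip):
            x = self.up(x)
            return self.conv(torch.cat([x, skip], dim=1))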
The second model, here called UNet+ViT, employs the Vision Transformer (ViT) <cit.> in the latent space representation of the U-Net, following the implementations of TransUNet <cit.> and ViTO <cit.>. The input is tokenized into a sequence of flattened 2D patches, each of size 1×1. Positional information is retained by employing a trainable convolutional projection to learn and add specific position embeddings to the patch embeddings. The structure of the Transformer includes L blocks, with each block comprising multi-head self-attention (MSA) and an FNN. This configuration uses 2 blocks, each with 2 self-attention heads and an FNN composed of 128 neurons. For a more detailed visualization of the Vision Transformer, attention block, and multihead attention, please refer to Appendix A, Figure <ref>.
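A compact sketch of such a bottleneck is shown below; for brevity, the trainable convolutional positional projection is replaced by a plain learned positional embedding, and the use of PyTorch's built-in transformer encoder (with the channel count divisible by the number of heads) is our own simplification.

    import torch
    import torch.nn as nn

    class ViTBottleneck(nn.Module):
        # Transformer applied to the U-Net latent feature map (1x1 patches).
        def __init__(self, channels, n_tokens, depth=2, heads=2, ffn_dim=128):
            super().__init__()
            self.pos = nn.Parameter(torch.zeros(1, n_tokens, channels))
            layer = nn.TransformerEncoderLayer(
                d_model=channels, nhead=heads, dim_feedforward=ffn_dim,
                activation="gelu", batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

        def forward(self, x):                       # x: (B, C, H, W)
            b, c, h, w = x.shape
            tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C): one token per pixel
            tokens = self.encoder(tokens + self.pos)
            return tokens.transpose(1, 2).reshape(b, c, h, w)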
The Fourier neural operator (FNO) <cit.> model leverages the fast Fourier transform to parameterize the integral kernel directly in Fourier space. The implementation of FNO for the 2D Darcy flow problem as presented in <cit.> is followed in this study. The total number of parameters of the FNO is 2.38 million, roughly 15 times more than UNet+ViT (151k) and 17 times more than UNet (137k).
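At the core of this architecture is the spectral convolution layer, sketched below in the style of the public 2D implementation; the number of retained Fourier modes and the channel width are hyperparameters not restated in the text, so the arguments here are placeholders.

    import torch
    import torch.nn as nn

    class SpectralConv2d(nn.Module):
        # Pointwise multiplication of the lowest Fourier modes by learned weights.
        def __init__(self, c_in, c_out, modes1, modes2):
            super().__init__()
            scale = 1.0 / (c_in * c_out)
            self.modes1, self.modes2 = modes1, modes2
            self.w1 = nn.Parameter(scale * torch.rand(c_in, c_out, modes1, modes2,
                                                      dtype=torch.cfloat))
            self.w2 = nn.Parameter(scale * torch.rand(c_in, c_out, modes1, modes2,
                                                      dtype=torch.cfloat))

        def forward(self, x):                               # x: (B, C, H, W)
            b, c, h, w = x.shape
            x_ft = torch.fft.rfft2(x)
            out_ft = torch.zeros(b, self.w1.shape[1], h, w // 2 + 1,
                                 dtype=torch.cfloat, device=x.device)
            out_ft[:, :, :self.modes1, :self.modes2] = torch.einsum(
                "bixy,ioxy->boxy", x_ft[:, :, :self.modes1, :self.modes2], self.w1)
            out_ft[:, :, -self.modes1:, :self.modes2] = torch.einsum(
                "bixy,ioxy->boxy", x_ft[:, :, -self.modes1:, :self.modes2], self.w2)
            return torch.fft.irfft2(out_ft, s=(h, w))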
§ RESULTS
§.§ Forward problem with sparse observations
This section presents the prediction of the hydraulic head at sparse monitoring wells after a constant 10-day pumping period under two different training conditions. We employ distinct sampling strategies for both input and output data in our methodology. Our training data is sampled from a regular quadratic grid, while for testing we have explored other arrangements, such as radial and helical, to understand their potential impact on the prediction performance.
In the first scenario, training is conducted using sparse data, with a spacing of 20 grid points for the input hydraulic conductivity field and a spacing of 8 for the output hydraulic head. Testing is then carried out on sparse data points, following the radial and helical patterns delineated in subsection <ref>. The resulting root mean square error (RMSE) is found to be 5.2 × 10^-2, 3.5 × 10^-2 and 8.1 × 10^-2 for the vanilla U-Net, the UNet+ViT models and FNO respectively. These results underline the superior performance of the UNet+ViT model in handling sparse data, exhibiting a lower RMSE compared to both the vanilla U-Net and the FNO models.
In contrast, when training is performed using the entire field and testing on the same sparse dataset, the error increases to 3.9 × 10^-1 for FNO, 3.8 × 10^-1 for the U-Net and 3.6 × 10^-1 for the UNet+ViT model. This outcome is anticipated, since the training set exhibits sparsity in the first scenario but not in the latter. Additionally, Figure <ref> displays the prediction over the entire domain, resulting in a lower RMSE of 1.0 × 10^-2 for FNO, and 1.7 × 10^-2 and 1.9 × 10^-2 for the vanilla U-Net and UNet+ViT models, respectively. The FNO model, while superior when dealing with full data, exhibits the highest predictive error under sparse data observations. These results highlight the practical advantages of the U-Net and especially the UNet+ViT model in real-world scenarios for which data sparsity is common.
It should be noted that traditional, simpler neural networks and other machine learning techniques may not provide adequate solutions for this specific problem. This assertion is backed by a comparison with the results of a fully connected neural network, a linear regression model and a random forest, detailed in Appendix <ref>. Despite the substantial number of trainable parameters (51.17 million) of the fully connected neural network, it, together with the linear regression and random forest models, significantly underperforms compared to the U-Net, the UNet+ViT models, and FNO.
§.§ Identification of pumping wells
In this section, we focus on an inverse problem: specifically the identification of pumping wells. This task requires determining the locations and rates of pumping wells based on the observed hydraulic heads. Throughout these experiments, we employ a single hydraulic conductivity field, which, while spatially varying, remains identical across all samples within the dataset.
In evaluating the performance of our models, we use both RMSE and accuracy. The RMSE calculates the average difference between the true and the predicted value for each pump location in the test dataset, giving a quantitative measure of the prediction error. Complementing this, the accuracy was determined by counting the proportion of correct pump predictions, where a prediction is considered correct if the predicted and actual pump locations align. This gives a sense of how often the model correctly identifies the location of pumps.
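The two metrics can be sketched as follows (NumPy assumed); the threshold used to decide whether a grid cell contains a pump is an assumption, as it is not specified in the text.

import numpy as np

def rmse(pred, true):
    # Average prediction error over all cells and test samples.
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(true)) ** 2))

def pump_location_accuracy(pred, true, rate_threshold=1e-3):
    # A sample counts as correct when the predicted pump cells (rates above the
    # threshold) coincide exactly with the true pump cells.
    correct = 0
    for p, t in zip(pred, true):
        if np.array_equal(np.abs(p) > rate_threshold, np.abs(t) > rate_threshold):
            correct += 1
    return correct / len(pred)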
The U-Net model performs best, achieving an RMSE of 5.6 × 10^-2. Interestingly, the integration of the Vision Transformer with the U-Net model does not confer any additional precision in this scenario, yielding a similar RMSE of 6.1 × 10^-2. The FNO model exhibits a higher RMSE of 1.1 × 10^-1, indicating a somewhat lower accuracy in identifying the pumping well locations.
To visually illustrate these results, Figure <ref> presents a test sample using the U-Net + ViT model. It demonstrates an accuracy of 93% in locating the pumps, calculated across the entire test dataset. The figure visualizes the model's ability to accurately identify the positions and the pumping rate of the wells. In comparison, the FNO model achieved a notably lower detection accuracy of 79% in the same task.
§.§ Example results for time series data
This section unveils the results achieved from the analysis of time series data, starting with a simplified scenario, for which the inputs are the varying hydraulic conductivity field and the pumping rate of a single pump which varies over a 10-day simulation period. Results are evaluated in terms of root mean square error (RMSE) with a focus on the comparison of different configurations of the U-Net architecture with transformers.
Figure <ref> presents a comparison of results over 5 time frames for the U-Net with the Vision Transformer under autoregressive testing conditions.
The RMSE for each method was calculated to quantify the models' performance. The U-Net architecture alone yielded an RMSE of 1.79 × 10^-2. When supplemented with a Vision Transformer, consisting of 2 attention blocks and 2 heads, the performance improves, registering an RMSE of 1.67 × 10^-2. However, increasing the complexity of the Vision Transformer to 8 blocks and 8 heads did not further improve the performance, instead, it led to a slight degradation in the RMSE (1.77 × 10^-2). Adding an Axial Transformer <cit.> to the U-Net architecture also did not enhance the performance, yielding an RMSE of 1.83 × 10^-2.
These results suggest that while adding a Vision Transformer to the U-Net architecture leads to performance improvement, increasing the complexity of the latent space does not necessarily do so.
§ CONCLUSION
This paper explores and evaluates the capabilities of different machine learning models, with a particular focus on U-Net, U-Net integrated with Vision Transformers (ViT), and Fourier Neural Operator (FNO), in the context of predicting hydraulic head in groundwater studies.
Our analysis and testing, conducted on synthetic datasets designed to simulate the conditions from the Overbetuwe region in the Netherlands and including scenarios with sparse data, firmly establish that both U-Net and U-Net + ViT models are particularly adept at dealing with such tasks. Importantly, these models are also preferred due to their fewer requisite parameters.
Specifically, in the case of sparse observation scenarios, the vanilla U-Net and the U-Net + ViT models outperformed the FNO model. In particular, the performance of the UNet+ViT model was superior when handling sparse data, highlighting the potential of the model in real-world applications, where data scarcity is a common issue. The U-Net model demonstrated optimal performance in identifying pumping wells. Interestingly, the integration of the Vision Transformer with the U-Net model did not confer any additional accuracy in this scenario. As for the analysis of time series data, supplementing the U-Net architecture with a Vision Transformer improved the model performance, recording an RMSE of 1.67 × 10^-2 compared to 1.79 × 10^-2 of the vanilla U-Net. However, increasing the complexity of the Vision Transformer did not further enhance the model performance, indicating that a more complex architecture does not necessarily yield better results.
Future research will involve applying this validated methodology to real-world data, beginning with the Overbetuwe region in the Netherlands. This will offer an opportunity to further validate and refine the model, accounting for the sparsity and uncertainties inherent in real-world data.
§ BROADER IMPACT
The implications of this research span a wide range of potential societal impacts, with a primary focus on improving the efficiency and reliability of groundwater level forecasting. Given that groundwater is a crucial resource for approximately 2.5 billion people worldwide, fulfilling their daily water needs, and a significant source of global irrigation water, the importance of reliable forecasts cannot be overstated. Our work, through enhancing the performance of groundwater numerical models, offers an opportunity to revolutionize the management and distribution of this vital resource. By providing more accurate and data-efficient predictions, we can aid in the formulation of informed and sustainable water management strategies. This is particularly crucial considering the pressing challenges of population growth and climate change.
§ ACKNOWLEDGEMENTS
This work was carried out with support of the Leeds-York-Hull Natural Environment Research Council (NERC) Doctoral Training Partnership (DTP) Panorama under grant NE/S007458/1.
Our sincere appreciation is extended to Professor Karniadakis of Brown University. The financial assistance provided by the Leeds Institute of Fluid Dynamics and Deltares, which made possible the research visit to Brown University, is also gratefully acknowledged. Lastly, we would like to express our gratitude to the reviewers. Their critiques and suggestions have greatly enhanced the overall clarity of our work.
99
brakenhoff Brakenhoff, D. A., Vonk, M. A., Collenteur, R. A., Van Baar, M., & Bakker, M. (2022). Application of Time Series Analysis to Estimate Drawdown From Multiple Well Fields. Frontiers in Earth Science, 10.
modflow Hughes, J. D., Russcher, M. J., Langevin, C. D., Morway, E. D., & McDonald, R. R. (2022). The MODFLOW Application Programming Interface for simulation control and software interoperability. Environmental Modelling & Software, 148.
gupta2022multispatiotemporalscale Gupta, J. K., & Brandstetter, J. (2022). Towards Multi-spatiotemporal-scale Generalized PDE Modeling. arXiv preprint arXiv:2209.15616.
li2020fourier Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., & Anandkumar, A. (2020). Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895.
vito Ovadia, O., Kahana, A., Stinis, P., Turkel, E., & Karniadakis, G. E. (2023). ViTO: Vision Transformer-Operator. arXiv preprint arXiv:2303.08891.
transunet Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., ... & Zhou, Y. (2021). TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv preprint arXiv:2102.04306.
WEN2022104180 Wen, G., Li, Z., Azizzadenesheli, K., Anandkumar, A., & Benson, S. M. (2022). U-FNO—An enhanced Fourier neural operator-based deep-learning model for multiphase flow. Advances in Water Resources, 163.
DINO_loket DINO loket. (2023). Retrieved from https://www.dinoloket.nl/en/subsurface-data
li2023transformer Li, Z., Meidani, K., & Farimani, A. B. (2023). Transformer for Partial Differential Equations' Operator Learning. arXiv preprint arXiv:2205.13671.
cao2021choose Cao, S. (2021). Choose a Transformer: Fourier or Galerkin. arXiv preprint arXiv:2105.14995.
dosovitskiy2021image Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., ... & Houlsby, N. (2021). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv preprint arXiv:2010.11929.
ronneberger2015unet Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv preprint arXiv:1505.04597.
wen2022ufno Wen, G., Li, Z., Azizzadenesheli, K., Anandkumar, A., & Benson, S. M. (2022). U-FNO – An enhanced Fourier neural operator-based deep-learning model for multiphase flow. arXiv preprint arXiv:2109.03697.
francestudy Mellouli, N., Rabah, M. L., & Farah, I. R. (2022). Transformers-based time series forecasting for piezometric level prediction. In 2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS).
Wunsch_comparison Wunsch, A., Liesch, T., & Broda, S. (2021). Groundwater level forecasting with artificial neural networks: a comparison of long short-term memory (LSTM), convolutional neural networks (CNNs), and non-linear autoregressive networks with exogenous input (NARX). Hydrology and Earth System Sciences, 25(3), 1671-1687.
jiang2023fouriermionet Jiang, Z., Zhu, M., Li, D., Li, Q., Yuan, Y. O., & Lu, L. (2023). Fourier-MIONet: Fourier-enhanced multiple-input neural operators for multiphase modeling of geological carbon sequestration. arXiv preprint arXiv:2303.04778.
ho2019axial Ho, J., Kalchbrenner, N., Weissenborn, D., & Salimans, T. (2019). Axial Attention in Multidimensional Transformers. arXiv preprint arXiv:1912.12180.
seidman2022nomad Seidman, J. H., Kissas, G., Perdikaris, P., & Pappas, G. J. (2022). NOMAD: Nonlinear Manifold Decoders for Operator Learning. arXiv preprint arXiv:2206.03551.
deeponet Lu, L., Jin, P., Pang, G., Zhang, Z., & Karniadakis, G. E. (2021). Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3), 218-229.
vaswani2017attention Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention Is All You Need. arXiv preprint arXiv:1706.03762.
§ APPENDIX A
This appendix provides detailed diagrams of the model structures.
§ APPENDIX B
This appendix sets out to examine whether simpler machine learning models, specifically a fully connected neural network, a linear regression model, and a Random Forest model, can achieve the same level of accuracy as more advanced models like the U-Net, the UNet+ViT models, and FNO in predicting groundwater levels.
The particular Random Forest model tested here used 30 estimators. The fully connected neural network, employed for this comparison, comprises three hidden layers, each containing 1000 nodes and using ReLU activation functions. The model holds an impressive count of 51.17 million trainable parameters.
Unfortunately, none of the models was able to predict accurately the groundwater levels neither capturing the location of the wells. Specifically, the fully connected neural network and the linear regression model yielded high RMSEs of 1.17 × 10^-1 and 1.24 × 10^-1, respectively. The Random Forest model fared slightly better, achieving a lower RMSE of 1.02 × 10^-1, but it still fell short of the U-Net, the UNet+ViT models, and FNO.
Figure <ref> visually contrasts the predictions of these simpler models against the ground truth. Their significant underperformance becomes evident when compared to more sophisticated models. For a comparison of these results with accurate outcomes produced by the UNet+ViT model, the reader is directed to Figure <ref>.
|
http://arxiv.org/abs/2307.04166v1 | 20230709131847 | Parameter Identification by Deep Learning of a Material Model for Granular Media | [
"Derick Nganyu Tanyu",
"Isabel Michel",
"Andreas Rademacher",
"Jörg Kuhnert",
"Peter Maass"
] | cs.CE | [
"cs.CE"
] |
Parameter Identification by Deep Learning of a Material Model for Granular Media
[1]Derick Nganyu [email protected]
2]Isabel Michel
1]Andreas Rademacher
2]Jörg Kuhnert
1]Peter Maass
[1]Centre for Industrial Mathematics (ZeTeM), University of Bremen, Bibliothekstrasse 5, Bremen, 28359, Bremen, Germany
[2]Fraunhofer Institute for Industrial Mathematics ITWM, Fraunhofer-Platz 1, Kaiserslautern, 67663, Rhineland-Palatinate, Germany
Classical physical modelling with associated numerical simulation (model-based), and prognostic methods based on the analysis of large amounts of data (data-driven) are the two most common methods used for the mapping of complex physical processes. In recent years, the efficient combination of these approaches has become increasingly important. Continuum mechanics in the core consists of conservation equations that – in addition to the always necessary specification of the process conditions – can be supplemented by phenomenological material models. The latter are an idealized image of the specific material behavior that can be determined experimentally, empirically, and based on a wealth of expert knowledge. The more complex the material, the more difficult the calibration is. This situation forms the starting point for this work's hybrid data-driven and model-based approach for mapping a complex physical process in continuum mechanics. Specifically, we use data generated from a classical physical model by the MESHFREE software <cit.> to train a Principal Component Analysis-based neural network (PCA-NN) for the task of parameter identification of the material model parameters. The obtained results highlight the potential of deep-learning-based hybrid models for determining parameters, which are the key to characterizing materials occurring naturally, and their use in industrial applications (e.g. the interaction of vehicles with sand).
§ INTRODUCTION
In engineering, natural sciences, and industry, partial differential equations (PDEs) are widely used to model a great variety of problems. They are a great tool for modeling and solving complex phenomena ranging from the motion of incompressible fluids to the electronic structure of materials, just to name a few. Usually, these models follow the full life cycle of products from classical simulation and optimization during the development phase to process monitoring and control during production. PDE models generally introduce some critical parameters, which have to be calibrated so that the model reflects the system or problem being considered. These parameters could be scalar or space and time-dependent parameter functions, and their calibration process usually requires multiple runs of the model. In some scenarios, one has access to the solution of the PDE or observation of the system and wishes to infer the parameters underlying the governing PDE, thus an inverse problem. A wide range of inverse problems have been studied, such as tomography <cit.>, inverse kinematics <cit.>, and inverse problems in signal processing <cit.> and even in quantum mechanics<cit.>. However, PDE-based inverse problems are one of the most challenging inverse problems. The complexity of PDE-based inverse problems is compounded by the fact that their solutions are typically nonlinear. This further emphasizes the need for efficient and fast solvers.
While traditional or standard numerical methods such as finite differences and finite elements have been used extensively to solve PDEs, most if not all these standard PDE solvers suffer from the curse of dimensionality <cit.>, i.e. the computational cost grows exponentially as the dimension increases. This has led to the extensive study of data-driven concepts, particularly, neural network approaches for solving PDEs over the last few years. In addition to their potential of overcoming the curse of dimensionality, these data-driven concepts usually have the potential to complete mathematical-physical models as even the finest detail or tricky non-linearity is contained in a sufficient dataset. Also, since the parameters to be determined most often are not arbitrary, but follow an unknown, application-specific distribution, the training data provides a means to recover and exploit this distribution.
This paper looks at a PDE-based inverse problem in the field of continuum mechanics, which is applicable to the automobile development process. Specifically, our focus is on a physical model of soil over which vehicles ride. The rest of this work is structured as follows: We continue in Section <ref> by looking into reduced order models (ROM) and how proper orthogonal decomposition (POD) as well as deep learning (DL) can be used in ROMs. We equally highlight in Section <ref>, how neural networks have been recently applied for PDE solutions, parametric studies, and inverse problems. We then proceed to present the defining equations of our problem in Section <ref> and the laboratory test setting, which provides the basis of the MESHFREE simulations <cit.> used for the data generation. Section <ref> presents the method used to approach the problem, i.e. PCA-NN. In Section <ref>, we summarize the numerical results, followed by concluding remarks in Section <ref>.
§.§ Reduced Order Models and POD/PCA
Full-order models (FOM) like the finite difference method (FDM), finite element method (FEM), finite volume method (FVM), discontinuous Galerkin method (DGM), etc. that discretize the PDEs are usually highly accurate but very expensive. Depending on the application and the goals set, the user has to balance accuracy and computation time as an algorithm of higher accuracy implies higher computation time. In FDM, for example, a finer discretization of the domain (grid) leads to higher accuracy. The result of this is a system of linear equations with many more unknowns/parameters (i.e. the solution vector has a higher dimension); thus, a larger matrix system has to be solved to obtain the PDE solution on this fine grid. This is a major setback for real-time applications, and other settings where the PDE has to be queried multiple times. Reduced Order Models (ROM) offer a solution as they seek to reduce the dimension of the solution vector while maintaining the problem's physical features. The Reduced Basis (RB) method, which has received a lot of attention in the last decade <cit.> but can be traced back to the 1980s <cit.>, is unarguably one of the most popular ROM. This method consists of an offline and an online stage. During the offline stage, a reduced basis is obtained from a good choice of parameters, and this is used to obtain solutions of the PDE for new parameters. This is very similar to neural operator methods for solving PDEs like Fourier Neural Operator (FNO) <cit.> and Deep operator network (DeepONet) <cit.>. The RB method can also be extended for parameter identification tasks <cit.> as well as inverse problems <cit.>.
Recently, Deep Learning-based reduced order models (DL-ROM) have been popularized to efficiently solve PDEs <cit.>. Just like the RB method, they consist of an offline (training) phase and an online (testing) phase. The DL-ROM, though time-efficient during testing, might be very costly during training due to the high number of features or dimensions of the input and/or output – similar to RB method. The consequence of this is usually a network with more parameters, and thus more time is needed for optimizing these parameters. A common solution that reduces the number of network parameters while maintaining or even improving the accuracy is the proper orthogonal decomposition (POD). In the field of machine learning, this is commonly known as Principal Component Analysis (PCA), used as a technique for dimensionality reduction <cit.>. Reduced order models constructed with both deep learning and POD are referred to in <cit.> as POD-DL-ROM, where accuracy-wise, they are reported to outperform state-of-the-art POD-Galerkin ROMs during the testing stage; and efficiency-wise, they outperform DL-ROMs during the training stage.
§.§ Neural Networks and PDEs
Neural Networks have shown interesting results in dealing with high dimensional complex PDEs <cit.>, where they overcome the curse of dimensionality for the Schrödinger equation and Hamilton–Jacobi–Bellman equation <cit.>, Black–Scholes equations <cit.>, and Kolmogorov equations <cit.> which arise in option pricing <cit.>.
The popularity of neural networks in solving PDEs probably comes from the famous Physics-informed neural networks in <cit.> that use a neural network to approximate a function, i.e. the solution of the PDE for a single parameter instance. Similar works include quadratic residual networks <cit.> and Deep Ritz networks <cit.>.
Another class of neural networks – probably closer in its operation to RB methods – approximate an operator by a neural network. They are known as neural operators and can be used to query solutions of different parameter instances when trained. The PCA-based neural operator <cit.>, FNO, DeepONet are part of this class as well as other novel methods and `variants' like the Multiwavelet-based operator <cit.>, graph neural operator <cit.>, wavelet neural operator <cit.>, and many more. <cit.> provides a good overview and extends them for parametric studies as well as inverse problems.
§ PROBLEM FORMULATION
To shorten the design cycle of vehicles and reduce the cost of development, the automotive industry employs numerical simulation tools in the vehicle development process for testing and analysis. In this application example, we are interested in the interaction of vehicles with various roadbeds such as sand, snow, mud, etc. Vehicle stability depends largely on this interaction, and the safety of the passengers is thus a concern. To approach this problem, a full-body model of the vehicle dynamics is needed as well as proper modeling of the roadbed. Of interest to us, is the modeling of the roadbed consisting of granular material. This is a continuum mechanics problem that involves not only the well-known conservation equations of mass, momentum, and energy, but also a supplementary phenomenological material model. While the former specify the process conditions and are generally well understood, the latter relates the applied strain to the resulting stress and comes with uncertainties as well as non-linearities. Obviously, the overall goal is for the simulations to match the real-life experiments, thus the selected material model is of great importance.
§.§ Barodesy Model
Material models have parameters that are specific to the considered material as well as its reaction to external conditions, and these models range from simple to complex. By using single-parametric models for the granular material (roadbed), for example, the deviation between simulations and experiments increases as the simulation time progresses. As a result, complex material models with many more parameters are used. Such parameters are usually determined using a great wealth of expert knowledge and costly experiments. The barodesy model <cit.> is one such complex material model, which conforms to the basic mechanical properties of the material. It is formulated in tensorial form by Equations (<ref>)–(<ref>)
d 𝐒/d t = 𝐖 𝐒-𝐒𝐖+𝐇(𝐒, 𝐃, e)
d e/d t = (1+e) ·tr(𝐃) ,
with
𝐃 = 1/2(∇𝐯^T+(∇𝐯^T)^T)
𝐖 = 1/2(∇𝐯^T-(∇𝐯^T)^T)
and
𝐇(𝐒, 𝐃, e) = h_b(σ) ·(f_b 𝐑^0+g_b 𝐒^0) ·|𝐃|,
where
σ = |𝐒|=√(tr(𝐒^2))
𝐒^0 = 𝐒/|𝐒|, 𝐃^0=𝐃/|𝐃|, 𝐑^0=𝐑/|𝐑|
𝐑 = tr(𝐃^0) ·𝐈+c_1 ·exp(c_2 ·𝐃^0)
h_b = σ^c_3
f_b = c_4 ·tr(𝐃^0)+c_5 ·(e-e_c)+c_6
g_b = -c_6
e_c = (1+e_c0) ·exp(σ^1-c_3/c_4 ·(1-c_3))-1.
In the above expressions:
- 𝐒∈ℝ^3 × 3 is the Cauchy stress tensor (with principal stresses σ_1, σ_2, σ_3 in axial and lateral directions),
- 𝐖 is the antisymmetric part of the velocity gradient,
- 𝐃 is the stretching tensor (the symmetric part of the velocity gradient),
- e= V_p/V_s is the void ratio with critical void ratio e_c, where V_p and V_s are the volume of pores and solids (grains).
- 𝐯∈ℝ^3 is the velocity field.
The non-linear function 𝐇 introduces the material parameters c_1, c_2, c_3, c_4, c_5, c_6, and e_c0 which we seek to identify via deep learning in a supervised learning task, provided the stress is known. For Hostun sand <cit.>, for example, c_1 = -1.7637, c_2 = -1.0249, c_3 = 0.5517, c_4 = -1174, c_5 = -4175, c_6 = 2218, e_c0 = 0.8703.
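As an illustration, the source term 𝐇(𝐒, 𝐃, e) can be evaluated directly from the expressions above (NumPy/SciPy assumed); this sketch only evaluates the constitutive right-hand side and does not perform the time integration of the rate equations or the coupling to the flow solver.

import numpy as np
from scipy.linalg import expm

# Hostun sand constants quoted above
c1, c2, c3, c4, c5, c6, ec0 = -1.7637, -1.0249, 0.5517, -1174.0, -4175.0, 2218.0, 0.8703

def barodesy_H(S, D, e):
    norm = lambda A: np.sqrt(np.trace(A @ A))          # |A| = sqrt(tr(A^2))
    sigma = norm(S)
    S0, D0 = S / norm(S), D / norm(D)
    R = np.trace(D0) * np.eye(3) + c1 * expm(c2 * D0)
    R0 = R / norm(R)
    e_c = (1.0 + ec0) * np.exp(sigma ** (1.0 - c3) / (c4 * (1.0 - c3))) - 1.0
    h_b = sigma ** c3
    f_b = c4 * np.trace(D0) + c5 * (e - e_c) + c6
    g_b = -c6
    return h_b * (f_b * R0 + g_b * S0) * norm(D)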
§.§ Oedometric Test
In soil mechanics, laboratory tests are used to measure the physical and mechanical properties of soil. They enable the testing and validation of material models. The tests vary from soil classification, shear strength, consolidation, and permeability tests, etc. <cit.>. The consolidation or oedometric test is one of the most conducted tests in soil mechanics. The soil (material) sample is loaded as well as unloaded in axial direction and rigid side walls prevent any lateral expansion, see Figure <ref>. With this, the soil's consolidation properties can be measured.
The laboratory measurements of oedometric tests result in stress paths (relating lateral and axial stress) and stress-strain curves, e.g. in the axial direction as illustrated in Figure <ref>. These are compared to corresponding element tests with respect to a material model such as barodesy, in which the material model is integrated for one numerical point. When evaluating the quality of 3D numerical methods, only the comparison with corresponding element tests should be made, since the numerics cannot be better than the material model itself. This was investigated, for example, in <cit.> for the MESHFREE software (see Section <ref>), at that time still referred to as the Finite Pointset Method (FPM).
§.§ MESHFREE and the Generalized Finite Difference Method (GFDM)
We employ the Generalized Finite Difference Method (GFDM) <cit.> implemented by Fraunhofer ITWM in the MESHFREE software <cit.> to numerically solve coupled PDEs governed by the conservation equations and material models such as the barodesy model described in Section <ref>. MESHFREE has successfully been applied for the simulation of complex continuum mechanics problems in industry, like vehicles traveling through water<cit.>, flow inside impulse-type turbines <cit.>, solution mining <cit.>, injection molding <cit.>, wet metal cutting <cit.>, and phase change processes <cit.>.
§.§.§ Point Clouds and Generalized Finite Difference Approximation
An overview on point cloud generation for meshfree methods is given in <cit.>. MESHFREE employs an advancing front procedure <cit.> that first discretizes the boundary and then iteratively the interior of the continuum domain depending on a given point interaction radius. Each point carries the physical information (such as velocity, pressure, temperature, stress, etc.) and is moved with the continuum velocity in a Lagrangian formulation <cit.>. Distortions caused by the movement can be corrected purely locally by adding and deleting points.
Discretizing the governing PDEs in their strong formulation, GFDM generalizes classical finite differences to (scattered/irregular) point clouds. Thereby, all numerical derivatives (function values, x-, y-, z-derivatives or Laplacian) are computed as linear combination of neighboring function values, where the neighbors of a point are determined by the point's interaction radius. The necessary coefficients/stencils are computed by a weighted least squares method. For more details on generalized finite difference approximation we refer to <cit.>.
§.§.§ Data Generation
Using the MESHFREE software, we generate parameters-stress pairs to train our neural network. Here, we use the physical and numerical model presented in <cit.> including corresponding boundary conditions for the cylindrical oedometric test. As described in <cit.>, the axial stress on the 3D point cloud (Figure <ref>) is averaged over all points of the sample to determine the resulting data for a parameters-stress pair, see Figure <ref>. For simplicity, the representation in this figure is dependent on time and not on axial strain as in Figure <ref>. Note that we use the settings for the dense sample in <cit.> with fixed interaction radius h=0.01 m, loading/unloading rate v_p=∓ 0.001 m/s, and fixed time step size Δ t = 0.0015 s for all parameters-stress pairs.
The parameters that constitute the data set are selected uniformly within predefined intervals. Guided by expert knowledge (see Section <ref>), a base value is selected and the interval is constructed around it by adding and subtracting 5 % of this base value to obtain the lower and upper bounds of this interval as shown in Table <ref>.
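The sampling of the material parameters can be sketched as follows; the base values shown are merely the Hostun sand constants quoted above and stand in for the expert-chosen base values, the actual bounds being those listed in Table <ref>.

import numpy as np

# Intervals spanning +/- 5 % of each (placeholder) base value
base = np.array([-1.7637, -1.0249, 0.5517, -1174.0, -4175.0, 2218.0, 0.8703])
lower = base - 0.05 * np.abs(base)
upper = base + 0.05 * np.abs(base)

def sample_parameters(n, rng=np.random.default_rng(0)):
    # One uniformly drawn parameter vector per MESHFREE simulation run.
    return rng.uniform(lower, upper, size=(n, base.size))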
§ PROPOSED METHOD
The proposed method is inspired by both Reduced Order Models (ROM) and Neural Networks (NN). ROMs have been popular for a long time in dealing with PDEs, and even more when dealing with parameter identification problems, as outlined in Section <ref>. NNs have become popular over recent years not only due to their success in computer vision <cit.>, natural language processing <cit.>, but also due to the availability of data and growing computing power <cit.>. The efficient combination of both methods <cit.> has already achieved remarkable results not only in simple problems but also in more complex problems such as cardiac electrophysiology <cit.> (where the use of proper orthogonal decomposition (POD) further improves the results <cit.>), fluid flow <cit.>, non-linear models <cit.>, etc.
§.§ PCA-NN
We implement a variation of the PCA-NN architecture presented in <cit.>, which uses a meshless operator for the evaluation of the solution of a PDE by combining ideas of ROM with deep learning. First, for given training data (λ_i,u_i), obtain a model reduction by the use of principal component analysis (PCA) for both the input (parameter λ) and output (solution u). Only the coefficients of a finite number of PCA components are retained. Thus, PCA reduces the dimensions of both the input and output spaces to finite dimensional latent spaces. Second, use a NN to map the coefficients of the respective representations in these latent spaces.
The evaluation of this operator approximation for a novel parameter λ is highly efficient: compute the scalar products with the specified finite number of PCA components, map these coefficients to the latent coefficients of the output space with the NN, approximate the solution of the PDE by an expansion using these coefficients and the PCA on the output side. A simplified architecture of this method is shown in Figure <ref>.
The formulation of this approach is in a function space setting and hence mesh-free. For implementation purposes, however, we have to specify how to compute the scalar products with the PCA components. These are only given numerically, usually by their values specified at discrete points (in our case time steps).
This PCA-NN operator has been used in <cit.> in a multiscale plasticity problem to map strain to stress.
§.§ Workflow
In our problem, the goal is to learn the parameters μ∈ℝ^d_μ, with d_μ = 7, from the variation of the axial stress -σ_1 ∈ℝ^d over time t, where d = 675 is the fixed number of time steps corresponding to Δ t = 0.0015 s, see Section <ref>. The data set generated from MESHFREE is therefore a vector pair (μ,-σ_1).
Our adopted procedure can be broken down into four major steps as illustrated in Figure <ref>:
* Data Generation: Using MESHFREE and the setup described in Section <ref>, generate parameters-stress pairs (μ^i, -σ_1^i) with i = 1,2,…,N_train+N_test. These are snapshots of the full order model that is based on the GFDM described in Section <ref>.
* Training (Offline Stage): The first N_train data pairs are used to train the PCA-NN neural network. During training, the average L^2-loss
L_i(μ, μ̂) = (1/d_μ) ∑_ℓ=1^d_μ ‖(μ^i_ℓ - μ̂^i_ℓ)/μ^i_ℓ‖_2
is obtained and its average over the training data
L(μ, μ̂) = (1/N_train) ∑_i=1^N_train L_i(μ, μ̂)
is optimized, see Algorithm <ref> in Section <ref> for further details. μ̂ is the output of the model which is a composition of PCA applied on the axial stress -σ_1 followed by the neural network.
* Testing (Online Stage): Once the network is trained, it is used for testing with the next N_test unseen data. Testing proceeds as shown in Algorithm <ref> in Section <ref>. The network's performance is evaluated with the loss function given in Equation <ref>, but averaged over the N_test parameters by
L(μ, μ̂) = (1/N_test) ∑_i=1^N_test L_i(μ, μ̂).
* Verification Stage (optional): This stage is used to ascertain the efficiency of the proposed model. Here, the material parameters μ̂ learned from the neural network are used as input to MESHFREE simulations, in order to compare the resulting stress -σ̂_1 with the stress -σ_1 obtained from the ground truth parameters μ. The difference is measured using the relative L^2-error given by
E(-σ_1, -σ̂_1) = (1/N_test) ∑_i=1^N_test E_i(-σ_1, -σ̂_1),
where
E_i(-σ_1, -σ̂_1) = ‖σ_1^i - σ̂_1^i‖_2 / ‖σ_1^i‖_2.
§.§.§ Network Architecture
In our numerical examples, we followed the outline described in <cit.> and used a fully connected feed-forward neural network (FCN) for the mapping of the stress latent space (output of PCA on stress) to the parameters. The number of nodes per layer starts from d, 500, 1000, 2000, 1000, 500, and finally d_μ (which is the number of parameters to be learned, here 7). d is considered a hyperparameter, which has to be tuned. In our case, d = 50 led to the best results. For the PCA, we use standard randomized singular value decomposition (SVD) implementations <cit.>. Figure <ref> illustrates the overall PCA-NN architecture.
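A hedged sketch of this architecture is given below (scikit-learn and PyTorch assumed); the ReLU activations between the hidden layers are an assumption, as the activation function is not specified in the text.

import torch.nn as nn
from sklearn.decomposition import PCA

# Randomized SVD reduces each 675-step stress curve to d = 50 PCA coefficients.
pca = PCA(n_components=50, svd_solver="randomized")
# latent_train = pca.fit_transform(stress_train)        # stress_train: (N_train, 675)

# Fully connected network mapping the 50 latent coefficients to the 7 parameters.
widths = [50, 500, 1000, 2000, 1000, 500, 7]
layers = []
for w_in, w_out in zip(widths[:-1], widths[1:]):
    layers += [nn.Linear(w_in, w_out), nn.ReLU()]
fcn = nn.Sequential(*layers[:-1])                        # no activation after the output layer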
§.§.§ Algorithm
As a purely data-driven method, no physics or PDE is needed in the training of the neural network. However, the data used to train the network is obtained from MESHFREE's GFDM for solving the underlying PDE. By training the network with these numerically-given input-output pairs, we obtain a neural operator that solves the PDE for various instances irrespective of the underlying discretization.
We specify the algorithm for the continuum mechanics problem described in Section <ref> using the barodesy model. The training data is the pair (μ^i,-σ_1^i), with each μ^i∈ℝ^d_μ and -σ_1^i ∈ℝ^d. Training then proceeds as in Algorithm <ref>, while testing of the trained network proceeds as in Algorithm <ref>.
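A compact sketch of the offline training stage is given below; the optimizer, learning rate, number of epochs and full-batch updates are assumptions, and the loss corresponds to the averaged relative error defined in the workflow above.

import torch

def relative_loss(mu_hat, mu):
    # Relative error averaged over the 7 parameters and over the batch.
    return torch.mean(torch.abs(mu_hat - mu) / torch.abs(mu))

def train(fcn, latent_train, mu_train, epochs=500, lr=1e-3):
    opt = torch.optim.Adam(fcn.parameters(), lr=lr)
    x = torch.as_tensor(latent_train, dtype=torch.float32)
    y = torch.as_tensor(mu_train, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = relative_loss(fcn(x), y)
        loss.backward()
        opt.step()
    return fcn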
§ NUMERICAL RESULTS
Randomized PCA is used to reduce the dimensions of the principal stress component from 675 to 50. This reduced dimension is the input to the FCN similar to that in Section <ref>.
Because the dimension of the parameter space is low enough, there is no need for a PCA after the FCN. The output of the FCN yields the target parameters directly. Of the 6000 data pairs generated, 75% is used for training. During training, the relative L^2-error of the individual parameters is evaluated and their average is the loss function minimized for optimizing the parameters of the neural network. However, due to the nature of this loss function, the learning of the parameters of higher magnitude is favored during training, as can be seen in Figure <ref>. We observe that the loss for parameters c_4, c_5, and c_6 (which are all of the order of 1000) is minimized, while for the other parameters (which are of the order of 1) the loss is almost not minimized. As a remedy, the parameters of lower magnitude are scaled such that they are of the same order (of 1000) as the parameters of higher magnitude. In this way, learning of all individual parameters is achieved, as shown in Figure <ref>. Figure <ref> illustrates the overall loss as the average of the individual losses.
We obtained an average relative L^2-error of 2.63 × 10^-3 on the test data set. Figure <ref> shows the comparison of the ground truth (input to the MESHFREE simulation in blue) and the learned parameters (PCA-NN in orange) for four randomly selected examples. The learned parameters of these four examples were further used in a verification step in order to compare the resulting MESHFREE output axial stress with that produced by the ground truth parameters. The average relative error obtained was 4.12 × 10^-3. This is illustrated in Figure <ref> (top), where there is an obvious overlap of the axial stresses from the learned parameters with those from the ground truth parameters. Figure <ref> (bottom) shows the corresponding relative L^2-errors.
§ CONCLUSIONS AND OUTLOOK
The presented results highlight the potential of deep learning in continuum mechanics, specifically in material parameter identification for complex material models – a task that until now has depended heavily on expert knowledge, if not trial and error. By exploiting deep learning methods, we obtain the model parameters from MESHFREE simulations. It will be equally interesting to see how the results change when experimental data is used instead of or in addition to simulation data.
The proposed method is an important first step since simulation and experimental results are almost always noisy in real-life problems. An interesting future study will be to look at the effect of different noise levels on the neural network's strength in parameter identification. This is a common practice in the field of inverse problems. For example, <cit.> studied the effects of noise on both function-approximating networks and neural operators for PDEs. There, the PCA-based method – when fed with noise – did not deviate so much from the noiseless case for increasing noise level. This is also promising for our application problem.
Acknowledgments
The authors are funded by the German Federal Ministry of Education and Research (BMBF) in the project HYDAMO. The authors would like to thank the MESHFREE team at Fraunhofer Institute for Industrial Mathematics ITWM for their support.
|
http://arxiv.org/abs/2307.06247v1 | 20230712153740 | Electronic ground-state hysteresis under magnetic field in GdMn$_2$O$_5$ | [
"Balédent V",
"Vaunat A.",
"Petit S.",
"L. Nataf",
"Chattopadhyay S.",
"Raymond S.",
"Foury-Leylekian P"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.mtrl-sci"
] |
Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405, Orsay, France.
[Corresponding author: ] [email protected]
Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405, Orsay, France.
Synchrotron SOLEIL, L'Orme des Merisiers, Saint Aubin BP 48, 91192, Gif-sur-Yvette, France
Laboratoire Léon Brillouin, CEA, CNRS, Université Paris-Saclay, 91191, Gif sur Yvette, France
Laboratoire Léon Brillouin, CEA, CNRS, Université Paris-Saclay, 91191, Gif sur Yvette, France
Synchrotron SOLEIL, L'Orme des Merisiers, Saint Aubin BP 48, 91192, Gif-sur-Yvette, France
Dresden High Magnetic Field Laboratory (HLD-EMFL), Helmholtz-Zentrum Dresden-Rossendorf, 01328 Dresden, Germany
UGC-DAE Consortium for Scientific Research Mumbai Centre, 246-C CFB, BARC Campus, Mumbai 400085, India
Université. Grenoble Alpes, CEA, IRIG, MEM, MDN, 38000 Grenoble, France
Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405, Orsay, France.
In this paper, we investigate the physical properties of the type II multiferroic material GdMn_2O_5 by means of neutron scattering, electric polarization and magnetization measurements. A complex (T,H) phase diagram shows up, with in particular a field-induced magnetic transition around 12 T at low temperature. The high field phase is accompanied by an additional electric polarization along both the a and b directions, as authorized by symmetry, but never observed experimentally up to now. While the magnetic properties recover their initial states after driving the field back to zero, the polarization along a shows a significant increase. This behavior is observed for all directions of the magnetic field. It constitutes a novel and striking manifestation of the magneto-electric coupling, resulting in the establishment of a new ground state at zero magnetic field.
§ INTRODUCTION
The search for new materials with remarkable properties is a major concern for many condensed matter physicists. In this quest, a simple idea consists in combining, within the same material, two different properties, even those which are a priori mutually exclusive. Magneto-electrical multiferroics are one of the outstanding examples. These materials allow for the manipulation of the magnetic state using an electric field via the coupling between ferroelectricity and magnetism, which represents a great potential in the field of spintronics or information storage. Different routes have been explored to obtain such properties. Artificial materials, such as heterostructures <cit.> for instance, consist in alternating layers with different properties. The coupling is then induced by proximity effect. Another extensively studied route focuses on materials whose magnetic and ferroelectric properties coexist in the bulk. Even if quite difficult to find, their list is getting longer and longer. If ferroelectricity and magnetism have a distinct microscopic origin, these materials are called type I multiferroics. One of the main interests of this family is to offer multiferroic properties at room temperature, as in BiFeO_3, but with the disadvantage of a weak coupling inherent to the distinct origin of the two orders. In type II multiferroics, ferroelectricity is induced by magnetism, the two orders are then intrinsically strongly coupled, as in RMn_2O_5 manganites for instance. This last variety of compounds has received much attention because of the fundamental problem posed by the origin of this intrinsic coupling, which results in a complex ground state where lattice, electronic and magnetic degrees of freedom are entangled.
In this study, we report the possibility of manipulating the electronic ground state of a multiferroic material in a non-reversible manner using a magnetic field. We show that a first order magnetic transition occurs at 12 T, involving both charge and spin degrees of freedom. While the electronic properties exhibit significant hysteresis, the magnetism exhibits only a weak 1 T wide loop. This opens up a new avenue for creating a new functional ground state in multiferroic materials.
The type II RMn_2O_5 multiferroic family has attracted attention for multiple reasons. First, while the average space group is Pbam with a=7.2931 Å, b= 8.5025 Å, c=5.6743 Å, this family crystallizes in a non-centrosymmetric structure at room temperature <cit.>. The actual space group Pm, with the mirror perpendicular to the c axis of Pbam, effectively allows for electrical polarization in the (a,b) plane. While such a room temperature polarization has not been confirmed experimentally, it has been calculated using DFT <cit.>. This polarization shows two components, electronic and ionic, whose amplitudes depend on the path followed in the temperature-electric field phase diagram. Second, one of the magnetic transitions, below T_N = 33 K, is accompanied by the rise of an additional contribution along the b direction <cit.>, which adds to the initial polarization. It has been proven experimentally that the exchange-striction mechanism is at play in this family <cit.>. This additional spin-induced contribution to the polarization is maximized in GdMn_2O_5, where it culminates at 360 nC.cm^-2. Third, magnetic field has proven very efficient in modifying the electronic ground state in the whole RMn_2O_5 family. Indeed, a modulation of polarization has been observed below 2 T in <cit.> and the emergence of a polarization together with ferromagnetism has been reported in PrMn_2O_5 above 15 T <cit.>. All these reasons motivate the present high magnetic field study of GdMn_2O_5.
The possibility to modify the electronic properties with a magnetic field along a has been reported recently <cit.>. The authors propose that the magnetic field can induce a transition between four different topological magnetoelectric states, formed by the two zero-field states and the two high-field ones. The orientation of the magnetic field is also important. For a field purely along the a direction, the final state is identical to the initial state: the polarization along b is unchanged, and so is the magnetic state. For a field at 10 degrees from the a-axis in the (a,b) plane, the initial and final states are however different, changing the polarization along the b-axis. The magnetic structure is also modified, but is experimentally indistinguishable from the initial structure: there are two identical degenerate states respecting the very same symmetries and magnetic space group. The existence of these two equivalent magnetic structures at zero field giving rise to different polarizations had already been pointed out by DFT calculations <cit.>. The possibility to switch from one to the other using a magnetic field along the a-axis to modify the polarization along the b-axis seems to be linked to the topological properties of these states. We present here a combination of magnetization, electric polarization and neutron diffraction measurements to investigate this mechanism further.
§ RESULTS
§.§ Magnetization
We performed magnetization measurements with the magnetic field applied along the three crystallographic axes of GdMn_2O_5. These field-sweep measurements were carried out up to ∼57 T at 2 K using pulsed magnetic fields available at the Dresden High Magnetic Field Laboratory (HLD-EMFL). The duration of each pulse was ∼20 ms. Reproducibility of the data was ensured by repeating the measurements for each direction. As can be observed in Fig. <ref>, the magnetization along the b direction shows a clear hysteresis loop around 10 T. Such hysteresis is also visible for the field along a and c around 5 and 11 T respectively. This corroborates the results obtained for the a direction in ref <cit.>.
§.§ Electric polarization
Motivated by the strong magneto-electric coupling in this material, we turned to the evolution of the electric polarization P under magnetic field. Similar to the magnetization, the measurements were also performed up to ∼57 T using a pyroelectric technique <cit.> at the HLD with a pulse duration of ∼20 ms. Both sides of the a, b, and c thinned samples were covered using silver paste with gold contact wires attached to each of the sides. For each direction of the magnetic field we measured the components of the electric polarization along the 3 crystallographic directions : in total we performed 9 configurations at 2 K. The field variations of the pyroelectric current I were recorded by measuring the voltage variation across a 1 MΩ shunt resistor connected in series with the measurement circuit by a digital oscilloscope (Yokogawa DL750). To calculate the field dependent electric polarization for each of the nine configurations, we integrated the I(H) data. All the measurements were repeated to verify the reproducibility of the data.
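As an illustration of this post-processing step only, the field-dependent polarization can be reconstructed from the recorded current as sketched below (NumPy/SciPy assumed); the electrode area and the simultaneous recording of the field profile H(t) are assumptions about details not given in the text.

import numpy as np
from scipy.integrate import cumulative_trapezoid

def polarization_vs_field(t, I, H, area):
    # Charge per unit electrode area obtained by integrating the pyroelectric
    # current over the field pulse, then expressed as a function of H(t).
    P = cumulative_trapezoid(I, t, initial=0.0) / area
    return np.asarray(H), P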
Results are shown in Fig. <ref>. First, regardless of the direction and intensity of the magnetic field, no polarization has been measured along the c direction (P_c ≡ 0, blue curves). The observed small variations are most likely due to imperfect alignment of the crystal and the electrical contacts. As expected owing to the Pm space group with the mirror perpendicular to the c axis, the electric polarization lies within the (a,b) plane (P_a and P_b ≠ 0). P_b (green curve), however, shows a step-like increase above 12, 10 and 15 T for increasing field along a, b and c, respectively. Decreasing the field from 57 T, the polarization goes back to its initial value with a hysteresis loop.
For H along b, this behavior is similar to the one observed in magnetization at the same critical field around 10 T. Hence, the high field magnetic phase induces an additional ferroelectric component P_b=4 nC.cm^-2, on top of the one initially present at zero field (360 nC.cm^-2). Interestingly, such increase in P_b is also visible for magnetic fields along a and c.
It's worth noting that only P_b polarization has been experimentally measured at low temperature in the CM1 phase. Although the Pm space group also allows non-zero P_a polarization, this has never been demonstrated experimentally. Applying a magnetic field changes the game : as can be seen in Fig. <ref>, P_a increases with increasing the field, similarly to P_b. Even more astonishing, with decreasing the field back to zero, P_a does not go back to zero, but shows a residual polarization Δ P_a between 6 and 15 nC.cm^-2 at 0 T depending on the field orientation and thus around 10 nC.cm^-1 in average.
§.§ Neutron scattering
In order to investigate further the magnetic properties under magnetic field of GdMn_2O_5, the temperature-magnetic field phase diagram was investigated by means of single crystal neutron diffraction. The experiment was carried out on the IN12 cold triple axis spectrometer, a CEA-Juelich CRG installed at ILL (Grenoble, France). Since the magnetic propagation vector is (0.5, 0, 0) or (0.5, 0, δ) depending on the temperature in all members of the RMn_2O_5 family, the natural scattering plane is the (a,c) plane, allowing access to magnetic peaks of the form (H,0,L). The magnetic field was applied along the vertical direction, hence the b direction. The evolution of selected reciprocal space regions was then recorded as a function of magnetic field and temperature, leading to the schematic phase diagram shown in Fig. <ref>. Several structures appear, characterized by different Bragg reflections and that we shall put in different categories:
* q_ICM1=(3/2,0,ϵ≈ 0.19) (gray), q_CM1=(3/2,0,0) (red) and q_ICM2=(3/2,0,δ≈ 0.43) (blue) are characteristic of the zero field, high, intermediate and low temperature structures respectively, already reported in literature <cit.>. The incommensurability along the c axis indicates that the magnetic moments wrap around the c axis, forming a helicoidal type structure, and reflecting an exchange frustration along this direction. It is worth mentioning that the q_CM1 phase is fragile: strikingly, the spin wave dispersion, measured on the very same crystal in the 5 to 30 K temperature range <cit.>, does not go soft at q_CM1 but at q_ICM2, indicating that the incommensurate phase is the incipient ground state.
* q_FM=(1,0,0) (green) reflects a ferromagnetic component, imposed by the field, with magnetic moments also along the applied magnetic field.
* q_CM2=(3/2,0,1/4) (magenta) and q_ICM3=(3/2,0,0.2-0.3) (yellow) are characteristic of field induced high and low temperature structures respectively. The magnetic structure inferred from q_CM2 is a quadrupling of the q_CM1 magnetic unit cell along the c-axis, and can also be seen as a commensurate lock-in of the periodicity along the same direction with respect to the q_ICM2 phase. Interestingly, this q_CM2 magnetic propagation vector corresponds to the ground state of the low temperature magnetic phase observed in numerous other members of the family (R=Tb, Dy, Ho, Er, Tm and Yb).
Overall, and apart from the appearance of the ferromagnetic component, the effect of the field is to restore the q_CM2=(3/2,0,1/4) magnetic phase, common to the other members of the RMn_2O_5 family. Here, such cycloidal structure wraps around the c axis, allowing to align 2 spins out of 4 in a direction perpendicular to the field, which limits the Zeeman energy and satisfies the antiferromagnetic exchanges. The obtained phase diagram is also similar to the one derived from magnetization and dielectric constant measurements on this compound <cit.>, with comparable number of phases and consistent phase boundaries in the field-temperature diagram. Details of the measured temperature and field evolution of those different Bragg reflections are gathered in Fig. <ref> and <ref>. Fig. <ref> especially shows Q-scans concatenated to produce maps as a function of wavevector and either temperature or field. Fig <ref> shows the magnetic field dependence of the intensity and/or q-position determined from a fit of those scans.
Evolution at 2 K under magnetic field. At 2 K, a contribution appears at q_FM=(1,0,0), where a nuclear contribution is forbidden by the average structure Pbam (see Fig <ref>a). As represented in Fig. <ref>c, the intensity of this peak increases and reaches a maximum for H=4 T and decreases progressively until 12 T, where it stabilizes at a minimum value up to 15 T. Fig. <ref>e-h shows that the transition temperature increases from 15 K at 1 T to more than 45 K at 15 T. As it preserves the lattice translation symmetry, it can be interpreted as a ferromagnetic contribution emerging progressively under magnetic field. Owing to the absence of magnetic anisotropy at the Gd site, contrary to the Mn site, and since the coupling between the Gd and Mn moments is the weakest in this compound <cit.>, it is very likely that Gd is the main contributor to this signal. To confirm this interpretation, we performed X-ray Magnetic Circular Dichroism on a powder sample at the ODE beamline, synchrotron SOLEIL, at both the Mn K edge and the Gd L_3 edge <cit.>. As reported in Fig. <ref>, the measurement at 1.3 T and 6 K exhibits a clear ferromagnetic contribution from the Gd but no sizeable contribution from the Mn moments within experimental sensitivity. By comparing with the literature <cit.>, we were able to estimate the magnitude of the moment to be about 2.5 μ_B for Gd and below 0.05 μ_B for Mn. These results confirm the rise of a ferromagnetic component under magnetic field and further attest that this contribution is due to the Gd moments. This can easily be explained with the same mechanism reported for PrMn_2O_5 <cit.>, replacing the exchange interaction values by those of GdMn_2O_5 <cit.>.
Coming back to neutron scattering results at 2 K, Fig. <ref>b and Fig. <ref>e show further that the (commensurate) magnetic signal at q_CM1=(3/2,0,0) decreases rapidly up to 5 T while a very small intensity remains up to H_C=11 T. This q_CM1 intensity coexists at low field with the magnetic phase q_ICM2=(3/2,0,δ=0.425) below 4 K, as can be seen on the maps Fig. <ref>b and <ref>i. The evolution of the intensity and position of q_ICM2 is reported in Fig. <ref>b and Fig. <ref>a respectively. From 0 to 10 T, the intensity grows and the position shifts from δ=0.425 to δ=0.37. Between 0 and 1 T the position remains the same, suggesting a possibility that the 0.425 position may correspond to a commensurate order with q=3/7≈0.428. At H_C=11 T a magnetic transition occurs, the q_ICM2 signal disappears, and another magnetic signal appears at q_CM2=(3/2,0,1/4). This critical value of magnetic field is similar to the observed transition in both magnetization measurements and electric polarization for the field along the b direction.
Evolution above 33 K under magnetic field. The evolution of the high temperature magnetic order as function of the field was also investigated. At 35 K, Fig. <ref>c shows that the magnetic peak q_ICM1=(3/2,0, ϵ) shifts from ϵ=0.17 to ϵ=0.22 with a decreasing intensity from 0 to 12 T. Interestingly, a commensurate peak appears above H=5 T at q_CM2=(3/2,0,1/4), the very same position as the high field low temperature one, showing that this new magnetic order stabilizes at lower field at high temperature. Performing the same study at 40 K shows no sign of the commensurate phase at q_CM2, while ϵ moves from 0.19 to 0.22 between 0 and 15 T (see Fig. <ref>d).
Temperature evolution of field-induced magnetic phases. Despite a strong reduction of the intensity of the ferromagnetic contribution at q_FM above 4 T, the temperature range of existence of this contribution increases from below 10 K at 1 T up to 40 K at 15 T (Fig. <ref>e-h). However, the transition temperature of q_CM1 slowly decreases from 33 K at zero field down to 25 K at 10 T (Fig. <ref>i-j). On the contrary, q_ICM2 shows an increase of its transition temperature from 3.75 to 40 K between 1 and 15 T (Fig. <ref>j-m). Both disappear above the transition field H_C=11 T. At 10 T, however, q_CM2 appears in the narrow temperature range between 25 and 40 K. Above 12 T, this phase is present from below 35 K down to 2 K (Fig. <ref>k). An additional subtlety appears at 12 T around this q_CM2 magnetic peak: two satellite peaks appear at q_ICM3=(1.5,0,0.2) and (1.5,0,0.3) from 2 to 15 K, with a maximum at 10 K (Fig. <ref>l). These satellites are still present at 15 T but disappear above 10 K. Further investigations are necessary to better identify the corresponding magnetic structures, but we anticipate that these satellites may be the signature of some kind of roughening of the cycloid around the c axis, or of discommensurations.
Hysteretic behaviour. In order to investigate the nature of the transition at H_C=11 T, we performed the same measurement while decreasing the field. Fig. <ref> shows the field evolution of these peaks. A clear hysteresis is visible on the intensity of q_ICM2 (<ref>b) and q_CM2 (<ref>d) with a width around 2 T. This behavior, together with the coexistence of both orders between 10 and 12 T, strongly suggests a first order transition.
§ DISCUSSION
A similar hysteresis of the electric polarization as a function of magnetic field was reported for the same compound <cit.>, with the difference that the magnetic field was oriented 10 degrees away from the a direction in the (a,b) plane, with a hysteresis effect on the polarization along b. The authors interpreted this result by arguing for the presence of four topological states that can be manipulated by the orientation of the applied magnetic field. They further relate these four states to four different spin configurations within the same zero-field unit cell, doubled along the a direction, corresponding to the q_CM1 propagation wave vector. The orientation of the magnetic field in the (a,b) plane seems crucial, and the angle between the field and the a axis must be very specific to obtain such a topological transition. The proposed analysis is based on a simple model consisting of two spin chains per unit cell, taking into account the anisotropy and only two exchange interactions (intra-chain along a and inter-chain along b). This simple model also relies on the assumption that the intra-chain exchange interaction is dominant in GdMn_2O_5, as observed in most RMn_2O_5.
Although this simplified model allows one to introduce the interesting topological aspects of these different ground states, the present results show that it lacks some ingredients to capture the physics of the high-field phase. First, we show here that the topological transition can also be observed for a totally different field direction (here along b), with an effect on the polarization along a. Secondly, inelastic neutron scattering experiments single out GdMn_2O_5 as an exception among the RMn_2O_5 family, with a weak intra-chain coupling <cit.> that is therefore not dominant. Finally, the present results show the importance of the coupling along the c axis and call for a 3D model. Indeed, i) the magnetic transition is accompanied by a quadrupling of the unit cell in the c direction (from q_CM1 to q_CM2) and ii) the anomaly at H_C is systematically observed in susceptibility and polarization measurements, whatever the direction of the magnetic field, which indicates a change of magnetic order for any direction of the magnetic field.
We propose a simpler reasoning based on DFT calculations of the electronic ground state <cit.> at zero field. According to the symmetry of the room-temperature Pm space group, the two components of the polarization along a (P_a) and b (P_b) are allowed. This results in four degenerate “high temperature” configurations depending on the relative signs of these components: (++), (+-), (-+) and (- -), as represented by the red arrows in Fig. <ref>a. These four configurations are associated with two different spin structures that are equivalent from a symmetry point of view and cannot be discriminated. Interestingly, these two spin configurations correspond to the two topological phases proposed at low field in Ref. <cit.>. According to DFT calculations <cit.>, a spin-induced contribution adds up below the magnetic transition temperature (T_N=33 K): -Δ P_a and +Δ P_b for the (++) and (+-) states, and +Δ P_a and +Δ P_b for the (-+) and (- -) states (see blue arrows in Fig. <ref>a). The net resulting polarization below T_N, summing over the four states, is thus along b only, as measured experimentally. Above 12 T, the q_CM2 magnetic state induces a new electronic ground state, as revealed experimentally by the changes in the P_a and P_b components of the polarization.
In analogy with the proposed switching model between topological states <cit.>, one can expect the magnetic field to select only certain states once it has returned to zero. In order to reproduce the experimental observations, only two choices for the final states seem possible: (++) + (+-) or (- -) + (-+) (see Fig. <ref>b).
From a magnetic point of view, the initial state and the final state after field-sweep, are equivalent. This is consistent with the present magnetization and neutron scattering results. From an electronic point of view, however, the final state can be discriminated from the initial state. Indeed, while P_b remains unchanged (see the b component in Fig. <ref>.b), P_a is no longer compensated by the two other contributions (coming from (++) and (+-)). DFT calculations predicted a contribution of Δ P_a ≈ 10 nC.cm^-1 <cit.> which is in perfect agreement with our results in Fig. <ref>. The detailed mechanism to describe how the field along b selects only certain electronic states remains to be unveiled.
§ CONCLUSION
In conclusion, the present study reveals a first-order magnetic transition under magnetic field in GdMn_2O_5. This transition recalls the metamagnetic transitions characteristic of heavy-fermion systems <cit.>. For H along b, the transition is characterized by a new propagation vector q_CM2=(1/2,0,1/4), which is common among the RMn_2O_5 family. The new high-field phase presents an additional contribution to the electric polarization along both the a and b directions. Surprisingly, the polarization along a does not go back to zero when the magnetic field is decreased, but remains finite. This indicates a change of the electronic ground state after magnetic field cycling, while no indication of such a change of the magnetic ground state could be evidenced by either magnetic susceptibility or neutron diffraction. The establishment of a new electronic ground state after H cycling is further observed for all directions of the magnetic field. These results challenge the previous model of magneto-electric switching between different topologically protected states. They should motivate further experimental and theoretical work to investigate the stability of these new states for other directions of the magnetic field, and to study the consequences for dynamical properties such as the electromagnon. Our work confirms the importance of the path followed in the three-dimensional (T,E,H) phase diagram in the establishment of the ground state in multiferroic materials, as suggested in recent work <cit.>, and should be extended to other external parameters such as pressure.
§ ACKNOWLEDGMENTS
We thank W. Knafo for fruitful discussions. This study was supported by grants from LLB and SOLEIL synchrotron, and by LabEx PALM through Contract No. ANR-10-LABX-0039-PALM. Experiments at ILL were sponsored by the French Neutron Federation (2FDN), with Data References 10.5291/ILL-DATA.CRG-2839 and 10.5291/ILL-DATA.4-01-1700. We acknowledge SOLEIL for the provision of synchrotron beamtime (proposal number 20220714) on the ODE beamline, the support of the HLD at HZDR, member of the European Magnetic Field Laboratory (EMFL proposals DMA14-219 and DMA15-219), for magnetization and polarization measurements, and the MORPHEUS platform at the Laboratoire de Physique des Solides for sample alignment.
|
http://arxiv.org/abs/2307.07385v1 | 20230714145224 | Partial Allocations in Budget-Feasible Mechanism Design: Bridging Multiple Levels of Service and Divisible Agents | [
"Georgios Amanatidis",
"Sophie Klumper",
"Evangelos Markakis",
"Guido Schäfer",
"Artem Tsikiridis"
] | cs.GT | [
"cs.GT"
] |
|
http://arxiv.org/abs/2307.04485v1 | 20230710111725 | Silver-Platinum nanoparticles and nanodroplets supported on silica surfaces: structure and chemical ordering | [
"F. Ait Hellal",
"J. Puibasset",
"C. Andreazza-Vignolle",
"P. Andreazza"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cond-mat.stat-mech"
] |
ICMN, CNRS, Université d'Orléans, 1b rue de la Férollerie, CS 40059, 45071 Orléans cedex 02, France
Stable and metastable metallic nanoparticles exhibit unique properties compared to the bulk, with potentially important applications for catalysis. This is in particular the case for the AgPt alloy that can exhibit the ordered L1_1 structure (alternation of pure Ag and Pt (111) planes) in nanometer size particles. However, for such small systems, the interfaces play an important role. Therefore, the support used to elaborate the nanoparticles in ultrahigh vacuum experiments may influence their properties, even in the case of weakly interacting substrates like amorphous carbon or silica. This work focuses on the AgPt nanoparticles deposited on silica, and investigates the effect of the support disorder and roughness on the structure and chemical ordering, in particular at the interface with the substrate, by Monte Carlo calculations of the atomic density profiles with semi-empiric potentials.
metallic nanoparticle; AgPt nanoalloy; chemical ordering; density profile; Monte Carlo simulation
§ INTRODUCTION
Supported metallic nanoparticles (NPs) are catalyst models for structure and chemical ordering studies <cit.>. Choosing an amorphous substrate like amorphous carbon or silicon oxide, such as silica, has the advantage of minimizing the interactions with the NP, hence preserving the structure and morphology it would adopt in vacuum in the absence of interactions. This strategy is used in experiments where the NPs are grown on amorphous supports under ultrahigh vacuum conditions <cit.>. It has however been observed that this type of support may still influence the NP structure and morphology <cit.>. Although the expected effects are less spectacular than for crystalline supports like MgO, which interact strongly with NPs <cit.>, theoretical works have recently focused on the effect of weakly interacting surfaces <cit.>.
Among the possible effects of the substrate, chemical ordering is particularly relevant for catalysis, since NPs offers the unique opportunity to possibly drastically minimize the required amount of active matter by an optimal organization of the chemical species at the NP external surface.
The silver-platinum alloy exhibits interesting features <cit.>, in particular an ordered L1_1 structure (alternation of pure Ag and Pt (111) planes) that has been observed in nanometer-size particles <cit.> and has stimulated theoretical studies <cit.>. It also has important applications as a catalyst in fuel cells, where chemical ordering strongly influences its catalytic efficiency <cit.>. Therefore, assessing the structure of the supported AgPt nanoalloy is a relevant issue.
The elaboration process strongly influences the structure of the NP which is not necessarily at equilibrium (the system remains trapped in local minima, corresponding to metastable states) <cit.>.
In simulations, the situation is even worse since the limited capabilities of computers impede reaching the experimental timescale, and thus strongly limit the possibilities of atomic reorganization.
To circumvent the problem, it is possible to increase the metal mobility by increasing the temperature <cit.>. This is why, besides the solid NP, we will also consider the corresponding liquid droplet at 1200 K and 1500 K.
Beyond the fact that this forces atomic mobility towards equilibrium, the disappearance of the L1_1 chemical order in the core due to the thermal agitation should leave room for the observation of a possible remnant chemical ordering at the interfaces, in particular that with the silica support.
It is emphasized that, in contrast to simulations, increasing the temperature of Ag-based nanoparticles like AgPt up to the liquid phase is experimentally difficult both in ultrahigh vacuum conditions, due to sublimation before melting, and at ambient pressure, due to the contamination by the atmosphere.
This paper is divided as follows: We first present the numerical model and the methods to characterize the structure and chemical ordering in terms of atomic density profiles. Then the results follow, with a focus on the effect of the temperature on the structure and chemical ordering, as well as the influence of the disorder and roughness of the support.
§ NUMERICAL DETAILS AND METHODS
Molecular model: We perform Monte Carlo (MC) simulations of supported AgPt nanoparticles on various silica substrates that mimic the supports used to elaborate these systems in ultrahigh vacuum experiments. The system is described at the atomic level, with semi-empirical interatomic potentials. The simulations are performed in the canonical ensemble, including random displacements and atomic exchanges between the metallic species Ag and Pt. These exchanges are particularly useful at low temperature, where the energy barrier associated with atomic diffusion in the core of the NP is too high to allow chemical reorganization.
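To make the sampling scheme concrete, the following minimal sketch illustrates one MC sweep combining trial displacements and Ag/Pt identity exchanges with the Metropolis acceptance rule. It only illustrates the move set described above: the energy(positions, species) callable standing in for the metal-metal and metal-support potentials, as well as the step size and constants, are assumptions rather than the actual implementation used here.

import numpy as np

def mc_sweep(positions, species, energy, T, dmax=0.1, kB=8.617e-5,
             rng=np.random.default_rng()):
    """One MC sweep: n trial moves, half displacements, half Ag/Pt swaps (energies in eV)."""
    beta = 1.0 / (kB * T)
    n = len(species)
    for _ in range(n):
        e_old = energy(positions, species)
        if rng.random() < 0.5:                      # random atomic displacement
            i = rng.integers(n)
            old = positions[i].copy()
            positions[i] += rng.uniform(-dmax, dmax, size=3)
            de = energy(positions, species) - e_old
            if rng.random() >= np.exp(-beta * de):
                positions[i] = old                  # Metropolis rejection
        else:                                       # Ag <-> Pt identity exchange
            i = rng.choice(np.flatnonzero(species == "Ag"))
            j = rng.choice(np.flatnonzero(species == "Pt"))
            species[i], species[j] = species[j], species[i]
            de = energy(positions, species) - e_old
            if rng.random() >= np.exp(-beta * de):
                species[i], species[j] = species[j], species[i]   # undo the swap
    return positions, species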
Interatomic potentials: The many-body metal-metal interaction derives from the tight binding scheme in the second moment approximation (TBSMA) <cit.>. The ordered AgPt L1_1 structure being stabilized by the contribution of the second neighbors, an additional Gaussian term has been developed by Front and Mottet to reproduce the main structural properties of AgPt alloys <cit.>. They have in particular determined the most stable structures of TOh AgPt NPs smaller than 7 nm. In this study we will focus on the NP with 1289 atoms (3.4 nm) which exhibits a rich structure depicted in Fig. <ref>.
The metal-support interaction being weak (van der Waals like), a simple Lennard-Jones potential has been used, that has been previously developed for pure silver and platinum NPs as well as AgPt nanoalloys <cit.>. This potential has been parameterized to reproduce the aspect ratio (defined as H/D where H is the height and D the diameter) of experimentally deposited NPs (the parameters are given in Table <ref>).
Supports: We consider two silica supports to evidence the effect of the atomic disorder and roughness that can be observed in experimental supports like oxidized silicon wafers. As a perfectly flat and ordered surface we use the (100) quartz surface. It is emphasized that this surface undergoes a 1× 2 reconstruction with a top layer twice as dense as the bulk quartz (see Fig. <ref>a) <cit.>. To model a disordered substrate we simply cut and relax a slab of amorphous silica (a-SiO_2) (see Fig. <ref>b). More details in the method and the potentials used can be found in Ngandjong et al. <cit.>. It is mentioned that, despite the fact that the amorphous silica surface is hydroxylated, the hydrogen species are not explicitly taken into account in the interactions.
Chemical ordering and density profiles: The objective is to measure the effect of the substrate on the chemical ordering in the nanoalloy. This is done simply by comparing, layer by layer, with the equilibrium structure of the free NP (Fig. <ref>), since the geometric structure is only marginally affected by the weak interaction with the substrate. However, the time scale involved in experiments being inaccessible to simulations, the intrinsic mobility of Ag (due to its lower cohesion) is largely underestimated in the calculations. The introduction of MC exchanges between Ag and Pt largely solves the problem and allows chemical rearrangement, but possibly misses some facets of the complex cross-diffusion of Ag and Pt in the NP. We therefore consider the effect of temperature, ranging from the solid state up to the liquid state (above approximately 1200 K for Ag_3Pt) <cit.>. In this case, the NP structure is fully disordered (droplet) but may exhibit partial chemical ordering, in particular close to the interfaces (with vacuum and with the substrate).
In the liquid case, the structure of the nanodroplet is characterized by averaging the atomic density profiles of the metal. Since our objective is to determine the aspect ratio (H/D) we focus on two density profiles, along the z axis (perpendicular to the surface) and along the radial coordinate r_cyl (in the plane parallel to the substrate, see Fig. <ref>). So, from the local atomic density ρ(r_cyl,θ,z) we construct the two quantities:
ξ_r (z) = ∫ρ(r_cyl,θ,z) r_cyl dr_cyl dθ
ξ_z (r_cyl) = ∫ρ(r_cyl,θ,z) r_cyl dz dθ.
These density profiles have the dimension of the inverse of a distance, and their integrals give the total number of atoms in the NP.
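As an illustration, the two profiles defined above can be estimated from a snapshot of atomic positions by simple histogram binning, as in the following sketch (the bin widths are arbitrary, the droplet axis is assumed to pass through x = y = 0, and the average over MC configurations is performed outside this function):

import numpy as np

def density_profiles(xyz, dz=0.2, dr=0.2):
    """xyz: (n_atoms, 3) positions in Angstrom, with z normal to the substrate."""
    z = xyz[:, 2]
    r_cyl = np.hypot(xyz[:, 0], xyz[:, 1])          # in-plane distance from the droplet axis
    z_edges = np.arange(z.min() - dz, z.max() + 2 * dz, dz)
    r_edges = np.arange(0.0, r_cyl.max() + 2 * dr, dr)
    xi_r, _ = np.histogram(z, bins=z_edges)          # atoms per slab of thickness dz
    xi_z, _ = np.histogram(r_cyl, bins=r_edges)      # atoms per cylindrical shell of width dr
    # dividing by the bin width gives profiles with dimension 1/length,
    # whose integrals recover the total number of atoms
    return (0.5 * (z_edges[:-1] + z_edges[1:]), xi_r / dz,
            0.5 * (r_edges[:-1] + r_edges[1:]), xi_z / dr)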
In order to determine the aspect ratio from the data, these profiles are fitted to a simple model of a truncated sphere of uniform density ρ_0 and radius R_1, with a skin of thickness R_2-R_1 where the density decreases from ρ_0 down to zero. We define the droplet radius as R=0.5(R_1+R_2), and the aspect ratio is H/D=(h+R)/2R (see Fig. <ref>). In practice, ξ_z (r_cyl) mostly depends on the droplet radius, while ξ_r (z) is sensitive to h.
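A possible implementation of this fitting step is sketched below with scipy, using a sharp-interface version of the truncated-sphere model for ξ_r(z); the smooth skin of thickness R_2-R_1 is omitted for brevity, the substrate plane is assumed to lie at z = 0 so that h is the height of the sphere centre above the substrate, and the synthetic data only illustrate the call.

import numpy as np
from scipy.optimize import curve_fit

def xi_r_model(z, rho0, R, h):
    """Atoms per unit height for a uniform sphere of radius R centred at z = h."""
    return rho0 * np.pi * np.clip(R**2 - (z - h)**2, 0.0, None)

# synthetic stand-in for a measured xi_r(z) profile
z_grid = np.linspace(0.0, 35.0, 80)
xi_obs = xi_r_model(z_grid, 0.05, 19.5, 13.0) + 0.3 * np.random.rand(z_grid.size)

(rho0, R, h), _ = curve_fit(xi_r_model, z_grid, xi_obs, p0=(0.04, 18.0, 10.0))
print(f"H/D = {(h + R) / (2.0 * R):.2f}")   # aspect ratio as defined in the text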
§ RESULTS
§.§ Structure and chemical ordering of solid AgPt NPs on quartz
At low temperature (around room temperature or below), the supported AgPt NP remains essentially frozen in its initial state, preserving its highly structured layers and chemical ordering (in particular the L1_1 structure), with only scarce exchanges between the Ag and Pt species. Increasing the temperature enhances the atomic mobility and somehow mimics the experimental conditions, where moderate annealing allows the AgPt NPs to reorganize thanks to the large mobility of Ag atoms. The optimal temperature is around 700 K, the largest possible below the melting point of Ag for a NP of a few nanometers <cit.>. We first consider the AgPt NP deposited on the perfectly ordered quartz surface. The atomic density profiles along the z axis are acquired for Ag and Pt and shown in Fig. <ref>.
One observes a strong layering through the whole NP showing that at 700 K the NP remains solid during the simulation run. One can however observe some atomic mobility at the external surface of the NP, as revealed by the small peak C in Fig. 4 corresponding to adatoms on the top layer. An example where such adatoms can be seen is given in Fig. <ref>.
Despite the low atomic mobility, the MC chemical exchanges between Ag and Pt species allow the system to reorganize. It is observed that the chemical ordering associated to the alternating Ag and Pt planes in the core of the NP is lost, showing that the L1_1 structure is destabilized by the temperature.
On the other hand, the outermost silver layer is quite robust, as can be seen in the snapshot (Fig. <ref>) and from the presence of essentially pure Ag peaks in the first and last layers, denoted A and B in Fig. 4. Note however that these layers are no longer perfectly pure Ag, revealing that some Pt atoms can diffuse into the outer Ag shell. But, as can be seen in the inset, the Pt peaks in these layers are not centred with respect to the corresponding Ag peaks: in each case the Pt peak is shifted towards the centre of the NP, meaning a strong penalty for Pt at the surface. Quantitatively, the integration of the Pt peaks gives their proportion in the A and B layers: one obtains 3.7% Pt in layer A and 5.5% Pt in layer B. The free Ag surface is thus slightly more favourable for Pt than the surface in contact with the support. This is an interesting feature, at odds with the observation that the Pt-SiO_2 interaction is stronger than the Ag-SiO_2 interaction (Table <ref>). A possible interpretation is that the Ag layer at the interface with silica is more constrained than the free one.
§.§ Structure and chemical ordering of AgPt nanodroplets on quartz
What happens above the melting point of AgPt? The structure of the NP is now expected to be destabilized (and at equilibrium thanks to the atomic mobility), which could influence the chemical ordering. The double objective is thus to determine the aspect ratio of the AgPt droplet and the chemical profiles at the interfaces. The first calculations are done well above the melting point, at T = 1500 K, for the AgPt NP deposited on the quartz surface.
We first calculate the density profiles ξ_r (z) and ξ_z (r_cyl) for all atoms in the drop without distinguishing Ag and Pt (Fig. <ref>). As can be seen on ξ_r (z), the layer structure along the z axis is smoothed out due to the thermal agitation, except close to the perfectly flat and ordered quartz surface, where one can observe at least three layers. In the upper region of the droplet the layering has completely disappeared and the density profile decreases smoothly with a small tail at z = 20 Å.
Along the radial coordinate, ξ_z (r_cyl) shows an initial increase essentially linear corresponding to the integration of a uniform density on the surface of a cylinder. It then reaches a maximum and rapidly decreases due to the spherical shape of the drop, with a tail due to the smooth transition between the metal core and the surrounding vacuum.
Examination of the density profiles gives a good estimate of the drop height and diameter, but a best fit with the liquid-drop model depicted in Fig. <ref>b gives better insight (smooth solid red lines in Fig. <ref>). Obviously, the layering observed in ξ_r (z) cannot be described by the uniform-density model, but the average variations are caught. The height of the maxima compared to the drop-model curve reveals that the first layers are particularly structured, essentially because of the perfectly ordered quartz surface. Otherwise, the model quantitatively describes the variations of ξ_r (z) far from the support, as well as the variations of ξ_z (r_cyl) over the whole range of values. The corresponding density profile of the liquid-drop model is shown in the inset, and the values given by the best fit are h = 13 Å and R = 19.5 Å, giving the aspect ratio H/D = 0.83. The uniform density is ρ_0 = 0.049 atom/Å^3.
The density profile in the inset shows that the thickness of the skin (5 Å) somehow corresponds to 1 to 2 atomic diameters and is not negligible compared to the radius of the droplet. A more refined profile can be acquired directly during the course of the simulation by calculating a spherical radial distribution ρ(r_sph) taking care to exclude the rays in the solid angle defined by the intersection between the sphere and the substrate. The result is given in Fig. <ref>. It confirms that the density within the core of the droplet is essentially uniform within uncertainties, due to the low statistics in the centre of the sphere. The average density on the plateau is in agreement with the value previously extracted from the best fit. It also confirms that the atomic density profile drops smoothly to zero within a skin thickness of 5 Å.
The same analysis has been done at a lower temperature T = 1200 K (see Fig. <ref> for ξ_r (z) and ξ_z (r_cyl) and Fig. <ref> for ρ(r_sph)). As can be seen, reducing the temperature enhances the observed layering at the quartz surface. Otherwise, the structure is not affected significantly except for a slightly larger density in the core: ρ_0 = 0.051 atom/Å^3. The extracted values from the fit are h = 13 Å and R = 19.7 Å, giving an aspect ratio H/D = 0.83.
In order to quantify the chemical ordering at the interface due to the support, we focus only on the case of the AgPt droplet at 1200 K on quartz: the low temperature and the strong interaction with the support are expected to enhance the effect. Figure <ref> shows that at mid-height (around z=0) the Ag and Pt species have equal probabilities to be at any position. However, close to the substrate, one observes a strong chemical ordering, with an almost pure Ag layer at the interface with the quartz, the second layer being filled essentially with Pt, with the silver atoms at the periphery. On the other side (top of the NP), we also observe chemical ordering (silver excess at the interface with vacuum). All this is explained by the low surface tension of Ag, which preferentially migrates to the interfaces, while Pt accumulates in the subsurface, in particular at the interface with the support, a behaviour similar to what was observed in the solid state.
§.§ Effect of the support disorder and roughness on the liquid NPs
Does the strong layering observed on quartz persist in the presence of disorder or roughness? To answer this question, we performed MC simulations of the AgPt NP on the amorphous SiO_2 surface (Fig. <ref>b) at T = 1500 K. The density profiles ξ_r (z) and ξ_z (r_cyl) exhibit essentially the same characteristics as for the quartz support, except for two points (see Fig. <ref>):
(i) The layering close to the support is significantly smoothed out due to the atomic disorder of the amorphous silica surface. Note that the roughness of this surface is quite small, but also participates to the destabilization of the first layer. The consequence is that the density profile now closely approaches the smooth curve given by the liquid drop model.
(ii) The best fit with the model gives the following parameters: h = 14.5 Å and R = 19.5 Å, giving an aspect ratio H/D = 0.87. As can be seen, compared to quartz, the aspect ratio is slightly closer to 1, in agreement with the fact that the surface density of the a-SiO_2 is lower than that of the quartz which exhibits a densification due to the reconstruction. Otherwise, the density in the core of the drop is ρ_0 = 0.049 atom/Å^3, a value identical to that observed on the quartz.
Reducing the temperature favors layering along the z axis (see Fig. <ref>). It is however much less pronounced than on quartz. The main difference is that here the layering has a uniform amplitude from the first layer in contact with the substrate up to the center of the NP, while on quartz it was highly increasing in the vicinity of the surface. This suggests that in the case of the quartz support, the surface clearly imposes a strong ordering, while, on the a-SiO_2 support, the layering is essentially intrinsic to the metal structure although it is of course initiated and stabilized by the surface.
The best fit with the drop model gives the following parameters: h = 14.5 Å and R = 19.5 Å, giving an aspect ratio H/D = 0.87, and ρ_0 = 0.051 atom/Å^3, a value identical to that on the quartz at the same temperature.
§ CONCLUSION
The structure and chemical ordering of 3.4 nm AgPt NPs (1289 atoms) deposited on silica supports have been investigated by Monte Carlo simulations. The introduction of chemical exchanges between the Ag and Pt species allows convergence towards the equilibrium chemical ordering, even for NPs trapped in a metastable structure in terms of atomic positions. It is observed that silver preferentially migrates to the outer surface and to the interface with the substrate, preserving the same stable structure as for the free NP. Increasing the temperature to 700 K allows partial atomic mobility without melting the NP. One observes Ag adatoms on the external surface and close to the substrate, and a small diffusion of Pt atoms from the subsurface layer to the surface layer.
It is however observed that the Pt atoms at the periphery always remain slightly embedded in the outermost Ag layer, which is of importance for catalysis. Since the local structure can be strongly influenced by the surrounding atmosphere, further studies are necessary to quantify the catalytic efficiency of this system in real conditions. Systems exhibiting similar structures or chemical ordering illustrate this point <cit.>.
Higher temperatures have been considered, in the liquid state. The layer structure is expected to disappear, but the presence of the support is able to stabilize the first layers. This is particularly visible for the ordered quartz surface, but less pronounced for the disordered amorphous silica. The determination of the aspect ratio shows that the morphology of the supported drop follows the expected behaviour, with a lower aspect ratio on the more attractive quartz surface. Regarding chemical ordering, the external surface at the interface with the substrate is mostly composed of Ag species, with statistically few Pt atoms. Here again, examination of density profiles shows that the Pt atoms always remain slightly embedded in the outermost Ag layer.
§ DECLARATION OF INTERESTS
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGEMENTS
F. A. H. acknowledges a grant from the education and research ministry for her Ph.D. The authors would like to acknowledge support from the International Research Network - IRN “Nanoalloys” of CNRS.
|
http://arxiv.org/abs/2307.06816v1 | 20230710024453 | Data-driven Nonlinear Parametric Model Order Reduction Framework using Deep Hierarchical Variational Autoencoder | [
"SiHun Lee",
"Sangmin Lee",
"Kijoo Jang",
"Haeseong Cho",
"SangJoon Shin"
] | cs.LG | [
"cs.LG",
"physics.data-an",
"physics.flu-dyn"
] |
Data-driven Nonlinear Parametric Model Order Reduction Framework using Deep Hierarchical Variational Autoencoder
SiHun Lee^1, Sangmin Lee^1, Kijoo Jang^1, Haeseong Cho^2, SangJoon Shin^1,3
^1 Department of Aerospace Engineering, Seoul National University, Seoul, 08226, Republic of Korea
^2 Department of Aerospace Engineering, Jeonbuk National University, Jeonju, 54896, Republic of Korea
^3 Institute of Advanced Aerospace Technology, Seoul National University, Seoul, 08226, Republic of Korea
A data-driven parametric model order reduction (MOR) method using a deep artificial neural network is proposed. The present network, which is the least-squares hierarchical variational autoencoder (LSH-VAE), is capable of performing nonlinear MOR for the parametric interpolation of a nonlinear dynamic system with a significant number of degrees of freedom. LSH-VAE exploits two major changes to the existing networks: a hierarchical deep structure and a hybrid weighted, probabilistic loss function. The enhancements result in a significantly improved accuracy and stability compared against the conventional nonlinear MOR methods, autoencoder, and variational autoencoder. Upon LSH-VAE, a parametric MOR framework is presented based on the spherically linear interpolation of the latent manifold. The present framework is validated and evaluated on three nonlinear and multiphysics dynamic systems. First, the present framework is evaluated on the fluid-structure interaction benchmark problem to assess its efficiency and accuracy. Then, a highly nonlinear aeroelastic phenomenon, limit cycle oscillation, is analyzed. Finally, the present framework is applied to a three-dimensional fluid flow to demonstrate its capability of efficiently analyzing a significantly large number of degrees of freedom. The performance of LSH-VAE is emphasized by comparing its results against that of the widely used nonlinear MOR methods, convolutional autoencoder, and β-VAE. The present framework exhibits a significantly enhanced accuracy to the conventional methods while still exhibiting a large speed-up factor.
§ INTRODUCTION
Modern high-fidelity, nonlinear computational analyses are usually intensive in terms of computational time and memory. In particular, many multiphysics analyses adopt a partitioned method in which the solvers for each type of physics are executed separately. Such an approach also requires interpolation of data among different types of discretization and iterative computation within a single time step, demanding even more intensive computation. Consequently, model order reduction (MOR) has been suggested to alleviate the computational time and memory consumption. Two types of MOR frameworks exist: intrusive and non-intrusive. Intrusive MOR depends on the governing equation to construct the reduced bases. Galerkin projection is one of the most widely used approaches, which projects an ensemble of full-order model (FOM) results onto the governing equation <cit.>. However, a parametric analysis may become extremely challenging when the algorithm is not explicitly established, as it manipulates the governing equation directly <cit.>. Instead, a completely data-driven approach, non-intrusive MOR (NIMOR), may be considered. NIMOR aims to discover the embedded patterns in the FOM dataset and rescale them to a much smaller dimensionality. Unlike intrusive MOR, NIMOR is independent of the governing equation, making it extremely versatile.
Among MOR methods, linear subspace MOR (LS-MOR) has been widely considered as it is mathematically rigorous and efficient. LS-MOR has been successfully employed in fluid dynamics, flow control, structural dynamics, aeroelasticity, and fluid-structure interaction (FSI) <cit.>. However, LS-MOR may require an excessive number of subspaces to accurately represent a nonlinear, complex FOM. For example, in complex turbulent fluid flows, proper orthogonal decomposition (POD) extracts its modes with respect to the energy ratio, and details are filtered out <cit.>. Those details are usually excluded because they contain very little energy and the corresponding coefficients are quite random. LS-MOR methods are generally known to be less effective on advection-dominated, sharp-gradient, multiphysics systems, and especially on systems with slowly decaying Kolmogorov n-width <cit.>.
Recent rapid developments in the field of machine learning have enabled neural networks to be used for MOR. Specifically, the autoencoder has become a viable nonlinear MOR method; a shallow, well-trained autoencoder with a linear activation function is known to behave similarly to POD <cit.>. Instead of linear activation functions, many autoencoders adopt nonlinear activation functions, using them to generate nonlinear subspaces <cit.>. Such autoencoder-based methods have been widely implemented to reduce the dimensionality of various engineering problems, including fluid dynamics, convection problems, and structural dynamics <cit.>. However, the performance of an autoencoder as a generative ANN is known to be quite limited <cit.>. The deterministic aspect of its loss function, which is designed only to reconstruct the input, limits the ability of autoencoders to generate diverse outputs. Attempts to enhance the generative capability have led to the development of the variational autoencoder (VAE) and the generative adversarial network (GAN) <cit.>. These methods implement probabilistic loss functions that construct a dense and smooth latent space. Between the two alternatives, VAE is selected for use in this study owing to its stable training property <cit.>. VAE has been widely studied for use in the field of computer vision, but it has also been used to interpolate dynamic systems <cit.>.
VAE in its simplest form, vanilla VAE, is capable of generating data of significantly superior quality compared with the autoencoder. However, VAE commonly suffers from a phenomenon known as posterior collapse, where the generative model learns to ignore a subset of the latent variables <cit.>. The posterior collapse was easily alleviated by applying a technique known as Kullback-Leibler divergence (KL divergence) annealing, or β-VAE <cit.>. Another problem with vanilla VAE is that it is restricted to a shallow network, limiting its expressiveness. Vanilla VAE tends to perform worse as the network becomes deeper due to the loss of long-range correlation and its performance was found to be insufficient when complex data were processed <cit.>. Deep hierarchical VAEs, such as the LVAE, IAF-VAE, and NVAE, have been developed to enhance the performance of vanilla VAE <cit.>. These VAEs mainly adopt a type of residual cells that connect the encoder and decoder directly without passing through the latent space. Similar to U-nets, the skip connections allow bidirectional information sharing between the encoder and decoder, thereby preventing the loss of long-range correlation.
Recently, various types of VAEs have been adopted as nonlinear MOR methods owing to their superior generative capability compared to conventional autoencoders. VAEs have been applied to flow problems <cit.>, transonic flow <cit.>, numerics <cit.>, biology <cit.>, brain MRI images <cit.>, and anomaly detection <cit.>. While earlier studies adopted the simplest convolutional VAE, many recent studies consider β-VAE due to its near-orthogonal latent space <cit.>. Previous studies show that β-VAE may successfully construct nonlinear subspaces, but the majority of networks used in those studies were quite shallow. The use of shallow networks may result in insufficient expressiveness if the input data consist of a large number of DOFs and exhibit a complex response.
Instead, a deep hierarchical VAE, the least-squares hierarchical VAE (LSH-VAE), is proposed for nonlinear MOR of dynamic systems. LSH-VAE is a very deep hierarchical network that incorporates a modified loss function similar to that of β-VAE. The deep hierarchical structure enables a very deep, stable network (>100 layers) with highly expressive and accurate interpolation results. The modified loss function consists of a hybrid weighted least-squares and Kullback-Leibler divergence function that alleviates posterior collapse and enhances the orthogonality of the latent space <cit.>. The least-squares error in the loss function is also known to enhance the accuracy when used on continuous datasets <cit.>.
There has been no report of a very deep VAE (>100 layers) implemented for nonlinear MOR. The present framework is validated by solving the following three problems. First, a standard two-dimensional FSI benchmark problem developed by Turek and Hron will be exemplified <cit.>. Then, the highly nonlinear aeroelastic phenomenon of limit cycle oscillation (LCO) will be considered to examine the accuracy of the proposed framework under nonlinearity. Finally, the flow surrounding a three-dimensional cylinder will be analyzed to establish the capability of the current framework to accommodate a system with a significantly large number of degrees of freedom. The computational efficiency and accuracy will be assessed, and a comparison against existing nonlinear MOR methods will be presented.
§ MACHINE-LEARNING METHODS
This section provides the theoretical background of the machine learning methods. Based on the existing convolutional autoencoder and β-VAE, the formulation of the proposed network, LSH-VAE is presented.
§.§ Convolutional autoencoder (CAE)
A convolutional autoencoder (CAE) is an ANN that is trained to output data that are similar to its input. The typical architecture of the CAE, shown in Fig. <ref>, enables the encoder to compress the input data into a smaller latent dimensionality. The decoder then expands the latent code back to its original dimensionality. By training both the encoder and decoder, CAE learns to extract important features of the input dataset. The latent codes contain the embedded features recognized by the CAE that can be used as the reduced bases in the ROM.
The interpolation of data using CAE is conducted by interpolating the latent codes. The interpolated latent code contains the interpolated features, which leads to the interpolation of the input data.
The loss function of CAE is quite intuitive. CAE takes the input, x, and passes it through the encoder, Φ, to obtain the latent vector, z. Then, the decoder, Ψ, receives the latent vector and generates the output, y. The output, y, is compared against the input, x, using the mean squared error (MSE) loss function. In this way, the CAE is trained such that the difference between y and x is reduced, aiming for a more accurate reconstruction of the input. The equations for the encoder and decoder network are presented in Eq. (<ref>), where the loss function is shown in Eq. (<ref>).
z=Φ(x), y = Ψ(z)
L = MSE(Ψ(Φ(x))-x)
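As an illustration of this encoder-decoder-MSE structure, a minimal 1D convolutional autoencoder can be sketched as follows (written in PyTorch; the layer sizes, kernel widths, and tensor shapes are arbitrary assumptions rather than the configuration used in this work):

import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, n_dof, latent_channels=8):
        super().__init__()
        self.encoder = nn.Sequential(            # Phi: x -> z
            nn.Conv1d(n_dof, 64, kernel_size=3, padding=1), nn.ELU(),
            nn.Conv1d(64, latent_channels, kernel_size=3, padding=1),
        )
        self.decoder = nn.Sequential(            # Psi: z -> y
            nn.Conv1d(latent_channels, 64, kernel_size=3, padding=1), nn.ELU(),
            nn.Conv1d(64, n_dof, kernel_size=3, padding=1),
        )

    def forward(self, x):                        # x: (batch, n_dof, n_t)
        z = self.encoder(x)
        return self.decoder(z), z

model = ConvAutoencoder(n_dof=32)
x = torch.randn(4, 32, 100)                      # synthetic snapshot sequences
y, z = model(x)
loss = nn.functional.mse_loss(y, x)              # L = MSE(Psi(Phi(x)) - x)
loss.backward()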
The simplest form of CAE, known as the vanilla CAE, has been shown to produce unsatisfactory interpolation outcomes <cit.>. Hence, derivatives thereof such as VAE and GAN may be utilized to enhance the performance.
§.§ Variational autoencoder (VAE)
VAE and autoencoder share a similar architecture. The main difference is that the encoder of VAE produces probabilistic latent variables instead of deterministic latent codes. The probabilistic encoder models the latent feature probability distribution. The resultant latent space is continuous and smooth, enabling higher-quality generated outcomes. The encoder of VAE extracts the mean, μ, and the standard deviation, σ, which are used to generate the latent code, z. A typical VAE structure can be observed in Figure <ref>.
VAE aims to efficiently infer the intractable posterior distribution, p(z | x). This is performed by adopting an approximate posterior, q(z | x), because determining the true posterior is quite challenging. Here, the encoder or inference network is represented by q(z | x), whereas the decoder network is denoted as p(x | z).
The Kullback-Leibler (KL) divergence is the expectation of the logarithmic difference between two distributions and is always non-negative. The KL divergence between the approximate and the true posterior is written as Eq. (<ref>).
D_KL(q(z | x) || p(z | x))=-∫ q(z | x)log(p(z | x)/q(z | x))dz≥ 0
Applying Bayes' theorem to Eq. (<ref>) yields Eq. (<ref>).
D_KL(q(z | x) || p(z | x)) = -∫ q(z | x) log(p(x | z)p(z)/q(z | x)p(x)) dz
= -∫ q(z | x) log(p(x | z)p(z)/q(z | x)) dz + log p(x)≥ 0
Equation (<ref>) can be rewritten as Eq. (<ref>). Applying the rules of logarithm to Eq. (<ref>) will yield Eq. (<ref>).
log p(x) ≥∫ q(z | x)logp(x | z)p(z)/q(z | x)dz
log p(x)
≥∫ q(z | x) log(p(z)/q(z | x))dz + ∫ q(z | x)log p(x | z) dz
≥𝔼_q(z | x)[log p(x | z)]-D_KL(q(z | x) || p(z))
The right hand side of Eq. (<ref>) is the evidence lower bound (ELBO). VAE aims to maximize ELBO which maximizes the logarithmic probability of the data by proxy. Following the convention of minimizing the loss function, the right hand side of Eq. (<ref>) is converted as Eq. (<ref>), which is the goal of VAE.
min[ -𝔼_q(z | x)[log p(x | z)]+ D_KL(q(z | x) || p(z)) ]
The goal of VAE is to minimize both the reconstruction and the KL divergence loss. In Eq. (<ref>), the first term corresponds to the reconstruction loss and the second term corresponds to the KL divergence loss. Minimizing the KL divergence loss enforces the approximate posterior produced by the encoder to become similar to the true posterior, i.e., the probabilistic inverse of the decoder.
The loss function in Eq. (<ref>) has to be differentiable in order to be minimized during training. Usually, the KL divergence term can be integrated analytically <cit.>; however, the reconstruction loss is not directly differentiable because of the sampling of z. To make the reconstruction loss differentiable, the reparameterization technique is adopted <cit.>.
First, Gaussian sampled random noise, ε will be introduced. The latent code z, is formulated as shown in Eq. (<ref>), introducing the mean and standard deviation to the equation.
z=μ+(σ×ε), ε∼ N(0,1)
Since the latent code is formulated as Eq. (<ref>), KL divergence in Eq. (<ref>) is rewritten as Eq. (<ref>), assuming the posterior and prior follow the Gaussian distribution.
D_KL(q(z| x)|| p(z)) = 1/2∑(σ^2+μ^2-(log(σ^2)+1))
The latent code with the reparameterization technique enforces the latent space to be stochastically determined. The reparameterization enables the reconstruction loss to be differentiable by Monte Carlo method. For further details and step-by-step derivation of the VAE loss function, reference can be found in works by Kingma and Odaibo <cit.>.
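A minimal sketch of the two ingredients above, the reparameterized latent code and the closed-form Gaussian KL divergence, is given below in PyTorch; the tensor names and shapes are illustrative assumptions.

import torch

def reparameterize(mu, log_var):
    """z = mu + sigma * eps, eps ~ N(0, I): sampling stays differentiable in mu and sigma."""
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + std * eps

def kl_divergence(mu, log_var):
    """Analytic KL(q(z|x) || N(0, I)) = 1/2 * sum(sigma^2 + mu^2 - log sigma^2 - 1)."""
    return 0.5 * torch.sum(torch.exp(log_var) + mu**2 - log_var - 1.0, dim=-1)

mu = torch.zeros(4, 16, requires_grad=True)       # (batch, latent dimension)
log_var = torch.zeros(4, 16, requires_grad=True)
z = reparameterize(mu, log_var)
kl = kl_divergence(mu, log_var).mean()
kl.backward()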
§.§ Least-squares hierarchical variational autoencoder (LSH-VAE)
The conventional vanilla VAE is limited to shallow networks due to vanishing gradients and the loss of long-range correlation. However, shallow networks may lack expressiveness on complex systems with a significant number of DOFs. In this study, a deep VAE with a hierarchical structure is proposed to enhance the performance, specifically to alleviate the loss of long-range correlation and to stabilize the training of a very deep network. The hierarchical structure creates direct passages between the earlier layers of the encoder and the latter layers of the decoder, circumventing the middle layers. These direct passages enable bidirectional information sharing between the encoder and decoder networks. The bidirectional information enables the earlier layers of the VAE to greatly affect the outcome, thus alleviating the loss of long-range correlation. The diagram in Fig. <ref> shows the hierarchical structure of LSH-VAE.
In the hierarchical VAE, the latent variables are divided into L groups. By the divided latent dimension, the prior and posterior distributions are rewritten as in Eq. (<ref>) and Eq. (<ref>).
p(z)=p(z_L) ∏_i=1^L-1 p(z_i| z_i+1)
q(z | x)=q(z_1| x) ∏_i=2^L q(z_i| z_i-1)
p(z_i| z_i+1)=𝒩(z_i|μ(z_i+1), σ^2(z_i+1))
p(z_L)=𝒩(z_L| 0, I)
q(z_i| z_i-1)=𝒩(z_i|μ(z_i-1), σ^2(z_i-1))
q(z_1| x)=𝒩(z_1|μ(x), σ^2(x))
The loss function for hierarchical VAE is shown in Eq. (<ref>), which is obtained by computing the KL divergence separately for each group. By breaking down the KL divergence into groups, bidirectional information flows are created between the inference and generative network. Detailed descriptions about the deep hierarchical structure of VAE can be found in <cit.>.
min [ -𝔼_q(z | x)[log p(x | z)] + D_KL(q(z | x) || p(z))
+ ∑_i=1^L-1𝔼_q(z_<i| x)[ D_KL(q(z_i| z_<i, x) || p(z_i| z_>i)) ] ]
The present LSH-VAE adopts hierarchical structures motivated by LVAE, IAF-VAE, and NVAE <cit.>. The latent codes in the hierarchical VAE are formed by both bottom-up and top-down information. The latent codes of each of the groups output shared information (from the encoder and decoder) to the next decoder block. Because the information of the encoder and decoder network is shared via latent code, the network delivers higher performance.
On top of the hierarchical structure, LSH-VAE implements a hybrid weighted loss function. The loss function consists of the mean squared error (MSE) and the KL divergence, instead of the conventional binary cross-entropy. The use of MSE as a reconstruction error has been shown to be successful for continuous datasets <cit.>. The loss function of LSH-VAE is shown in Eq. (<ref>), where the coefficients α and β denote the weights of the MSE and KL divergence, respectively.
min_ϕ, θ [ α MSE(x, x̃) + β D_KL(q(z | x) || p(z))
+ ∑_i=1^L-1𝔼_q(z_<i| x)[ β D_KL(q(z_i| z_<i, x) || p(z_i| z_>i)) ] ]
Usually, the weights α and β are set to be α / β_target≈ 10^6. During the training, α is a fixed value whereas β is a variable that varies with respect to the epochs. The variable β is implemented to prevent posterior collapse in which some latent variables become inactive. This method is known as KL-annealing or β-VAE, where β is formulated as Eq. (<ref>) <cit.>.
β =
1× 10^-4 β_target,   if epoch < 0.3 n_epochs
β_target· (epoch/n_epochs),   if epoch > 0.3 n_epochs
At the start of training, β is assigned a low value so that LSH-VAE behaves as an autoencoder. During the first few epochs, the input data are mapped onto the latent space. Beyond a prescribed number of epochs, β is gradually ramped up so that LSH-VAE behaves as a VAE, generating a smooth latent space.
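The following sketch illustrates the hybrid weighted loss and the β schedule described above; the per-group KL terms are assumed to be returned by the encoder/decoder pass (for instance with the kl_divergence helper sketched earlier), and the default values of α and β_target are placeholders chosen only to respect the α/β_target ≈ 10^6 guideline.

import torch

def beta_schedule(epoch, n_epochs, beta_target):
    """Keep beta small for the first 30% of training, then ramp it up linearly."""
    if epoch < 0.3 * n_epochs:
        return 1e-4 * beta_target
    return beta_target * epoch / n_epochs

def lsh_vae_loss(x, x_tilde, kl_groups, epoch, n_epochs, alpha=1.0, beta_target=1e-6):
    """alpha * MSE reconstruction term plus beta-weighted sum of per-group KL terms."""
    beta = beta_schedule(epoch, n_epochs, beta_target)
    reconstruction = torch.mean((x_tilde - x) ** 2)
    kl = sum(k.mean() for k in kl_groups)
    return alpha * reconstruction + beta * kl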
§ PRESENT FRAMEWORK
§.§ Architecture of the least-squares hierarchical VAE (LSH-VAE)
LSH-VAE adopts a one-dimensional (1D) convolutional layer to accommodate the transient response of the unstructured grids. The use of a 1D convolutional layer enables the temporal continuity of the physical variables to be considered.
The encoder and decoder of the LSH-VAE consist of the blocks discussed in the previous section, where a detailed schematic of these blocks is shown in Fig. <ref>.
Being a deep neural network (DNN), the LSH-VAE encoder and decoder blocks are composed of stacks of multiple layers. These layers consist of the following: spectral normalization (SN), 1D convolution, dense, exponential linear unit (ELU), Swish, and batch normalization (BN) layers. The Swish and ELU nonlinear activation functions are chosen because their continuous derivatives enhance the stability of a DNN <cit.>. LSH-VAE implements a normalization-activation sequence instead of the conventional activation-normalization sequence; such a sequence is empirically known to deliver good performance when used before the convolutional computation <cit.>. The output of the encoder block is branched in three ways. The first branch connects to the input of the next block, and the remaining two branches form μ and σ. The encoder latent code is formulated by reparameterizing μ and σ.
The reparameterized latent code and the ELU layer carry the bottom-up information transfer, shown in green in Fig. <ref>.
In the current configuration, the decoder network is significantly deeper and more complex than the encoder network. The deep decoder network enables an expressive output when accompanied by a system with many DOFs. The decoder network receives two inputs: top-down information from the predecessor decoder block and encoder-decoder shared information from the latent code. Through a series of layers, the decoder outputs top-down information, shown in blue. The decoder block generates the decoder latent code and input for the next block. The encoder latent code and the decoder latent code are added to generate shared latent code, z^i. The shared latent code contains both top-down and bottom-up information, enabling bidirectional information sharing.
§.§ Preprocessing dataset
Acquiring many FOM samples may be quite cumbersome. In particular, many-queried FOM computations are extremely time-consuming if the FOM is highly nonlinear, includes multiphysics, and involves a significant number of DOFs. Acquiring those FOM data through experiments or simulations is considered prohibitive for computational and financial reasons. Instead, data augmentation is considered, to sample sparsely and expand the amount of training data. A larger amount of training data improves the generalization of the ANN and thus enhances the accuracy. Similar to the data augmentation typically performed on images, the pre-acquired FOM results are processed using the following three methods. First, the temporal data are resampled by shortening the time step, i.e., frequency elongation. Then, the training data are augmented by changing the amplitude and by adding a random offset, both within a bound of ±30%, for every epoch. Training the ANN using the augmented data ensures that the ANN is effectively trained against a very large dataset, resulting in a high-performance network.
§.§ LSH-VAE training and interpolation
The current framework performs MOR directly on the FOM results. LSH-VAE employs 1D convolutional layers, which require a three-dimensional input of the format (batch, sequence, channel).
In the current configuration, the temporal continuity of the FOM results is considered in the convolutional dimension. The resultant input composition of LSH-VAE becomes (batch, N_t, N_DOF), where N_t denotes the number of time steps and N_DOF denotes the number of DOFs in the dynamic system. LSH-VAE receives such input and compresses it into latent vectors via the encoder. The dimensionality change throughout LSH-VAE is expressed in Eq. <ref>, where N_i represents the latent dimension in the i-th latent group. The total latent dimension, ∑ N_i is much smaller than the FOM dimension, achieving MOR.
(batch, N_t, N_DOF) —encoder→ (batch, ∑ N_i) —decoder→ (batch, N_t, N_DOF)
The training algorithm for LSH-VAE is shown in Algorithm <ref>. The algorithm starts by normalizing the physical variables of interest, v. v is normalized to the range of [-0.7, 0.7] for each DOF by the normalizing function, N(). The normalized variable is then augmented by resampling for N_A instances. Then, the training dataset, x_train is constructed by concatenating the original normalized variable with the augmented ones. The training dataset of the network becomes, x_train = [x,R(x)_1,R(x)_2, ... ,R(x)_N_A], where R(x)_n denotes the resampled normalized variable of interest.
The training dataset is further augmented for amplitude and offset. The amplitude and offset augmentation is performed by using random values for every epoch. The network receives a different input in every epoch, enabling the network to be trained against a very large dataset. After the data augmentation is completed, the encoder and the decoder networks are trained. After the decoder is trained, the loss function can be obtained by Eq. <ref>. The training of LSH-VAE is optimized by the Adamax optimizer, which has shown good performance compared with the conventional Adam and SGD optimizers.
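The preprocessing steps of the training algorithm can be sketched as follows with numpy; the function names, resampling factors, and perturbation bounds follow the description above but are otherwise illustrative assumptions.

import numpy as np

def normalize(v):
    """Scale each DOF of v (n_t, n_dof) to the range [-0.7, 0.7]."""
    vmin, vmax = v.min(axis=0), v.max(axis=0)
    return 1.4 * (v - vmin) / (vmax - vmin + 1e-12) - 0.7

def resample(x, factor):
    """Shorten the time step by 'factor' and interpolate back onto n_t samples."""
    n_t, n_dof = x.shape
    t_new = np.linspace(0.0, (n_t - 1) * factor, n_t)
    return np.stack([np.interp(t_new, np.arange(n_t), x[:, j]) for j in range(n_dof)], axis=1)

def perturb(x, rng):
    """Random amplitude scaling and offset within +/-30%, redrawn every epoch."""
    amplitude = 1.0 + rng.uniform(-0.3, 0.3)
    offset = rng.uniform(-0.3, 0.3) * np.std(x)
    return amplitude * x + offset

rng = np.random.default_rng(0)
x = normalize(np.random.rand(200, 16))                            # synthetic FOM snapshot series
x_train = np.stack([x] + [resample(x, f) for f in (0.8, 0.9)])    # concatenated training set
x_epoch = perturb(x_train, rng)                                   # re-augmented at each epoch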
Generative ANNs usually require the latent vectors to be sought by optimization, owing to the probabilistic formulation used to parameterize the latent vector. However, we empirically found that sufficient epochs and a small number of parameters obviate the need for latent searching. In this study, rather than attempting latent searching, the latent vectors are computed directly as the mean values output by the encoder network.
Upon acquiring the latent vectors, slerp interpolation is performed to obtain the targeted latent vector. The latent space created by VAEs takes the form of a well-structured, multi-dimensional hypersphere, which enables complex operations by vector arithmetic <cit.>. This is possible since the reparameterization trick introduces a Gaussian random variable, which contributes to the vector length and angle in the latent hypersphere. The slerp interpolation shown in Algorithm <ref> interpolates not only the rotation angle of the vectors but also their arc length. Such slerp interpolation enables the latent vectors to be interpolated following the path of the complex latent manifold. The use of slerp interpolation has been widely accepted for performing latent interpolation <cit.>.
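A minimal sketch of the spherical linear interpolation step is given below; it interpolates both the angle and the norm of the two latent vectors, with t ∈ [0, 1] the blending weight obtained from the target parameter (treating the vector norm as the interpolated arc length is an assumption of this sketch).

import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between latent vectors z0 and z1."""
    n0, n1 = np.linalg.norm(z0), np.linalg.norm(z1)
    u0, u1 = z0 / n0, z1 / n1
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))   # angle between the vectors
    if np.isclose(np.sin(omega), 0.0):                      # nearly parallel: fall back to lerp
        direction = (1.0 - t) * u0 + t * u1
    else:
        direction = (np.sin((1.0 - t) * omega) * u0 + np.sin(t * omega) * u1) / np.sin(omega)
    norm = (1.0 - t) * n0 + t * n1                          # interpolate the vector norm as well
    return norm * direction

z_target = slerp(np.random.randn(32), np.random.randn(32), t=0.5)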
§ NUMERICAL RESULTS
This section presents the numerical results obtained by the proposed framework. First, the framework is applied to solve an FSI benchmark problem previously developed by Turek and Hron <cit.>. The accuracy of the current method is evaluated and compared against that obtained by the conventional nonlinear MOR method, CAE. Then, the proposed framework is examined on a wing section that undergoes limit cycle oscillation (LCO). The LCO analysis is performed to evaluate the accuracy of the proposed framework on a nonlinear multiphysics phenomenon. Last, the applicability of LSH-VAE to a system with many DOFs is demonstrated by analyzing a three-dimensional fluid flow.
The numerical results presented in this paper are obtained by intentionally sampling a small number of initial FOM results. Sparse sampling is performed because an ANN that merely replicates its training data often yields sufficient accuracy when the sampling is dense. In addition, sparse sampling is adopted because dense, iterative computations on a nonlinear system with many DOFs are rather unrealistic.
For all of the results, the same LSH-VAE network is used for each variable of interest. The hyperparameters used for the training are shown in Table <ref>. In Table <ref>, the first value for the latent dimension criterion denotes the latent dimension in which the interpolation is performed; the latter value denotes the latent dimension used for information sharing between the encoder and decoder networks. The LSH-VAE used for the following numerical results consists of 7 encoder and decoder blocks, with a total of 107 layers. While detailed optimization of the hyperparameters would yield better accuracy, such a procedure is not performed, in order to emphasize the generalizability of the framework. However, different batch sizes are used depending on the number of DOFs, as the batch size is limited by the GPU VRAM.
For all of the results presented in this paper, computations are carried out on an AMD 3950X CPU to obtain the FOM results. The ANNs are trained using an NVIDIA GeForce RTX 3090 GPU.
§.§ Turek-Hron FSI benchmark
§.§.§ Description of the analysis
The widely accepted FSI benchmark developed by Turek and Hron is described in this section <cit.>. The benchmark problem consists of a rigid cylinder with a diameter of 0.1 m and a highly flexible tail. The fluid flows from the inlet to the outlet, with laminar separation occurring behind the cylinder. The von Kármán vortex street created by the flow separation excites the tail, which exhibits a large deflection. A hyperbolic inlet profile is used to account for the no-slip wall boundary condition at the upper and lower boundaries of the computational domain. A detailed schematic regarding the analysis is shown in Fig. <ref>.
The current framework requires a few parametric initial FOM samples to extract the embedded patterns. For the Turek-Hron FSI benchmark problem, seven initial FOM results are collected. The inflow speed is selected as the parameter, and speeds ranging from 0.7 m/s to 1.3 m/s in 0.1 m/s intervals are sampled. The FOM samples are analyzed using Navier-Stokes computational fluid dynamics (CFD) and finite element method (FEM) two-way FSI analysis provided in the commercial software, ANSYS. The flow field is discretized by 29,788 CFD nodes and the flexible body is discretized by 954 FEM nodes.
The ensemble of FOM results is constructed by collecting 2 s of the fully converged response in intervals of 0.01 s. The pre-acquired FOM ensemble is then subjected to MOR and interpolation by the LSH-VAE configured as shown in Table <ref>. After the training of LSH-VAE is completed, the latent code is interpolated. In the present case, the target parameter is selected as the unseen inflow speed of 0.95 m/s. The latent code corresponding to 0.95 m/s is acquired by the slerp interpolation shown in Algorithm <ref>. The interpolated latent code is then decoded by the decoder network, from which the interpolated variables are generated.
§.§.§ Accuracy and efficiency
The accuracy of the current framework is assessed by comparing the results of the ROM against those obtained with the FOM. Five physical variables, dX, dY, u, v, and p, are considered for interpolation in this case. Among them, the first two variables denote the grid deformation in the x- and y-directions. Using the interpolated variables, the interpolated FSI field is constructed. The interpolated FSI field and the FOM result are shown in Fig. <ref>.
Evaluation of the results shown in Fig. <ref> verifies that the proposed framework is reasonably accurate. Subsequently, the accuracy of LSH-VAE is compared against that of CAE and β-VAE. For comparison, the CAE and β-VAE networks are constructed using the same hyperparameters that were used for LSH-VAE. The comparison between CAE, β-VAE, and LSH-VAE is performed by measuring the extent to which their results differ from those of the FOM. The discrepancy contours of the various networks are shown in Fig. <ref>. For each variable, the minimum and maximum of the contour scale are matched across the networks.
Overall, LSH-VAE exhibits the smallest discrepancy while β-VAE performs the worst. Interestingly, the regions that exhibit a relatively larger discrepancy are found to be quite similar for all of the networks. This is caused by the finite number of latent dimensions considered in the generative networks: small details of the FOM are neglected in the finite latent representation, which leads to discrepancies in similar areas. Another point to note is that the pressure contours of CAE and β-VAE show a considerably larger discrepancy compared with that of LSH-VAE. This is caused by the large variation between the maximum and minimum values of the pressure; the inability of CAE and β-VAE to generate sufficiently expressive output means that small details are overwhelmed by these large variations.
Then, the efficiency of the proposed framework is assessed. The computational procedures for the proposed framework comprise four stages, and the computational time required for each stage is listed in Table <ref>. For the Turek-Hron FSI problem, each FOM query requires 109.0 h whereas the online stage consumes 0.11 h. The proposed framework therefore exhibits a speed-up factor of 990 for each unseen parametric estimation. The expected computational time in terms of the number of computations is shown in Fig. <ref>.
§.§ Limit cycle oscillations
§.§.§ Description of the analysis
Limit cycle oscillation (LCO) is a nonlinear periodic oscillation with limited amplitude on an aerodynamic surface. LCO of an aircraft is a highly nonlinear FSI phenomenon that is caused by nonlinearities in both the fluid and structure. Typical causes of LCO include flow separation, transonic shock, geometric nonlinearity, and nonlinear stiffness of the control surface. For an aircraft, LCO may result in structural fatigue in the wings, thus requiring high-fidelity analysis for safety.
During the design stage of an aircraft, iterative LCO analysis is performed to satisfy the vibration criterion. Such parametric LCO analysis is considered to be quite cumbersome and tedious as it is highly nonlinear and involves many DOFs. In this section, the proposed framework is used to conduct a simplified nonlinear parametric LCO analysis of a wing section.
The wing section considered in this analysis is derived from that reported by O'Neil et al. <cit.>. In that work, a two-dimensional wing section is constrained by pitch and heave springs, as shown in Fig. <ref>. The pitch and heave stiffnesses are nonlinear in their cubic terms, as expressed in Eq. <ref>. LCO is caused by the cubic stiffness in the structure, and LCO is observed at inflow speeds of 15.5 m/s to 50 m/s.
K_α = 2.57(α+500α^3)
K_h = 0.09(h+2860h^3)
The inflow speed is chosen as the parameter in this analysis. The initial FOM samples are collected by adjusting the inflow speed from 20 m/s to 45 m/s in increments of 5 m/s. The relevant flow field is discretized by 19,381 nodes and solved using the commercial Navier-Stokes solver, ANSYS. The initial FOM samples are obtained by collecting 2 s of the fully converged response in intervals of 0.01 s. The FOM ensemble is subjected to MOR and interpolation by LSH-VAE.
After LSH-VAE is trained, the latent code for the desired parameter is acquired via slerp interpolation. The target parameter is an unseen inflow speed of 32.5 m/s, and the corresponding latent code is interpolated using Algorithm <ref>. The interpolated latent code is then decoded by the decoder and the interpolated FSI field is generated.
§.§.§ Evaluation of accuracy and efficiency
The accuracy of LSH-VAE is assessed by comparing the ROM results against those produced by the FOM. In this case, the five physical variables discussed in the previous section are considered. The interpolated variables are used to generate the FSI field; the interpolated FSI field and the FOM result are shown in Fig. <ref>.
In Fig. <ref>, the interpolated FSI field constructed by LSH-VAE is found to be accurate. Then, the accuracy of LSH-VAE is compared against that of CAE and β-VAE. The discrepancy contours for LSH-VAE, CAE, and β-VAE are shown in Fig. <ref>. For each variable, the minimum and maximum of the contour scale are matched across the networks.
Similar to the Turek-Hron problem, LSH-VAE exhibits the smallest discrepancy. However, in this case, β-VAE performs better than CAE. For dX, all networks exhibit a similar discrepancy, as the wing section is constrained in the x-direction; only the pitching motion affects the deformation of the surrounding grid in the x-direction, resulting in a small variation. For dY, however, the behavior differs: the discrepancy is spread evenly as the wing heaves, and LSH-VAE shows a significantly reduced discrepancy. Another important point to note is that the discrepancy regarding the pressure is quite small. This is due to the stagnation point, which creates a concentrated high-pressure region.
The efficiency of the proposed framework is also assessed. The computational time required for each stage is summarized in Table <ref>. The offline FOM computation requires 280.1 h, including the six initial FOM sample computations. LSH-VAE training requires 3.52 h for the five variables of interest, resulting in a total offline stage of 283.6 h. In the online stage, FSI field reconstruction and saving to disk takes the most time, at 0.06 h. The present framework exhibits a speed-up factor of 660 for each unseen parametric estimation.
The expected computational time in terms of the unseen parametric queries is shown in Fig. <ref>.
§.§ Three-dimensional fluid flow
§.§.§ Description of the analysis
Finally, the fluid flow surrounding a simple stationary three-dimensional (3D) cylinder is analyzed. The analysis of the 3D fluid flow serves to demonstrate the use of the proposed framework on a system with a significant number of DOFs. A 3D cylinder with a diameter of 1 m is subjected to a uniform inflow, as shown in Fig. <ref>. Similar to the Turek-Hron FSI benchmark, a von Kármán vortex street forms behind the cylinder. For the CFD analysis, a cuboid computational domain of 20 m × 10 m × 10 m is discretized into 1,121,000 tetrahedral elements. The Reynolds number of the inflow is varied from 100 to 160 in intervals of 10.
The initial FOM samples are obtained using the ANSYS Navier-Stokes solver, and 2 s of FOM data are collected in intervals of 0.01 s. Then, LSH-VAE is trained against the FOM ensemble and interpolation is performed with respect to the parameter.
After LSH-VAE is trained, the latent code representing the targeted parameter is acquired. The target parameter is selected as an unseen inflow Reynolds number of Re = 125. The latent code corresponding to Re = 125 is acquired by the interpolation shown in Algorithm <ref>. The interpolated latent code is then decoded and the resultant interpolated flow field is generated.
§.§.§ Evaluation of the accuracy and efficiency
The accuracy of LSH-VAE is assessed by comparing the results of ROM with those obtained using FOM. In this case, four physical variables, u, v, w, and p are considered for the interpolation. Using the interpolated variables, the interpolated flow field is generated. The interpolated and original flow fields are displayed in Fig. <ref>.
The interpolated flow field constructed by LSH-VAE is found to be quite accurate. In particular, the velocity in the z-direction, w, is accurately interpolated even though w exhibits quite a complex response. As the primary physical variables are interpolated well, the relationship between the variables is inspected next. A comparison against CAE and β-VAE is not conducted in this case, as the large number of DOFs caused instability in those networks. Instead, the normalized Q-criterion is considered to assess whether the interpolated flow field preserves its vorticity. In Fig. <ref>, the normalized Q-criterion is obtained using the interpolated variables shown in Fig. <ref>. Figure <ref> shows the iso-surface generated based on the normalized Q-criterion; the iso-surface is colored by u-velocity and pressure for visualization.
The good agreement in terms of the Q-criterion indicates that LSH-VAE interpolates the direct variables sufficiently well such that the relationship between variables may be well preserved.
Lastly, the efficiency of the present framework is assessed. The computational time required for each stage is listed in Table <ref>. The offline FOM computation requires 193.7 h, including the seven initial FOM samples. LSH-VAE training requires 11.3 h, resulting in a total offline stage of 205.0 h. In the online stage, variable reconstruction and writing to disk takes the most time, at 2.02 h. The proposed framework exhibits a speed-up factor of 14 for each unseen parametric estimation.
The expected computational time in terms of queries is as shown in Fig. <ref>.
§ CONCLUSIONS
This paper proposes a nonlinear data-driven parametric MOR framework based on a neural network. The present framework adopts a novel neural network, LSH-VAE, to perform parametric MOR and interpolation. The present validation demonstrates that LSH-VAE is capable of the parametric interpolation of dynamic systems while significantly reducing the computational time. The following results are obtained in this study.
* A novel machine-learning method, LSH-VAE, is developed for nonlinear MOR and the parametric interpolation of nonlinear, dynamic systems.
* LSH-VAE is assessed on three nonlinear and multiphysics dynamic systems with many DOFs. The proposed framework is proven to be accurate and to significantly reduce the computational time.
* Compared against the existing nonlinear MOR methods, convolutional autoencoder and β-VAE, LSH-VAE demonstrates significantly higher accuracy.
The performance of LSH-VAE is assessed on three nonlinear dynamic systems: the FSI benchmark, LCO, and three-dimensional flow. For all of the systems, LSH-VAE is capable of constructing an accurate parametric ROM. In particular, LSH-VAE exhibits significantly enhanced accuracy compared to CAE and β-VAE. LSH-VAE is also found to be effective in that it not only interpolates the variables well, but also interpolates the vorticity, which is embedded in the patterns of the variables, with high accuracy. In addition to the accurate parametric MOR, LSH-VAE exhibits speed-up factors of 990, 660, and 14, respectively.
Such results are possible owing to the improvements in LSH-VAE. First, it adopts a hierarchical structure that enables a much deeper and more stable network. Second, it adopts a hybrid weighted loss function consisting of the mean-squared error and KL divergence; the use of the mean-squared error improves performance on continuous datasets while the hybrid weights reduce posterior collapse. Lastly, the use of slerp interpolation instead of linear interpolation in the latent space significantly enhances the interpolation quality by following the complex latent manifolds.
However, there still exist a few challenges to be dealt with. First, LSH-VAE may require a significant amount of video random access memory (VRAM) if it is applied to a system with an extensive number of DOFs. The excessive VRAM requirement stems from its deep structure: by adopting a deep structure, LSH-VAE is capable of generating expressive results at the cost of training an extensive number of learnable nodes. The excessive VRAM requirement necessitated limiting the batch size for the 3D fluid flow example. Yet, VRAM limitations may be alleviated by adopting parallel computing and utilizing many GPUs. Splitting the DOFs into several groups and merging them after interpolation may also be considered as a solution. Second, extrapolation is limited in the proposed framework. Accurate extrapolation would require dense sampling in the parametric space. However, the construction of a ROM with sufficiently dense sampling, accompanied by an effective latent manifold tracking method, would make reasonable extrapolation viable. Finally, the effectiveness of the proposed framework decreases as the FOM becomes simpler while an increasing number of DOFs is involved. An example of this tendency is the 3D fluid flow example, where the speed-up factor diminished to 14 compared to 990 and 660 in the previous cases.
In the future, the plan is to extend the evaluation of the proposed framework to various multiphysics problems, such as the analysis of heat-structure systems. Considering that the present framework is purely data-driven, LSH-VAE is expected to be usable in its current form. In addition, multi-parametric analysis coupled with sampling algorithms such as Latin hypercube sampling will be attempted by adopting conditional tokens in the latent space.
Acknowledgments
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (2023R1A2C1007352).
§ DECLARATIONS
The authors declare that they have no conflict of interest.
|
http://arxiv.org/abs/2307.05708v1 | 20230711181915 | Bayesian inference on the order of stationary vector autoregressions | [
"Rachel L. Binks",
"Sarah E. Heaps",
"Mariella Panagiotopoulou",
"Yujiang Wang",
"Darren J. Wilkinson"
] | stat.ME | [
"stat.ME",
"62F15, 62M10"
] |
All vector autoregressive models have an associated order p; conditional on observations at the preceding p time points, the variable at time t is conditionally independent of all the earlier history of the process. Learning the order of the model is therefore vital for its characterisation and subsequent use in forecasting. It is common to assume that a vector autoregression is stationary. This prevents the predictive variance of the process from increasing without bound as the forecast horizon increases and facilitates various interpretations of the relationships between variables. A vector autoregression is stable if and only if the roots of its characteristic equation lie outside the unit circle, which constrains the autoregressive coefficient matrices to lie in the stationary region. Unfortunately, the geometry of the stationary region can be very complicated, and specification of a prior distribution over this region is therefore difficult. In this work, the autoregressive coefficients are mapped to a set of transformed partial autocorrelation matrices which are unconstrained, allowing for straightforward prior specification, routine computational inference, and meaningful interpretation of the magnitude of the elements in the matrix. The multiplicative gamma process is used to build a prior distribution for the unconstrained matrices, which encourages increasing shrinkage of the partial autocorrelation parameters as the lag increases. Identifying the lag beyond which the partial autocorrelations become equal to zero then determines the order of the process. Posterior inference is performed using Hamiltonian Monte Carlo via the probabilistic programming language Stan. A truncation criterion is used to determine whether a partial autocorrelation matrix has been effectively shrunk to zero. The value of the truncation threshold is motivated by classical theory on the sampling distribution of the partial autocorrelation function. The work is applied in a simulation study to investigate the agreement between the posterior distribution for the order of the process and its known value, with promising results. The model and inferential procedures are then applied to neural activity data in order to investigate ultradian rhythms in the brain.
§ INTRODUCTION
Vector autoregressive (VAR) processes are widely used to model multivariate time-series data in a variety of fields including neuroscience <cit.>, bioinformatics <cit.>, macroeconomics <cit.>, and energy economics <cit.>. In an autoregression of order p, the random variable at time t is conditionally independent of its values at lags p+1, p+2, … given observations at the preceding p time points. Indeed the random variable at time t can be expressed as a noisy linear combination of these p values. The order of the autoregression is therefore intrinsic to the characterisation of the joint model for the process. However, its value is typically not known a priori.
A common assumption when working with Gaussian time-series data is that of stationarity, which posits that the means, variances and covariances of the process do not change over time. Since the overall level of many time-series exhibits periodic or systematic variation due to seasonality or time-trends, stationarity is often implausible as an assumption when modelling the raw data. However, stationary vector autoregressions frequently form the core building block of more sophisticated models, for example for differenced data in integrated models, for innovations from a time-varying mean in a time-series regression model or simply as components in state space models which are thought to be mean-reverting. Stationarity can be enforced in vector autoregressions by restricting the autoregressive coefficient matrices to lie within a subset of the parameter space called the stationary region. From a practical perspective, this prevents the predictive variance of the process from growing without bound into the future. This is often keenly motivated, for instance in applications where the goal is long-term forecasting or when modelling the dynamics of a linear system which is assumed to be in its equilibrium distribution. Moreover, stationarity admits various interpretations of the relationships between variables through the infinite-order moving average representation of the process, for example through Granger causality networks or impulse response analysis.
In the context of univariate stationary autoregressions, Bayesian inference on the order of the process has been widely studied. Reparameterising the model in terms of its partial autocorrelations, <cit.> enforce stationarity by restricting the support of each partial autocorrelation parameter to the interval (-1, 1). A univariate stationary autoregression of order p has a non-zero partial autocorrelation at lag p and then zero partial autocorrelations at all higher lags. Therefore, by choosing a large (maximum) value for p and assigning each partial autocorrelation a spike-and-slab prior with a continuous distribution over (-1,1) and an atom of probability at zero, the authors allow inference on the order of the process. <cit.> use the same reparameterisation of the model to enforce stationarity but frame the problem of order determination as a model selection problem and use reversible jump Markov chain Monte Carlo to learn the order of the process. In <cit.>, stationarity is enforced through a different reparameterisation of the autoregression in terms of the reciprocal roots of its characteristic equation. Under this parameterisation, the process is stationary if and only if the reciprocal roots have moduli less than 1 and the order of the process is determined by the number of reciprocal roots with non-zero modulus. Priors are assigned to the real and complex reciprocal roots with atoms of probability at moduli 0 in each case, thereby allowing inference on the model order.
Due to the geometric complexities of the stationary region in the multivariate case, extensions of these ideas to learning the order of stationary vector autoregressions are rare. Indeed, efforts to enforce stationarity in Bayesian analyses of vector autoregressions have, until recently, been thwarted by the geometry of the stationary region; see <cit.> and <cit.> for the state-of-the-art. An exception is the work of <cit.> who extend <cit.> by considering a multivariate generalisation of the characteristic equation. However, because this is only available when the autoregressive matrices are diagonal, their work is limited to the class of diagonal vector autoregressive processes. Other recent work which addresses the problem of order determination in vector autoregressions includes <cit.> and <cit.> in which the autoregressive coefficient matrices are structured into a three-way tensor to facilitate structural dimension reduction. A low rank tensor decomposition is then applied, in which one margin determines the order of the model, with shrinkage process priors used to encourage parsimony. Clearly, however, these tensor models are not fully flexible if dimension reduction is imposed, being limited to subsets of the class of all vector autoregressions. Moreover, stationarity is not enforced.
In earlier work, <cit.> established a methodology for enforcing stationarity through the prior in vector autoregressions based on an unconstrained reparameterisation of the stationary model. This is constructed by mapping the original model parameters to a set of partial autocorrelation matrices and then applying a second transformation which simply scales the singular values of each of these partial autocorrelation matrices from [0,1) to the positive real line. The transformed partial autocorrelation matrices are interpretable and allow specification of a prior which is invariant with respect to the order of the components in the observation vector. Markov chain Monte Carlo (MCMC) methods for computational inference need only operate over a Euclidean space, making implementation routine, for example using Hamiltonian Monte Carlo through probabilistic programming software like Stan. However, a clear limitation with this approach is that inference is conditional on a specified value for the model order, with no account for the associated uncertainty. In this work, we provide an extension to the prior and associated procedures for computational inference which allows the order to be another unknown quantity in the model.
A Bayesian approach to quantifying uncertainty on the dimension of nested models is to fit an overparameterised model with purposefully more components than are required. By using a shrinkage prior, components that are shrunk enough to be deemed negligible in the likelihood can then be discarded. Consequently, inference on both the continuous model parameters and the model dimension are available from a single within-model MCMC sampler, without recourse to transdimensional MCMC; see, for instance, <cit.> and <cit.> in the context of mixture and factor models, respectively. Borrowing ideas from this literature, we construct a prior which increasingly shrinks the transformed partial autocorrelation matrices at higher lags towards zero and allows assessment of the lag beyond which the partial autocorrelations become essentially equal to zero. This determines the order of the process. The interpretability of the reparameterised model allows classical theory on the sampling distribution of the partial autocorrelation function to inform specification of the shrinkage prior and subsequent decision-making. To the best of our knowledge, this is the first work in the literature to address the problem of order determination from a Bayesian perspective in the general class of stationary vector autoregressions. We provide code for implementation of computational inference via Stan thereby facilitating use by a variety of practitioners across the spectrum of fields which rely on vector autoregressions for modelling and forecasting applications.
The remainder of the paper is structured as follows. In Section <ref> we discuss a reparameterisation of stationary vector autoregressive models in terms of a set of interpretable, unconstrained parameters. In Section <ref> we discuss the prior distribution assigned to the unknowns in our reparameterised model. Section <ref> considers posterior inference and the use of a truncation criterion to determine model order. In Section <ref> we apply our model and inferential procedures in a set of simulation experiments before considering an application to neural activity data in Section <ref>. Finally Section <ref> provides some concluding remarks.
§ STATIONARY VECTOR AUTOREGRESSIONS
§.§ Vector autoregressions
Without loss of generality, suppose that the m-variate process {y⃗_t } can be modelled as a zero-mean vector autoregression of order p, denoted VAR_m(p),
y⃗_t = ϕ_1 y⃗_t-1 + … + ϕ_p y⃗_t-p + ϵ⃗_t,
where the errors ϵ⃗_t form a sequence of uncorrelated, zero-mean multivariate normal random vectors, ϵ⃗_t ∼ N_m(0, Σ). The continuous model parameters therefore comprise the autoregressive coefficient matrices ϕ_i ∈ M_m× m(ℝ), i=1,…,p, which are denoted collectively as Φ∈ M_m× m(ℝ)^p, and the error variance matrix Σ∈𝒮_m^+, where M_m× n(V) and 𝒮_m^+ denote the space of m × n matrices with entries in V and the space of m × m symmetric, positive definite matrices, respectively.
ϵ⃗_t = (I_m - ϕ_1 B - … - ϕ_p B^p) y⃗_t = ϕ(B) y⃗_t,
in which I_m is the m × m identity matrix and ϕ(u) = (I_m - ϕ_1 u - … - ϕ_p u^p), u ∈ℂ, is referred to as the characteristic polynomial. A vector autoregression is stable if and only if all the roots of det{ϕ(u) } = 0 lie outside the unit circle. Since all stable processes are stationary, and unstable stationary processes are not generally of interest, this is often referred to as the stationarity condition for Φ and the subset of M_m× m(ℝ)^p over which the condition is satisfied is referred to as the stationary region, denoted 𝒞_p,m.
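This condition can be checked numerically by forming the companion matrix of the process and verifying that all of its eigenvalues have modulus less than one, which is equivalent to the roots of det{ϕ(u)} = 0 lying outside the unit circle. A minimal sketch, assuming the coefficient matrices are held in a list of NumPy arrays, is:

    import numpy as np

    def is_stable(phis):
        # Stability check for a VAR with coefficient matrices phis = [phi_1, ..., phi_p].
        p, m = len(phis), phis[0].shape[0]
        companion = np.zeros((m * p, m * p))
        companion[:m, :] = np.hstack(phis)            # first block row: [phi_1, ..., phi_p]
        companion[m:, :-m] = np.eye(m * (p - 1))      # identity blocks on the sub-diagonal
        return np.max(np.abs(np.linalg.eigvals(companion))) < 1.0

    # Example: a stable bivariate VAR(2).
    phi1 = np.array([[0.5, 0.1], [0.0, 0.4]])
    phi2 = np.array([[0.2, 0.0], [0.1, 0.1]])
    print(is_stable([phi1, phi2]))                    # True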
§.§ Reparameterisation over the stationary region
The geometry of the stationary region 𝒞_p,m becomes increasingly complex as either p or m increase. With no standard distributions over 𝒞_p,m, this complicates the process of specifying a prior that conveys meaningful information, for example, concerning the relative sizes of the autocorrelations at different lags. Moreover, it is difficult to design an efficient MCMC sampler which targets a distribution with support constrained to 𝒞_p,m. In recent work, <cit.> proposes a solution which addresses both issues, reparameterising the model over the stationary region in terms of a set of interpretable, unconstrained parameters. The reparameterisation involves two bijective mappings. First, the original model parameters (Σ, Φ) ∈𝒮_m^+ ×𝒞_p,m are mapped to a new parameter set {Σ, (P_1, …, P_p) }∈𝒮_m^+ ×𝒱^p in which 𝒱 denotes the subset of matrices in M_m× m(ℝ) whose singular values are less than one. The matrix P_s+1 is referred to as the (s+1)-th partial autocorrelation matrix. It is defined as the conditional cross-covariance matrix between y⃗_t+1 and y⃗_t-s given y⃗_t, …, y⃗_t-s+1 which has been standardised through
P_s+1 = Σ_s^-1/2 Cov( y⃗_t+1, y⃗_t-s | y⃗_t, …, y⃗_t-s+1) Σ^* -1/2_s,
s=0,…,p-1, in which Σ_s and Σ_s^* are the conditional variances
Σ_s = Var(y⃗_t+1 | y⃗_t, …, y⃗_t-s+1) and Σ_s^* = Var(y⃗_t-s | y⃗_t-s+1, …, y⃗_t)
and Σ^1/2 denotes the symmetric matrix-square root. Full details of the mapping and its inverse, which proceed by recursion, are described in <cit.>.
A second transformation then maps each partial autocorrelation matrix P∈𝒱 to an unconstrained square matrix A∈ M_m × m(ℝ) through
A = (I_m - PP^⊤)^-1/2P.
Denoting the singular value decomposition of P by P=U diag(r_1, …, r_m)V^⊤ in which the singular values satisfy 1 > r_1 ≥ r_2 ≥⋯≥ r_m ≥ 0, the corresponding factorisation of A is given by A=U diag(r̃_1, …, r̃_m)V^⊤ where r̃_i = r_i / (1 - r_i^2)^1/2≥ 0, i=1,…,m. Therefore the second transformation can be regarded as an orientation-preserving mapping which simply scales the singular values of P from [0,1) to the positive real line.
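A small numerical sketch of this second transformation and the accompanying singular-value scaling is given below; computing the symmetric matrix square root via the eigendecomposition is one of several valid choices and is an implementation assumption.

    import numpy as np

    def sym_inv_sqrt(s):
        # Inverse of the symmetric square root of a symmetric positive definite matrix.
        w, q = np.linalg.eigh(s)
        return q @ np.diag(1.0 / np.sqrt(w)) @ q.T

    def p_to_a(p):
        # Map a partial autocorrelation matrix P (singular values < 1) to the unconstrained A.
        m = p.shape[0]
        return sym_inv_sqrt(np.eye(m) - p @ p.T) @ p

    # The singular values transform as r_tilde = r / sqrt(1 - r^2).
    rng = np.random.default_rng(2)
    x = rng.standard_normal((3, 3))
    p = 0.8 * x / np.linalg.svd(x, compute_uv=False)[0]   # rescale so the largest singular value is 0.8
    r = np.linalg.svd(p, compute_uv=False)
    r_tilde = np.linalg.svd(p_to_a(p), compute_uv=False)
    print(np.allclose(r_tilde, r / np.sqrt(1.0 - r**2)))  # True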
§ PRIOR DISTRIBUTION
§.§ Shrinkage prior for transformed partial autocorrelations
The relationship between the singular value decompositions of P and A, described in the previous section, has two important implications. First, the spectral norms of P and A, r_1 = ‖P‖_2 and r̃_1 = ‖A‖_2, are clearly related through the monotonic mapping: r̃_1 = r_1 / (1 - r_1^2)^1/2. The relative sizes of the unconstrained parameters A_s across lags s=1,…,p therefore relate directly to the relative sizes of the partial autocorrelation matrices P_s across lags. Second, P=0_m if and only if A=0_m in which 0_m denotes the m × m matrix of zeros. It follows from the definition of the partial autocorrelation matrices that for k < p, P_k ≠ 0_m and P_k+s = 0_m for s=1,…,p-k if and only if ϕ_k ≠ 0_m and ϕ_k+s = 0_m for s=1,…,p-k. The order of a VAR_m(p) process is therefore k < p if and only if A_k ≠ 0_m and A_k+s = 0_m for s=1,…,p-k. Under the unconstrained parameterisation, it follows that the model of order k < p is nested within the model of order k + 1. As discussed in Section <ref>, we can therefore borrow ideas from the literature on overfitted models by adopting a shrinkage prior for A_1, …, A_p_max with a large value for p_max. This allows learning about the lag beyond which the A_s can be taken as zero matrices, thereby informing inference on the order p of the process. It can therefore be regarded as an extension of the prior in the univariate case that uses spike-and-slab distributions for the partial autocorrelation parameters <cit.>. Moreover, we can convey the very reasonable idea that the partial autocorrelations at higher lags are likely to be smaller than those at lower lags by choosing a shrinkage prior for the A_s, s=1,…,p_max, whose degree of shrinkage increases with the lag s.
A popular increasing shrinkage prior is the multiplicative gamma process (MGP) <cit.> originally developed as a structured sequence of global-local shrinkage priors for the loadings matrix in infinite factor models. Denoting the (i,j)th element in A_s by a_s,ij we adopt a prior of this form by choosing
a_s,ij | λ_s,ij,τ_s ∼ N(0,λ_s,ij^-1τ_s^-1),
independently for i,j=1,…,m, s=1,…,p_max, where the local precision parameters at lag s are assigned the prior
λ_s,ij∼ Ga(a/2, a/2),
independently for i,j=1,…,m, s=1,…,p_max, and the global precision parameter at lag s is constructed as
τ_s = ∏_k=1^s δ_k, δ_1∼ Ga(a_1, 1), δ_k∼ Ga(a_2, 1), k ≥ 2
in which the δ_k are independent. The global precisions τ_s are therefore a cumulative product of gamma random variables whose prior expectation (τ_s) increases with s when a_2 > 1. Guidelines on the choice of hyperparameter a_1 and a_2 can be found in <cit.> who presents a numerical method for checking that the global variances θ_s = 1 / τ_s are stochastically decreasing in s near zero, that is, {θ_s ∈ (0,θ]} is non-decreasing in s for any θ in a small neighbourhood of zero.
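The increasing-shrinkage behaviour can be seen by simulating directly from this prior. The sketch below draws the local and global precisions and the corresponding entries of A_1, …, A_p_max; the hyperparameter values a, a_1 and a_2 are illustrative assumptions rather than the values used in the analyses.

    import numpy as np

    def sample_mgp_prior(m, p_max, a=3.0, a1=2.0, a2=3.0, rng=None):
        # Draw (A_1, ..., A_p_max) from the multiplicative gamma process prior.
        if rng is None:
            rng = np.random.default_rng()
        deltas = np.concatenate(([rng.gamma(a1, 1.0)], rng.gamma(a2, 1.0, size=p_max - 1)))
        tau = np.cumprod(deltas)                              # global precisions tau_s
        a_mats = []
        for s in range(p_max):
            lam = rng.gamma(a / 2.0, 2.0 / a, size=(m, m))    # local precisions Ga(a/2, a/2)
            a_mats.append(rng.normal(0.0, 1.0 / np.sqrt(lam * tau[s]), size=(m, m)))
        return a_mats

    a_mats = sample_mgp_prior(m=3, p_max=8, rng=np.random.default_rng(3))
    print([float(np.round(np.abs(a).max(), 3)) for a in a_mats])   # entries tend to shrink with lag s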
The multiplicative gamma process prior does not place any mass at zero and so none of the A_s, and hence P_s, matrices are shrunk exactly to zero. We define the effective order p^∗ of the model as the value of s ≤ p_max such that P_s fails a criterion for truncation to zero when s=p^∗ but passes for s=p^∗+1,…,p_max. Applying the truncation criterion to the standardised P_s matrices, rather than the unconstrained A_s matrices, allows classical theory from univariate time-series analysis to inform our judgement in a manner which is robust with respect to the scale of the data as well as its length and dimension m. Further details on the truncation criterion are provided in Section <ref>.
An alternative increasing shrinkage prior is the cumulative shrinkage process (CUSP) <cit.>. However, our experience working with this prior suggests that, though sensible posterior inferences can be obtained in the analysis of simulated data, inference on the model order is very sensitive to the choice of prior hyperparameters in analyses involving real data. This suggests a lack of robustness to the kind of model misspecification that is inevitable in analyses of real time-series.
§.§ Joint prior
Denoting the collection of unknown hyperparameters in the multiplicative gamma process prior by ϑ⃗, we adopt an overall prior specification of the form
π(Σ, A_1, …, A_p_max, ϑ⃗) = π(Σ) π(ϑ⃗) ∏_s=1^p_maxπ(A_s | ϑ⃗).
Various options are available for the error variance matrix Σ and distributions which offer the property of invariance with respect to the order of the variables in the observation vector are discussed in <cit.>. In the applications in this paper, we use one such distribution, taking Σ to be inverse Wishart, with a scale matrix that has a common element on the diagonal and a common element off the diagonal.
§ POSTERIOR INFERENCE
§.§ Posterior distribution
For i ≤ j, denote by y⃗_i:j the time-series y⃗_i, …, y⃗_j. The likelihood for a series of n observations, y⃗_1:n, from a zero-mean VAR_m(p_max) process can be expressed as
p(y⃗_1:n|Σ, Φ) = p(y⃗_1:p_max|Σ, Φ) ∏_t=p_max+1^n p(y⃗_t |y⃗_(t-p_max):(t-1), Σ, Φ)
in which Y⃗_t |y⃗_(t-p_max):(t-1), Σ, Φ∼ N_m(∑_i=1^p_maxϕ_i y⃗_t-i , Σ) and the initial distribution is (Y⃗_1^⊤, …, Y⃗^⊤_p_max)^⊤|Σ, Φ∼ N_mp_max(0, G). Here G is given by
G = [ Γ_0 Γ_1 ⋯ Γ_p_max-1; Γ_1^⊤ Γ_0 ⋯ Γ_p_max-2; ⋮ ⋮ ⋱ ⋮; Γ_p_max-1^⊤ Γ_p_max-2^⊤ ⋯ Γ_0; ],
where the matrices Γ_0, …, Γ_p_max-1 are available as by-products of the recursive mapping between the partial autocorrelation matrices and the original model parameters.
Regarding the likelihood as a function of the new parameters and combining it with the prior (<ref>) via Bayes theorem yields the posterior distribution as
π(Σ, A_1, …, A_p_max, ϑ⃗|y⃗_1:n) ∝π(Σ) π(ϑ⃗) ∏_s=1^p_maxπ( A_s |ϑ⃗ ) p(y⃗_1:n|Σ, A_1, …, A_p_max).
As explained in <cit.>, the posterior distribution is a complicated function of the A_s, making it ill-suited to computational inference based on Gibbs sampling. Rather than appealing to conditional independence structure in the posterior for one-at-a-time parameter updates, Hamiltonian Monte Carlo (HMC) <cit.> uses information on the slope of the logarithm of the posterior density to generate global proposals that update all parameters simultaneously. We have found it efficient in sampling from the posterior (<ref>) and use <cit.>, a lightweight R interface to the Stan software <cit.>, to implement the HMC algorithm. Stan requires users to write a program in the probabilistic Stan modelling language, the role of which is to provide instructions for computing the logarithm of the kernel of the posterior density function. The Stan software then automatically sets up a Markov chain simulation to sample from the resulting posterior. This includes calculation of the gradient of the logarithm of the posterior density, random initialisation of the chains, and the tuning of the sampler.
§.§ Truncation criterion
Following <cit.>, we choose to truncate P_s to a zero matrix if the absolute values of all of its elements lie below some threshold, say ε. In classical time-series analysis, the partial autocorrelation plot, with its associated confidence intervals, plays an important role in the choice of order for a univariate autoregression. Under the hypothesis that the process is of order p, the estimators for the partial autocorrelations of order p+1,p+2,… based on a sample of size n are approximately independent with mean equal to zero and variance equal to 1/n. As a guide, we therefore approximate the posterior for the m^2 components p_s,ij of P_s under this hypothesis as independent N(0, 1/n) random variables and then compute the quantile q_m(β) such that Pr{max_i,j | p_s,ij | < q_m(β) } = β for some large value of β and set the threshold ε=q_m(β)=Φ^-1{ (β^1/m^2+1) / 2 } / √(n). For example, in the applications in Sections <ref> and <ref>, we use β=0.99. By choosing the threshold in this way, we account for both the length and dimension of the data, in addition to operating on a parameter which is unit-free.
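As a check, the threshold can be computed directly. The short sketch below uses SciPy's normal quantile function and, for β = 0.99 and n = 1000, reproduces the limits quoted in the simulation study of Section <ref>.

    from scipy.stats import norm

    def truncation_threshold(m, n, beta=0.99):
        # Threshold eps = q_m(beta) below which all |p_s,ij| are treated as zero.
        return norm.ppf((beta ** (1.0 / m**2) + 1.0) / 2.0) / n**0.5

    for m in (1, 3, 5, 7):
        print(m, round(truncation_threshold(m, n=1000), 3))   # 0.081, 0.103, 0.112, 0.117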
For each draw from the posterior, we can apply this criterion to determine the effective order p^* of the process. This can be summarised to yield a numerical approximation of the posterior for p^* which provides a proxy for the posterior for p.
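For each posterior draw, this amounts to finding the largest lag at which the truncation criterion fails. A minimal sketch is given below, where p_draws is assumed to hold the sampled partial autocorrelation matrices as an array of shape (number of draws, p_max, m, m); this storage format is an assumption about post-processing rather than part of the method.

    import numpy as np

    def effective_order(p_draw, eps):
        # Largest lag s (1-indexed) at which some |p_s,ij| >= eps; 0 if every matrix is truncated.
        exceeds = np.abs(p_draw).max(axis=(1, 2)) >= eps
        return int(np.max(np.nonzero(exceeds)[0]) + 1) if exceeds.any() else 0

    def posterior_of_order(p_draws, eps, p_max):
        orders = np.array([effective_order(d, eps) for d in p_draws])
        return np.bincount(orders, minlength=p_max + 1) / len(orders)   # mass over 0, ..., p_max

    # Synthetic example: lags 1 and 2 clearly non-zero, lags 3 and 4 negligible.
    rng = np.random.default_rng(4)
    p_draws = 0.01 * rng.standard_normal((1000, 4, 3, 3))
    p_draws[:, :2] += 0.4
    print(posterior_of_order(p_draws, eps=0.103, p_max=4))   # mass concentrates on order 2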
§ SIMULATION EXPERIMENTS
Consider the idealised setting in which we know that the data were generated from a stationary vector autoregression of known order, p. In order to explore the behaviour of the posterior distribution for p^* in this context, we carried out simulation experiments that considered data generated from processes whose orders took various values. Our choice of truncation criterion makes allowance for the dimension of the observation vector m and the length of the time-series n. We might therefore expect some degree of robustness in the more challenging inferential situations when n is small or, in particular, when m is large. This was investigated by considering simulations under a variety of values of m and n.
For each m ∈{1, 3, 5, 7} and p ∈{1, 2, 3, 4 } we simulated ten sets of m × m matrices A_1, …, A_p with elements sampled independently from a standard normal distribution. Taking the error variance matrix to be Σ=I_m, these were used to simulate ten VAR_m(p) processes of length n = 1000. Conditional on each data set, we then generated samples from the posterior distribution using Stan, as described in Section <ref>, setting the maximum possible order as p_max = 8. Values for the other hyperparameters in the prior are detailed in the Supplementary Materials. In all cases, we used four chains each with 1000 iterations of warm-up followed by 4000 sampling iterations. Using the truncation criteria with β = 0.99, we calculated the limits q_m(β) as 0.081, 0.103, 0.112 and 0.117 for m=1, 3, 5, and 7 respectively, and obtained a posterior mass function for the effective order p^* of each process. The posterior mass functions are summarised in Figure <ref> across all simulation experiments. For a given (m, p), the posteriors for the ten data sets are presented as a collection of overlaid bar charts. In nearly all cases, the true order p of the process is the mode in the posterior for p^*, with considerable posterior support. The results are similar across different values of m and p, suggesting robustness to the dimension of the data through our choice of truncation criteria.
Fixing m=3, considering p ∈{1,2,3,4} and using the same ten sets of matrices A_1, …, A_p as in the previous experiment, we then simulated ten VAR_m(p) processes of length n = 100 and another ten of length n = 500, facilitating comparison across n ∈{ 100, 500, 1000}. Retaining the same prior specification in the new experiments, we fit the model using HMC via Stan, as discussed above. Again, using the truncation criteria with β = 0.99 led to limits q_m(β) equal to 0.326, 0.146 and 0.103 for n = 100, 500 and 1000, respectively. This yielded the posterior mass functions for p^* which are displayed in Figure <ref>. Across all experiments for the different values of n, the posterior mode for the effective order p^* recovers the true order p of the process, again, with considerable support. This holds for all values of n, suggesting robustness through the choice of truncation criteria, even for short time-series.
§ APPLICATION: UNDERSTANDING BRAIN RHYTHMS
§.§ Background
As an example application, we will apply our model and inferential procedures to a dataset of long-term intracranial EEG recordings to understand biological rhythms in the brain. Biological rhythms on ultradian, circadian, and longer timescales have been demonstrated in human physiology; but particularly the ultradian rhythms remain elusive in mechanism and function in the brain <cit.>. Multiple lines of evidence suggest that some prominent ultradian rhythms exist in brain activity as measured by EEG <cit.>, and may be related to rest-activity cycles, or even modulate disease symptoms.
In this exploratory application we investigate the properties that such ultradian biological rhythms may display in human brain activity. We use band power in two common frequency bands (delta and beta) as our features of interest.
§.§ Data preprocessing
Intracranial EEG recordings are considered from four subjects with refractory focal epilepsy from the University College London Hospital (UCLH). We give the individuals the anonymous identities of A, B, C and D. The nature of the recording was chosen for its high signal-to-noise ratio without the need for extensive artefact detection and removal.
Firstly, we divided each subject's iEEG data into non-overlapping, consecutive segments of length 30 seconds. All channels within each segment were re-referenced to a common average reference. In the common average calculation, channels with extreme amplitude values were excluded. A notch filter was then applied at 50 Hz for each 30 second time window to remove line noise, after which the time windows were band-pass filtered from 0.5-80 Hz using a fourth order zero-phase Butterworth filter (second order forward and backward filter applied) and downsampled to 200 Hz.
Next, the iEEG data were decomposed into commonly studied frequency bands, including delta and beta <cit.>. We calculated the iEEG band power for each 30 second segment for all channels in two frequency bands (δ: 1-4 Hz, β: 13-30 Hz) using Welch's method with three-second non-overlapping windows. After taking logarithms to base 10 of the band power recordings in each channel, the channels were averaged into the brain regions from which they were recorded based on the Desikan-Killiany atlas; see <cit.> for further details. The number of brain regions varied between individuals, with m = 9, 8, 8 and 13 for individuals A, B, C and D respectively. Finally, the data were mean-centered prior to analysis.
For each individual we analysed the longest possible contiguous time period of their band power time-series for which graphical interrogation of the data suggested stationarity was a plausible assumption; therefore the length of the recording chosen for further analysis varied across subjects. The number of observations in the recordings used were n= 651, 622, 685 and 231 for individuals A, B, C and D respectively, equivalent to 5.417, 5.175, 5.7 and 1.917 hours. These recordings were obtained during day-time hours.
We apply our model and inferential procedures to the time-series of both the beta and delta band power values for each individual, with a maximum order of p_max = 8. The choices of hyperparameters in the prior are provided in the Supplementary Materials.
§.§ Order determination
For each of the individuals, the posterior distributions for p^* for both the beta and delta series were calculated using the truncation criteria described in Section <ref> with β = 0.99. For example, in Figure <ref>, the posterior mass functions for the data pertaining to individual A are shown. For both the delta and beta series, the posterior mode is 2, with posterior support exceeding 2/3. These results are quantitatively similar across all individuals, with corresponding figures displayed in the Supplementary Materials, possibly indicating similar generative processes for their ultradian rhythms.
§.§ Granger causality
Conditioning on the modal order of the process for both series in each patient, we obtain samples from the posterior distributions of the autoregressive coefficient matrices. The (i,j)-th element in the autoregressive matrix at lag s, ϕ_s,ij, governs the effect of the j-th variable at time t-s on the i-th variable at time t. If ϕ_s,ij is non-zero we say that variable j Granger-causes variable i at lag s; this causal connection can be represented in a directed network, called a Granger causality plot, through an edge from vertex j to vertex i. Conditional on the posterior modal order, p^∗=2, Figures <ref> and <ref> show the Granger causality plots at lags 1 and 2 for individual A in the beta and delta bands, respectively. In these plots, an autoregressive coefficient is visualised as non-zero whenever zero lies outside the 50% equi-tailed Bayesian credible interval; the thickness of each edge representing a non-zero coefficient reflects the absolute value of the posterior mean. The coordinates of the vertices, representing the different brain regions, correspond to the x and y coordinates of the centre of the region using the Desikan-Killiany atlas. A noticeable feature of these Granger causality plots is the higher number of connections in the delta band compared to the beta band. This was common across all individuals (see Supplementary Materials), and may indicate more localised processes underpinning the delta rhythms that interact with each other, whereas the beta rhythms in each region may be more driven by common processes. However, future work has to confirm whether this feature is a result of the epilepsy or medication.
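The edge-selection rule can be implemented directly from the posterior samples. In the sketch below, phi_draws is assumed to hold sampled coefficient matrices with shape (number of draws, p, m, m), and an edge from vertex j to vertex i at lag s is drawn whenever the 50% equi-tailed credible interval for ϕ_s,ij excludes zero.

    import numpy as np

    def granger_edges(phi_draws, level=0.5):
        # edges[s, i, j] is True if the credible interval for phi_{s+1, ij} excludes zero.
        lo = np.quantile(phi_draws, (1.0 - level) / 2.0, axis=0)
        hi = np.quantile(phi_draws, 1.0 - (1.0 - level) / 2.0, axis=0)
        return (lo > 0.0) | (hi < 0.0)

    def edge_weights(phi_draws, level=0.5):
        # Edge thickness proxy: |posterior mean|, masked by the edge indicator.
        return np.abs(phi_draws.mean(axis=0)) * granger_edges(phi_draws, level)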
§.§ Decomposition into latent series
Using classic theory of time-series decompositions <cit.>, a VAR_m(p) process can be decomposed into pm latent series. These series correspond to the pm distinct eigenvalues of the companion matrix which arises from the representation of the model as a VAR_mp(1) process. Suppose there are c complex conjugate pairs of eigenvalues denoted r_j e^± i ω_j, j=1,…,c, and pm - 2c real eigenvalues denoted r_j, j=2c+1, …, pm where r_j > 0 and ω_j ∈ [0, π). The latent decomposition of y⃗_t = (y_t1, …, y_tm)^⊤ then takes the form
y_ti = ∑_j=1^c z_tij + ∑_j=2c+1^pm x_tij
where z_tij and x_tij are real-valued processes corresponding to the jth pair of complex eigenvalues and the jth real eigenvalue, respectively. The process z_tij follows an ARMA(2,1) structure with AR coefficients 2 r_j cosω_j and -r_j^2 and is therefore quasi-periodic with characteristic frequency ω_j and modulus r_j. This holds for all dimensions i=1,…,m, though the time-varying amplitude and phase are different for each i. Similarly, the process x_tij follows an AR(1) structure with coefficient r_j for all i=1,…,m. Clearly the innovations that drive these latent ARMA(2,1) and AR(1) are correlated and arise from the error terms ϵ⃗_t in the original model.
The quasi-periodic series arising from the complex conjugate pairs of eigenvalues are of particular interest as they can capture the cyclical patterns that are key to understanding variation in brain activity. The pairs of complex eigenvalues, r_j e^± i ω_j, j=1,…,c, are not identifiable as the model remains unchanged under any permutation of their labelling. However, identification can be achieved by applying an ordering constraint, for example, based on the modulus or the argument. Imposing the constraint ω_1 < ω_2 < ⋯ < ω_c, the quasi-periodic series z_tij are ordered by decreasing period 2 π / ω_j.
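The moduli and periods of the quasi-periodic components can be computed from a posterior draw of the autoregressive coefficient matrices as sketched below. The conversion to minutes assumes the 30-second sampling interval of the band-power series; the helper name and array layout are illustrative.

    import numpy as np

    def quasi_periodic_components(phis, dt_minutes=0.5):
        # Moduli and periods (in minutes) of the complex eigenvalue pairs of the companion matrix.
        p, m = len(phis), phis[0].shape[0]
        companion = np.zeros((m * p, m * p))
        companion[:m, :] = np.hstack(phis)
        companion[m:, :-m] = np.eye(m * (p - 1))
        eig = np.linalg.eigvals(companion)
        pair = eig[eig.imag > 1e-10]                       # one representative per conjugate pair
        moduli = np.abs(pair)
        periods = 2.0 * np.pi / np.angle(pair) * dt_minutes
        order = np.argsort(-periods)                       # sort by decreasing period
        return moduli[order], periods[order]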
For individual A the posteriors for the periods and moduli of the first four quasi-periodic series are presented in Figures <ref> and <ref>. We note that the z_tij with highest period also have highest modulus and might therefore be regarded as the dominating latent series. Corresponding figures for the other individuals are provided in the Supplementary Materials. Across individuals, a common feature is that the posterior for the period of the dominating latent series in each band has its mean at around 20 minutes; for example, for individual A, the posterior means in the beta and delta bands are 19.61 and 26.92 minutes, with 95% equi-tailed Bayesian credible intervals of (3.86, 80.88) and (4.535, 111.871) minutes, respectively. It is also noticeable that though there are some differences between the moduli of the series in the delta band compared to the beta band, there is very little difference between the corresponding periods. Again, this feature is replicated across all individuals. We elaborate further on this observation in Section <ref>.
§ DISCUSSION
We have proposed a hierarchical Bayesian model, with accompanying model-fitting methodology, which allows inference on the order of a stationary vector autoregression. This is based on an unconstrained reparameterisation of the stationary model in terms of a set of transformed partial autocorrelation matrices <cit.> whose properties can be exploited in the design of the prior. In particular, we capitalise on the nested structure of the new parameterisation by constructing an overparameterised hierarchical model which shrinks unnecessary, high-order terms to zero; by identifying the lag beyond which the partial autocorrelation parameters become effectively equal to zero, we can then learn the order of the process. Further, using the relationship between the spectral norm of a partial autocorrelation matrix and its unconstrained counterpart, the prior is chosen to increasingly shrink the partial autocorrelation matrices at higher lags towards zero through a multiplicative gamma process for the unconstrained matrices.
An efficient Hamiltonian Monte Carlo sampler for computational inference was proposed and implemented through Stan, with accompanying code to allow easy dissemination into other fields. The interpretability of the reparameterisation allowed use of classical theory on the distribution of the estimators of the partial autocorrelation function to make a judgement about which sampled partial autocorrelation matrices are approximately equal to zero. An associated truncation criteria determines the number of non-zero partial autocorrelation matrices, allowing posterior inference on the order of the process in a manner which accounts for the scale, dimension and length of the time-series.
We applied our methodology to a series of simulation experiments in which data sets of various lengths n were sampled from various stationary mp models. For all values of m, p and n considered, the posterior for the effective order of the process was highly concentrated around the known model order. We then applied our methodology to iEEG data from recordings at various locations in the brain. Conditioning on the posterior modal order of these processes allowed physiological insight in a number of directions. By constructing Granger causality plots, we were able to highlight relationships between activity in different regions of the brain. Similarly, by constructing the latent decomposition of the series, we were able to identify underlying quasi-periodic structure. In particular, we found that the dominant latent component had a period that was around 20 minutes across all individuals in both the beta and the delta bands. This is consistent with ultradian rhythms of around 20 minutes which have previously been observed <cit.>. The similarity in the periods across the beta and delta bands indicate that there is a global change in the band power pattern, rather than a local change within a specific band. The similarities between subjects are striking, particularly the period of 20 minutes, and warrants future investigations into the possible biological mechanisms and potentially endogenous drivers <cit.>. However, as we only considered four subjects in this work, a larger study would be needed to confirm any biological interpretations, with a larger number of patients, longer recordings and accounting for the potential pathology present in these subjects.
An obvious limitation in the application to iEEG data was the necessity to pick out contiguous segments of data where stationarity was a plausible assumption. However, as remarked in Section <ref>, stationary autoregressions often serve as building blocks in the construction of more complex models. Motivated by applications involving iEEG data where subjects transition between states of wakefulness and sleep, or states of normal brain activity and seizure, we are currently exploring a hidden Markov model in which a (locally) stationary vector autoregression describes the within-state dynamics. Such a model would be ideally suited to a wide variety of time-series where there are occasional step-changes in a process which otherwise appears to be mean reverting.
§ ACKNOWLEDGEMENTS
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC), Centre for Doctoral Training in Cloud Computing for Big Data (grant number EP/L015358/1). This work was also supported by the EPSRC (grant number EP/N510129/1) via the the Alan Turing Institute project “Streaming data modelling for real-time monitoring and forecasting”.
SUPPLEMENTARY MATERIAL
Supporting information: Further details on the prior specification for the simulated and EEG applications, as well as additional plots from the EEG application. (.pdf file)
Data, R code and Stan program: The simulated data from Section <ref>, R scripts for simulating data, running Stan and post-processing the Stan output, as well as the Stan program for fitting the model can be found at the GitHub repository <https://github.com/rachelbinks/Bayesian-VAR-order-determination>. (url)
|
http://arxiv.org/abs/2307.04382v1 | 20230710072704 | Experimental verification of bound and multiparticle entanglement with the randomized measurement toolbox | [
"Chao Zhang",
"Yuan-Yuan Zhao",
"Nikolai Wyderka",
"Satoya Imai",
"Andreas Ketterer",
"Ning-Ning Wang",
"Kai Xu",
"Keren Li",
"Bi-Heng Liu",
"Yun-Feng Huang",
"Chuan-Feng Li",
"Guang-Can Guo",
"Otfried Gühne"
] | quant-ph | [
"quant-ph"
] |
These authors contributed equally to this paper.
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
These authors contributed equally to this paper.
Peng Cheng Laboratory, Shenzhen 518055, China
Institut für Theoretische Physik III, Heinrich-Heine-Universität Düsseldorf, Universitätsstr. 1, 40225 Düsseldorf, Germany
Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Str. 3, 57068 Siegen, Germany
Fraunhofer Institute for Applied Solid State Physics IAF, Tullastr. 72, 79108 Freiburg, Germany
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
Peng Cheng Laboratory, Shenzhen 518055, China
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
[email protected]
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
[email protected]
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
[email protected]
Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Str. 3, 57068 Siegen, Germany
In recent years, analysis methods for quantum states based on randomized measurements have been investigated extensively. Still, in the experimental implementations these methods were typically used for characterizing strongly entangled states and not to analyze the different families of multiparticle or weakly entangled states. In this work, we experimentally prepare various entangled states with path-polarization hyper-entangled photon pairs, and study their entanglement properties using the full toolbox of randomized measurements. First, we successfully characterize the correlations of a series of GHZ-W mixed states using the second moments of the random outcomes, and demonstrate the advantages of this method by comparing it with the well-known three-tangle and squared concurrence. Second, we generate bound entangled chessboard states of two three-dimensional systems and verify their weak entanglement with a criterion derived from moments of randomized measurements.
Experimental verification of bound and multiparticle entanglement with the randomized measurement toolbox
Otfried Gühne
August 12, 2023
==========================================================================================================
§ INTRODUCTION
Quantum entanglement is one of the most prominent non-classical features of
quantum mechanics and often viewed as a resource in quantum information processing <cit.>. Its generation and characterization is of growing interest from both practical and fundamental perspectives. While deciding whether a given quantum state is entangled or not is in general a hard task <cit.>, many experimentally feasible schemes exist that verify entanglement in some states.
A prominent example for such schemes are entanglement witnesses, which allow for rather simple detection of entanglement using few measurements, whereas other schemes detect non-locality by evaluating some Bell-type inequalities <cit.>. On the experimental side, numerous entangled states have been generated and multi-qubit entanglement <cit.>, high-dimensional
entanglement of two particles <cit.>, and also bound entanglement <cit.> have been characterized.
When applying the standard criteria in a practical experiment, however, one always needs to align the local measurement settings strictly or to make some assumptions
on the target state to prepare, e.g., by tailoring a witness specifically for
states close to some fixed target state. To remedy this, several schemes based
on the moments of randomized correlations have been proposed
<cit.>. They provide an efficient way to characterize multi-particle correlations in states without prior knowledge about the state, nor any alignment of measurement directions. Recently, it has been shown that this approach also allows for the detection of bound entanglement <cit.>.
In this paper, we implement in a photonic setup the randomized measurement scheme to detect entanglement in mixtures of three-qubit GHZ and W-states using second moments of the random outcomes. Furthermore, we prepare bound entangled chessboard states of two qutrits and show their entanglement by evaluating an
entanglement criterion which is based on the second and fourth
moment of a randomized measurement outcome, without implementing the
random unitaries explicitly. This demonstrates that the criterion
from Ref. <cit.> is indeed strong enough to capture this weak form of entanglement, even in the presence of noise and experimental imperfections. Our implementation combines the photon's polarization and path degrees of freedom to generate precisely controlled high-dimensional states and demonstrates the versatility and efficiency of the randomized measurement approach.
§ THEORY
In the randomized measurement scheme <cit.>, a subset S⊂{1,…,n} of the parties of an n-partite quantum state ρ of fixed local dimension d is measuring some fixed, local observables in random directions. The moments of the distribution of measurement results can be written as
ℛ_S^(t) = ∫dU_1 …dU_n ⟨ U_1 τ_1 U_1^†⊗…⊗ U_n τ_n U_n^†⟩_ρ^t,
where the τ_i denote the local observables, and τ_i = 𝕀 whenever i∉ S.
The integrals are evaluated over the Haar measure of the unitary group 𝒰(d).
In case of qubit systems, one usually chooses τ_i = σ_z for i∈ S, in which case the second moments (t=2) are related to the purities of the reduced states of ρ. The sum of second moments for all subsets S of size | S| = k is proportional to what is known as the k-sector length of the state <cit.>. In particular, for three qubits the sector lengths A_k are given by
A_1 =3(ℛ_A^(2)+ℛ_B^(2)+ℛ_C^(2)),
A_2 =9(ℛ_AB^(2)+ℛ_AC^(2)+ℛ_BC^(2)),
A_3 =27ℛ_ABC^(2).
Decomposing ρ in terms of the local Pauli basis {σ_0 = 𝕀, σ_1 = σ_x, σ_2 = σ_y, σ_3 = σ_z}, yields
ρ_ABC=1/8∑_i,j,k=0^3 α_ijkσ_i⊗σ_j⊗σ_k
and allows to express the sector lengths in terms of the coefficients α_ijk as follows:
A_1 = ∑_i=1^3 (α_i00^2 + perm.),
A_2 = ∑_i,j=1^3 (α_ij0^2 + perm.), and
A_3 = ∑_i,j,k=1^3 α_ijk^2.
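As an illustration of how the sector lengths follow from the Pauli coefficients, the following Python sketch computes A_1, A_2, A_3 for an arbitrary three-qubit density matrix; the GHZ state is used as a simple check.

import numpy as np

paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

def sector_lengths(rho):
    # alpha_ijk = Tr[rho (sigma_i x sigma_j x sigma_k)]; A_k sums the squared
    # coefficients of all Pauli strings of weight k (k non-identity factors).
    A = np.zeros(4)
    for i, a in enumerate(paulis):
        for j, b in enumerate(paulis):
            for k, c in enumerate(paulis):
                alpha = np.trace(rho @ np.kron(np.kron(a, b), c)).real
                A[np.count_nonzero((i, j, k))] += alpha ** 2
    return A[1], A[2], A[3]

ghz = np.zeros(8)
ghz[[0, 7]] = 1 / np.sqrt(2)
print(sector_lengths(np.outer(ghz, ghz)))   # (0.0, 3.0, 4.0) for the GHZ state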
In terms of the sector lengths, several entanglement criteria exist that detect certain entangled states. To proceed, let us recall that a three-particle state ρ_ABC is called biseparable for a partition A|BC if
ρ_A|BC
= ∑_k q_k^A ρ_k^A ⊗ρ_k^BC,
where the positive coefficients q_k^A form a probability distribution.
Similarly, the biseparable states ρ_B|CA and ρ_C|AB can be defined.
Moreover, we can consider the mixture of biseparable states for all partitions as
ρ_bisep
= p_A ρ_A|BC + p_B ρ_B|CA + p_C ρ_C|AB,
where p_A, p_B, p_C are probabilities.
A quantum state is called genuinely multipartite entangled (GME) if it cannot be written in the form of ρ_bisep.
For three-qubit states, if A_3>3, the state must be GME (the maximal value being A_3=4 for the GHZ state |GHZ⟩=1/√(2)(|000⟩+|111⟩)).
A stronger version exists, which states that if
A_2 + A_3 > 3(1+A_1),
the state cannot be biseparable w.r.t. any fixed partition, and strong numerical evidence exists that in that case, even GME states must be present <cit.>.
In this paper, we aim to detect entanglement in a mixture of a GHZ and a W state, given by
ρ(g) = g|GHZ⟩⟨GHZ|+(1-g)|W⟩⟨W|,
where g∈[0,1] denotes the amount of mixing and |W⟩ = 1/√(3)(|001⟩ + |010⟩ + |100⟩).
The family of states ρ(g) exhibits some interesting properties. First, it is supported in the symmetric subspace. This implies that
F_XYρ(g) = ρ(g)F_XY = ρ(g),
where
F_XY = ∑_i,j|ij⟩⟨ji|_XY
is the flip (swap) operator acting on the subsystems XY∈{AB, BC, CA}.
It is known that if a state lives in
the symmetric subspace, it is either fully separable or GME
<cit.>.
However, the experimentally generated version of the state ρ(g) cannot be assumed to have the symmetry due to experimental imperfections. Accordingly, the generated state can become biseparable, thus, we employ the criterion in Eq. (<ref>) to detect its entanglement.
We stress again that the criterion in Eq. (<ref>) has been conjectured to imply the presence of GME from numerical evidence, but its analytical proof has not yet been provided <cit.>.
That is, even if the criterion Eq. (<ref>) is verified experimentally, the state may be entangled for any fixed partition, but it can be a mixture of at least three biseparable states for different bipartitions.
Second, when the parameter g is outside the region of 0.297 ≤ g ≤ 0.612, the criterion in Eq. (<ref>) is satisfied.
This parameter region is very close to other well-known regions using two other entanglement measures <cit.>.
On the one hand, the three-tangle τ vanishes for 0≤ g≤ g_τ≈ 0.627, where τ measures residual (three-partite) entanglement that cannot be expressed as two-body entanglement <cit.>.
Note that the GHZ state maximizes the three-tangle, while it vanishes for the W state.
On the other hand, the sum of squared concurrences C_A|B^2 + C_A|C^2 vanishes for g_C ≈ 0.292 …≤ g≤ 1,
where the concurrence C_X|Y measures bipartite entanglement in the reduced state between the parties X and Y <cit.>.
Hence, we can conclude that the criterion in Eq. (<ref>) can detect the multi-partite entanglement of ρ(g) even in regions where the three-tangle and the concurrence vanish, if the parameter g satisfies
g_C ≤ g < 0.297 or
0.612 < g ≤ g_τ.
In contrast to qubit systems, the second moments of higher-dimensional states are not automatically related to sector lengths. In fact, the choice of the local observables influences which local unitary invariants can be extracted from the moments <cit.>. Let us expand a bipartite quantum state of dimension d in terms of some local, hermitian operator basis {λ_i}_i=0^d^2-1 with λ_0 = 𝕀, tr(λ_iλ_j) = dδ_ij, such as the Gell-Mann basis <cit.>. Then
ρ = 1/d^2[𝕀⊗𝕀 + ∑_i=1^d^2-1 (α_i λ_i ⊗𝕀 + β_i 𝕀⊗λ_i) + ∑_i,j=1^d^2-1T_ijλ_i ⊗λ_j]
is called the generalized Bloch decomposition of ρ, where the matrix T is known as the correlation matrix of ρ. For this matrix, many entanglement criteria exist, most notably the de Vicente criterion <cit.>, stating that for separable states, tr(|T|) ≤ d-1. While the left-hand side is not directly accessible from the moments of randomized measurements, it is possible to obtain related quantities by carefully choosing the observables τ_i as detailed in Ref. <cit.>, such that
ℛ^(2)_AB=tr(TT^†)/(d-1)^2
ℛ^(4)_AB=[1/3tr(TT^†)/(d-1)^2+2/3tr(TT^† TT^†)]/(d-1)^4.
For example, for d=3, τ_i = diag(√(3/2), 0, -√(3/2)).
The combined knowledge of these two quantities allows to detect entanglement, whenever it is incompatible with the de Vicente criterion, i.e., if the measured value of ℛ^(4)_AB is below the minimum given by
min ℛ^(4)_AB
s.t. ℛ^(2)_AB = measured, tr(|T|)≤ d-1.
Note that this lower bound can also be calculated analytically <cit.>.
Interestingly, there exist states which have a positive partial transpose, but can be detected to be entangled by these two moments, implying bound entanglement. A 3× 3-dimensional state from the chessboard family of bound entangled states described in Ref. <cit.> (also see Appendix C2 in <cit.>) has been identified to violate it extremely, which makes it a good candidate to prepare and detect its entanglement experimentally. It is given by
ρ_ch=N∑_i=1^4|V_i⟩⟨V_i|,
where N=1/∑_i⟨V_i|V_i⟩^2=1/4
is a normalization factor and
|V_1⟩=1/√(6)(|0⟩+2|2⟩)|0⟩+1/√(6)|11⟩,
|V_2⟩=1/√(6)(-|0⟩+2|2⟩)|1⟩+1/√(6)|10⟩,
|V_3⟩=1/√(6)|0⟩(-|0⟩+2|2⟩)+1/√(6)|11⟩,
|V_4⟩=1/√(6)|1⟩(|0⟩+2|2⟩)+1/√(6)|01⟩.
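For illustration, the following Python sketch constructs ρ_ch from the vectors |V_i⟩ above, checks that its partial transpose has no negative eigenvalues (the PPT property), and evaluates the trace norm tr|T| of its correlation matrix in the rescaled Gell-Mann basis, which for separable states is bounded by d-1=2 according to the de Vicente criterion mentioned above; the basis normalization is the only choice made here beyond the definitions in the text.

import numpy as np

def ket(i, j):                       # |i>|j> for two qutrits, as a 9-vector
    v = np.zeros(9)
    v[3 * i + j] = 1.0
    return v

s = 1 / np.sqrt(6)
V = [s * (ket(0, 0) + 2 * ket(2, 0) + ket(1, 1)),
     s * (-ket(0, 1) + 2 * ket(2, 1) + ket(1, 0)),
     s * (-ket(0, 0) + 2 * ket(0, 2) + ket(1, 1)),
     s * (ket(1, 0) + 2 * ket(1, 2) + ket(0, 1))]
rho = sum(np.outer(v, v) for v in V)
rho /= np.trace(rho)                 # normalization N = 1/4

# Partial transposition on the second qutrit: swap the two "column" indices.
R = rho.reshape(3, 3, 3, 3)          # indices (i, j, k, l) = <ij|rho|kl>
rho_pt = R.transpose(0, 3, 2, 1).reshape(9, 9)
print(np.linalg.eigvalsh(rho_pt).min())   # >= 0 up to rounding: the state is PPT

# Correlation matrix T_ab = Tr[rho (lambda_a x lambda_b)], with tr(lambda_a lambda_b) = 3 delta_ab.
gm = [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    np.diag([1, 1, -2]) / np.sqrt(3)]]
lam = [np.sqrt(3 / 2) * g for g in gm]
T = np.array([[np.trace(rho @ np.kron(a, b)).real for b in lam] for a in lam])
print(np.linalg.norm(T, ord='nuc'))  # tr|T|; separable states obey tr|T| <= d-1 = 2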
§ EXPERIMENTAL SETUP
We proceed with a description of the experimental implementation. The GHZ-W mixed states are prepared by resorting to the states entangled in polarization degree of freedom (d.o.f.) and path d.o.f. of the photon (that is, hyper-entangled) and with methods similar to the ones in Refs. <cit.>. More detailed information about the state preparation of this family of states is given in Appendix A.
When preparing the bound entangled chessboard state, it is important to ensure that all its eigenvalues remain non-negative under partial transposition. However, the chessboard state is not of full rank, so experimental imperfections are likely to produce slightly negative eigenvalues of the partial transposition. A more robust approach is to prepare the state with a certain level of white noise <cit.>,
ρ_ch(p)=(1-p)ρ_ch+p 𝕀/9.
First, let us briefly review the state preparation procedure.
As depicted in Fig. <ref>, we generate polarization
entangled (2×2 entangled) photon pairs through a spontaneous
parametric down-conversion (SPDC) process. Subsequently, we expand the dimensionality of the system by introducing the path modes u and l. This results in three modes: H_u, V_u, and V_l, where H_u represents a horizontally polarized photon occupying path u, and so on. Finally, specific operations are applied to the system to steer the state to the target states.
Specifically, a Half-Wave Plate (HWP) H1 with the optic axis placed at 12.05^∘ is used to rotate a 390 nm horizontally polarized pump laser (with an 80 MHz repetition rate and a 140-fs pulse duration) to state |ψ_p⟩=√(5/6)|H⟩+√(1/6)|V⟩, where H and V represent the horizontal and the vertical polarization, respectively. The pump photon is then split into two photons after pumping two crossed-axis type-I β-Barium Borate (BBO) crystals in the SPDC process, transforming the state into |ψ_p⟩→√(5/6)|HH⟩+√(1/6)|VV⟩. By passing through the Beam Displacers (BDs) BD1 and BD2, the down-converted photons' H-(V-) components are directed to path u (l). And for path mode u, we have the mode labeled as H_u and V_u. By re-encoding |H⟩_u→|0⟩, |V⟩_l→|1⟩, and |V⟩_u→|2⟩, we obtain the hyper-entangled state |ψ_s⟩=√(5/6)|H_uH_u⟩+√(1/6)|V_lV_l⟩→√(5/6)|00⟩+√(1/6)|11⟩.
It is worth noting that all the four states |V_i⟩ in Eq. (<ref>) can be generated by performing local
operations on the state |ψ_s⟩,
|V_1⟩=U_2⊗𝕀|ψ⟩,
|V_2⟩=U_3⊗ U_1|ψ⟩,
|V_3⟩=𝕀⊗ U_3|ψ⟩, |V_4⟩=U_1⊗ U_2|ψ⟩,
where
U_1= (
0 1 0
1 0 0
0 0 1
),
U_2= (√(1/5) 0 √(4/5)
0 1 0
√(4/5) 0 -√(1/5) ),
U_3= (
-√(1/5) 0 √(4/5)
0 1 0
√(4/5) 0 √(1/5) ).
For the states |V_3⟩ and |V_4⟩, it also works by applying the unitary U_3⊗𝕀, and U_2⊗ U_1, respectively, and then exchanging the labels for the two detectors D1 and D2. Therefore, through performing the operator U_3 or U_2 on one photon of a pair and the operator U_1 or 𝕀 on the other photon simultaneously, the state |ψ_s⟩ will be transformed to each of the four states |V_i⟩. The switches between these operators are implemented by the motorized rotating HWPs and Quarter-Wave Plates (QWPs), which are controlled by the pseudo-random numbers generated from a classical computer. Two adjustable LED lights are placed before the detectors to introduce the different levels of white noise into the system.
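As a quick numerical sanity check of the relation |V_1⟩=U_2⊗𝕀|ψ_s⟩, one can verify it directly (a sketch; the other three relations can be checked the same way):

import numpy as np

r = np.sqrt
e0, e1, e2 = np.eye(3)                           # qutrit basis vectors |0>, |1>, |2>
psi = r(5/6) * np.kron(e0, e0) + r(1/6) * np.kron(e1, e1)
U2 = np.array([[r(1/5), 0, r(4/5)], [0, 1, 0], [r(4/5), 0, -r(1/5)]])
V1 = (np.kron(e0, e0) + 2 * np.kron(e2, e0) + np.kron(e1, e1)) / r(6)
print(np.allclose(np.kron(U2, np.eye(3)) @ psi, V1))   # True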
In the measurement part, a QWP and an HWP located at path u are used to analyze the correlations between basis elements |0⟩ and |2⟩, and now the afterward BD works as a Polarization Beam Splitter (PBS). When measuring the superposition of basis elements |0⟩ and |1⟩, as well as |2⟩ and |1⟩, we first convert the path d.o.f. to the polarization d.o.f. via the wave plates and BDs, and then analyze with the combination of the QWP and the HWP. Detailed settings of the wave plates for standard quantum state tomography are given in Tab. <ref> of Appendix B. For each measurement basis, we randomly change the photon states to every one of the four states |V_i⟩. The two-photon coincidence counts are recorded per 10 s.
When it comes to measuring the randomized correlations, as elaborated in the theoretical framework, two distinct approaches are considered. The first one involves conducting local randomized measurements, while the second entails the direct application of Pauli operators or Gell-Mann matrices. In this study, we thoroughly examine and contrast these two methodologies for three-qubit states, utilizing a LabVIEW program to facilitate the automation of numerous measurements. Further details regarding the randomized measurement techniques can be found in the Appendix C. For the bound entangled states, we opt to directly measure the 81 combinations of Gell-Mann matrices to avoid the systematic errors that may emerge from the construction of 3× 3 random unitaries.
§ RESULTS
§.§ Results for the GHZ-W mixed states
In our experiment, a set of GHZ-W mixed states ρ(g) with step size 0.05 is prepared. For each state, 4000 measurements in randomized directions are performed, and for each measurement, about 5300 copies of the state are detected.
The entanglement criterion of Eq. (<ref>) is calculated from the randomized measurement data with the error bars obtained by repeating the whole process ten times. From the results in Fig. <ref>(a), we see that for 0≤ g≤ 0.2 and 0.7≤ g≤ 1, the criterion in Eq. (<ref>) is violated, while the criterion A_3-3≤ 0 is not. Clearly, Eq. (<ref>) improves the previous one.
Note that the sector length A_k can also be expressed in terms
of the coefficients α_ijk, and then compared with
the randomized measurements. Resorting to the standard
quantum state tomography process, we obtain the density matrix of the GHZ state ρ_GHZ^exp and W state ρ_W^exp, respectively.
The values of the criterion of Eq. (<ref>) are calculated from the state ρ(g)=gρ_GHZ^exp+(1-g)ρ_W^exp and plotted as the dashed red lines in Fig. <ref>(a) and (b).
In contrast, for the ideal states, we have (A_1, A_2, A_3)=((1-g)^2/3, 8g^2-8g+3, 4g^2+11(1-g)^2/3), and the theoretical values of the criteria are shown as the solid red lines in Fig. <ref>.
We see that the results deduced from randomized measurements and from the coefficients α_ijk are approximately identical, providing evidence for the correct implementation of the randomized measurements. In the region 0.08≤ g≤ 0.24 and 0.67≤ g≤ 0.88, where the criterion A_3-3≤ 0 fails, we detect genuinely multi-partite entanglement. Furthermore, from
Fig. <ref>(b), we see that our criterion still works for g≤ 0.24 in the violet color region where the states have no three-tangle and also for g≥0.67 in the light salmon region where they exhibit no squared concurrence.
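As a numerical cross-check, scanning the ideal sector lengths quoted above over g reproduces the boundaries of the detection window of the criterion (a sketch):

import numpy as np

g = np.linspace(0, 1, 100001)
A1 = (1 - g) ** 2 / 3
A2 = 8 * g ** 2 - 8 * g + 3
A3 = 4 * g ** 2 + 11 * (1 - g) ** 2 / 3
detected = A2 + A3 > 3 * (1 + A1)
print(g[np.nonzero(np.diff(detected.astype(int)))[0]])   # approximately [0.297, 0.612]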
§.§ Results for the chessboard state
The experimentally prepared chessboard state ρ_ch^exp is reconstructed using the maximum-likelihood algorithm. Due to imperfections, when no white noise is added, the minimal eigenvalue of the partially transposed (PT) density matrix is -0.0133, such that the state is not PPT and probably not bound entangled. To remove these negative eigenvalues, we introduce different levels of white noise between p=0 and p=0.22 in the experiment, and plot the minimum PT eigenvalue and the violation of the entanglement criterion in Eq. (<ref>) in Fig. <ref>. In particular, for the state with noise level p=0.1291, the minimum PT eigenvalue equals 0.0026±0.0009 and the fidelity between the experimentally prepared state ρ_ch^exp and the noisy chessboard state ρ_ch(p=0.1291) is given by F(ρ_ch,ρ_ch^exp)=tr(√(√(ρ_ch)ρ_ch^exp√(ρ_ch)))=0.9893± 0.0012.
Next, we show that the state is entangled by using the tool of the second and fourth moments. For the state under consideration at p=0.1291, the second moment is given by ℛ^(2)_AB=0.2355±0.0015, and the fourth moment by ℛ^(4)_AB=0.0259±0.0003, while for separable states, the lower bound on the fourth moment is given by 0.0277 for ℛ^(2)_AB=0.2355 when performing the optimization program in Eq. (<ref>). We see that the experimental value 0.0259 is smaller than the lower bound 0.0277 and violates it with 6 standard deviations. Therefore, we experimentally prepared a 3×3 bound entangled state with the photonic platform and analyzed its entanglement property via the second and fourth moments
successfully.
§ CONCLUSION
We experimentally produced a variety of genuinely entangled photonic states consisting of entangled photon pairs amended with path degrees of freedom and characterized them using methods based on locally randomized measurements. First, we showed how to generate genuinely entangled states of three parties and verified them using entanglement criteria based only on the second moments of the randomized measurements. The latter enabled the verification of multipartite entanglement in regimes where well-known measures of multipartite entanglement, i.e., the three-tangle or the squared concurrence, are zero. Further on, we demonstrated the production of weakly bound entangled chessboard states of two qutrits and used entanglement criteria based on the second and fourth moments of the taken randomized measurements to analyze the produced states. As a result, bound entangled states with mixed-state fidelities beyond 98% were successfully produced and verified.
Our work demonstrates the outstanding control of quantum states
in photonic setups and presents an efficient way for preparing a
low-rank bound entangled state. By incorporating appropriate white
noise, the setup demonstrates increased robustness against
transitioning into the free entangled region. Compared with several previous experiments, the precise control allowed us to directly verify
bipartite bound entanglement in the minimal case of a 3×3 system,
without resorting to the various forms of bound entanglement in higher dimensions or in multiparticle systems. This will facilitate further exploration of interesting entanglement effects in
experiments.
§ ACKNOWLEDGEMENTS
We thank Xiao-Dong Yu for discussions. The work in USTC is supported by the National Natural Science Foundation of China (Nos. 11821404, 11734015, 62075208), the Fundamental Research Funds for the Central Universities (Nos. WK2030000061, YD2030002015), and the Innovation Program for Quantum Science and Technology (No. 2021ZD0301604). Y.Z. is support by the Major Key Project of PCL. S.I. and O.G. are supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project numbers 447948357 and 440958198), the Sino-German Center for Research Promotion (Project M-0294), the ERC (Consolidator Grant 683107/TempoQ), and the German Ministry of Education and Research (Project QuKuK, BMBF Grant No.
16KIS1618K). S.I. acknowledges the support from the DAAD. N.W. acknowledges support by the QuantERA project QuICHE via the German Ministry of Education and Research (BMBF
Grant No. 16KIS1119K).
§ APPENDIX A: EXPERIMENTAL DETAILS ON THE PREPARATION OF THE GHZ-W MIXED STATES
In our experiment, the GHZ-W mixed states are prepared using the setup shown in Fig. <ref>, and the switch between the GHZ state and the W state is realized by engineering the polarization-entangled photon source (EPS) and the subsequent unitary transformations constituted by Beam Displacers (BDs) and Half-Wave Plates (HWPs). First, for the GHZ state, a polarization-entangled state |ψ_s⟩=1/√(2)(|HH⟩+|VV⟩)|l⟩ is generated through the type-I Spontaneous Parametric Down-Conversion (SPDC) process, and |u⟩ (|l⟩) in Fig. <ref> represents path u (path l). Then, BD1 lets the vertically polarized part of the light pass through directly to path l, while the horizontal component passes with a 4 mm deviation to path u. That is to say, BD1 acts as a CNOT gate with the polarization as the control qubit and the path as the target qubit. When we set the angles of the half-wave plates H4∼H5 to 0^∘ and H6∼H7 to 45^∘, we get |ψ_s⟩→1/√(2)(|HH⟩|u⟩+|VV⟩|l⟩). By encoding H (u) and V (l) as the logic qubit 0 and 1, we prepare the system in the three-qubit GHZ state |GHZ⟩=1/√(2)(|000⟩+|111⟩).
When it comes to the W state, the EPS is tuned to the state |ψ_s⟩=1/√(3)|VH⟩|l⟩+√(2/3)|HV⟩|l⟩ by rotating the polarization directions of the pump beam to |ψ_p⟩=1/√(3)|H⟩+√(2/3)|V⟩ and performs a bit flip operation on one of each paired photon generated in the SPDC process. Now the angle of H4 is placed at -67.4^∘ and the one of H5 at 45^∘ to transform the state |V⟩|l⟩ to 1/√(2)(|V⟩|u⟩+|H⟩|l⟩), and |ψ_s⟩→1/√(3)|VH⟩|u⟩+1/√(3)|H⟩|V⟩|u⟩+1/√(3)|H⟩|H⟩|l⟩. With re-encoding, the W state |W⟩=1/√(3)(|100⟩+|010⟩+|001⟩) is generated.
At last, various states ρ(g)=g|GHZ⟩⟨GHZ|+(1-g)|W⟩⟨W| are generated by randomly switching the settings of the setup to produce state |GHZ⟩ or |W⟩, with probabilities g and 1-g, respectively.
In the measurement stage, the combination of a Quarter-Wave Plate (QWP), an HWP, and a Polarization Beam Splitter (PBS) enables the polarization state measurement in an arbitrary basis. Thus, the two polarization encoded qubits are analyzed with the devices boxed as parts (a) and (b), respectively. Here BD3 combined with H8 performs as a PBS with only one output port, so we must rotate Q2 and H2 twice to realize the projective measurements {U|0⟩⟨0|U^†, U|1⟩⟨1|U^†}. The third qubit, i.e., the path qubit, is transformed to the polarization degree of freedom, and then analyzed by wave plates Q3, H3, and PBS2 in the boxed part (c).
To facilitate the massive randomized measurements, i.e., 40,000 sets for each state ρ(g) in our experiment, the QWPs Q1∼Q3 and HWPs H1∼H3 are all mounted in Motorized Rotation Mounts (Newport, CONEX-PR50CC). For each local measurement setting drawn uniformly at random, a classical computer inputs the corresponding settings of the QWP and HWP and controls the wave plates automatically rotated to the target angles to perform the measurement. This entire process is executed via a LabVIEW program.
Here the quality of the state ρ(g) depends heavily on the GHZ state and the W state, so we give the benchmarks of these two states through quantum state tomography. We estimate the fidelities of the experimentally prepared state and the ideal state F(ρ^ideal,ρ^exp)=(tr√(√(ρ^ideal)ρ^exp√(ρ^ideal))) are 0.9919 and 0.9890 for GHZ state and W state, respectively. The real parts of the experimentally prepared state are shown in Fig. <ref>. All fidelities of the GHZ-W mixed states shown as the dots in Fig. <ref> are above 0.9836, which shows the good performance of the setup. The error bars are of the size of about 0.0001, which is obtained with Monte Carlo simulations by sampling the experimentally collected data.
§ APPENDIX B: QUANTUM STATE TOMOGRAPHY FOR THE CHESSBOARD STATE
As the red points in Fig. <ref> show, various noisy chessboard states ρ_ch(p) are prepared to study their entanglement properties. Here, the level of white noise p is estimated by comparing the total coincidence counts with the counts recorded when no white noise source is added, i.e., when the LED lights in Fig. <ref> are turned off. For instance, if we record a total of photonic counts N_p for state ρ_ch(p) and N_0 for state with no added white noise, then p is set to the value of 1-N_0/N_p.
To characterize the chessboard state that we prepared experimentally, we perform a standard quantum state tomography process, where the 81 vectors
|u_i⟩⊗|u_j⟩ (i, j=0,1,...8) are measured. The detailed forms of the kets |u_i⟩ are given by
|u_0⟩=|0⟩;
|u_1⟩=|1⟩;
|u_2⟩=|2⟩;
|u_3⟩=(|0⟩+|1⟩)/√(2);
|u_4⟩=(|0⟩+i|1⟩)/√(2);
|u_5⟩=(|1⟩+|2⟩)/√(2);
|u_6⟩=(|1⟩+i|2⟩)/√(2);
|u_7⟩=(|0⟩+|2⟩)/√(2);
|u_8⟩=(|0⟩+i|2⟩)/√(2).
Each basis is realized with the settings in Tab. <ref>.
We get the fidelities 0.9835±0.0005, 0.9838±0.0006, 0.9853±0.0005, 0.9893±0.0012, 0.9911±0.0005, 0.9930±0.0003 for states of p=0,0.052,0.0991,0.1291,0.1573,0.2158, respectively. The error bars are estimated with Monte Carlo simulations by sampling the experimental data 100 times.
§ APPENDIX C: ENTANGLEMENT DETECTION FOR THREE-QUBIT STATES WITH RANDOMIZED MEASUREMENTS
In our work, we use the criterion based on the second moment,
ℛ_S^(2) = ∫dU_1 …dU_n ⟨ U_1 τ_1 U_1^†⊗…⊗ U_n τ_n U_n^†⟩_ρ^2,
to study the entanglement property of the three-qubit state ρ(g), where τ_i=σ_z for i∈ S and τ_i=𝕀 for i∉ S.
As each observable τ_i is measured in the standard basis |0⟩ and |1⟩, we will sort the detection outcomes into eight categories corresponding to the eight basis states M_ABC={|000⟩⟨000|, |001⟩⟨001|, |010⟩⟨010|, |011⟩⟨011|,
|100⟩⟨100|, |101⟩⟨101|, |110⟩⟨110|, |111⟩⟨111|}, respectively. In every single trial, instead of preparing the state ρ_U=Uρ(g) U^† and then making measurements in the standard basis, we directly perform the measurements U^† M_ABCU on the state ρ(g) in our experiment, where U=U_A⊗ U_B⊗ U_C. These two ways are equivalent to each other.
For each choice of local unitaries, we prepare N copies of the state to estimate the probability distributions of the outcomes, and a total of M random unitaries are applied to form the average over local unitaries.
We note that given the observable τ_i we choose, there are only two possible outcomes X_i∈{1, -1} for τ_ABC=τ_1⊗τ_2⊗τ_3. We define the probability for each outcome as p_i, which can be obtained by summing up the probabilities that correspond to the same measurement outcomes. As an example, consider the moment ℛ_A^(2); then τ_1=σ_z, τ_2=𝕀, and τ_3=𝕀, and the outcomes assigned to the eight basis states M_ABC are 1,1,1,1,-1,-1,-1,-1, respectively. We get the probabilities p_1=
p_000+p_001+p_010+p_011 and p_2=p_100+p_101+p_110+p_111, where {p_1, p_2} represents the probability distribution for outcomes {1, -1}, and p_000=⟨ 000|ρ_U|000⟩ etc.
Next, we need to construct an unbiased estimator for Tr(ρ Uτ_ABCU^†)^2. From N independent trials we obtain the unbiased estimator p̂_i=N_i/N with 𝔼[p̂_i]=p_i, where N_i is the number of events with measurement outcome X_i. Unbiased estimators of p_i^2 and of p_ip_j are given by
(N(p̂_i)^2-p̂_i)/(N-1) and N/(N-1)· p̂_ip̂_j,
respectively. Substituting these estimators into
E^2=∑_i X_i^2p_i^2+2∑_i<jX_iX_jp_ip_j
yields an unbiased estimate of E^2=Tr(ρ_Uτ_ABC)^2. For each of the M local unitaries and the observable τ_ABC, this estimate reads
E^2=(N(p̂_1)^2-p̂_1)/(N-1)+(N(p̂_2)^2-p̂_2)/(N-1)-2N/(N-1)· p̂_1p̂_2.
After averaging over all the randomly chosen local unitaries, we get the estimate of the moments R_S^(2) as
R_S^(2)=(1/M)∑_i=1^M E^2
Finally, we combine the second estimates for the same size |S|=k to get the k-sector length of the state and plug it into the criterion to perform the entanglement analysis.
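The single-setting estimator above can be checked numerically. The following sketch simulates M settings that all share the same outcome probability q (a hypothetical value used purely to verify unbiasedness) and confirms that the estimate averages to the exact E^2=(2q-1)^2:

import numpy as np

def estimate_E2(N1, N2):
    # Unbiased single-setting estimator of E^2 = Tr(rho_U tau_ABC)^2 from the
    # counts (N1, N2) of the outcomes +1 and -1, as given above.
    N = N1 + N2
    p1, p2 = N1 / N, N2 / N
    return ((N * p1 ** 2 - p1) + (N * p2 ** 2 - p2) - 2 * N * p1 * p2) / (N - 1)

rng = np.random.default_rng(1)
q, N, M = 0.7, 5300, 4000            # q is a hypothetical outcome probability
counts = rng.binomial(N, q, size=M)
estimates = [estimate_E2(k, N - k) for k in counts]
print(np.mean(estimates), (2 * q - 1) ** 2)   # both close to 0.16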
|
http://arxiv.org/abs/2307.03988v1 | 20230708143306 | PCG-based Static Underground Garage Scenario Generation | [
"Wenjin Li",
"Kai Li"
] | cs.AI | [
"cs.AI",
"cs.RO"
] |
PCG-based Static Underground Garage Scenario Generation
Wenjin Li, Kai Li
Wenjin Li, Kai Li are with the Department of Computer Science and Technology, Southern University of Science and Technology, Shenzhen, 518055, China
August 12, 2023
============================================================================================================================================================================
Autonomous driving technology is commonly divided into six levels, from L0 to L5.
Currently, only the L2 level (partial automation) can be achieved, and there is a long way to go before reaching the final level of L5 (full automation).
The key to crossing these levels lies in training the autonomous driving model.
However, relying solely on real-world road data to train the model is far from enough and consumes a great deal of resources.
Although there are already examples of training autonomous driving models through simulators that simulate real-world scenarios, these scenarios require complete manual construction.
3D scenes converted directly from road network formats lack a large amount of detail and cannot be used as training sets.
Underground parking garage static scenario simulation is regarded as a procedural content generation (PCG) problem.
This paper will use the Sarsa algorithm to solve procedural content generation on underground garage structures.
Automated driving, underground garage planning, reinforcement learning, procedural content generation, Sarsa
§ INTRODUCTION
According to a recent technical report by the National Highway Traffic Safety Administration (NHTSA), 94% of road accidents are caused by human errors <cit.>. Against this backdrop, Automated Driving Systems (ADSs) are being developed with the promise of preventing accidents, reducing emissions, transporting the mobility-impaired, and reducing driving-related stress <cit.>.
Autonomous driving simulation is an important part of ADSs.
However, simulation still lacks interactive and changeable scenarios <cit.>. Researchers still build each scenario by hand for large-scale training.
Procedural Content Generation for Games (PCG-G) is the application of computers to generate game content, distinguish interesting instances among the ones generated, and select entertaining instances on behalf of the players <cit.>.
In our project, we consider the underground garage as the game content that should be generated.
The problem can normally be divided into three parts. The first part is to create the digit grid map for each type of floor, as a PCG task. The second part is to convert each type of floor to the design diagram.
The last part is to simulate the whole 3D scenario map depending on the design diagram.
To simplify the simulation, we combine the last two parts as one part.
In reinforcement learning <cit.>, an agent seeks an optimal control policy for a sequential decision-making problem.
We regard the first part as a sequential decision-making problem.
Markov decision processes (MDPs) are effective models for solving sequential decision-making problems <cit.> in uncertain environments.
The agent's policy can be represented as a mapping from each state it may encounter to a probability distribution over the available actions <cit.>.
Generalized policy iteration (GPI) was demonstrated as a class of iterative algorithms for solving MDPs in <cit.>.
It contains policy iteration (PI) and value iteration (VI) as special cases and has both advantages of PI and VI.
Temporal-difference (TD) learning <cit.> is a specific implementation of GPI <cit.>.
TD methods are guaranteed to converge in the limit to the optimal action-value function, from which an optimal policy can be easily derived.
A classic TD method is Sarsa <cit.>.
This on-policy algorithm, in which the policy being evaluated and improved is also the policy used to select actions, has important advantages.
In particular, it has stronger convergence guarantees when combined with function approximation, since off-policy approaches can diverge in that case.
In this paper, we use the Sarsa algorithm to create a digit grid map.
Simulation is an important step during the conversion <cit.>.
We expect the simulator to generate test scenarios automatically, including static buildings, dynamic traffic flow, and real-time calculated lighting and weather.
This paper aims to solve the static scene generation problem.
§ RELATED WORK
Abdullah <cit.> compared the space utilization efficiency of diagonal, parallel, and perpendicular parking methods and concluded that perpendicular parking methods have the highest number of spaces, using a university as a specific example.
Sawangchote <cit.> developed a heuristic algorithm for the layout of parking spaces in small-scale garages based on the space utilization of different parking methods;
Xu Hanzhe <cit.> carried out parking space layout design based on a greedy algorithm, studying the influence of irregular contours and obstacles on the layout of parking spaces and obtaining the layout with the largest number of parking spaces.
Julian Togelius <cit.> found that composing several different algorithms gives better results than any single algorithm, and used answer set programming for procedural content generation.
Huimin Wang <cit.> previously proposed a model-based reinforcement learning algorithm for the path planning problem. Path planning has similar features to our specialized PCG problem, so we consider that garage generation can adopt the agent-movement approach used in path planning. In addition, Arunpreet Sandhu <cit.> proposed the WFC algorithm to generate images similar to given examples.
Akatsu <cit.> provides an idea for evaluating underground garage structures by feeding a series of indicators obtained from a realistic traffic survey into a modeled underground garage structure to obtain a series of evaluation results.
§ METHODOLOGY
§.§ Overall
We divide the underground garage construction into two main parts: the PCG task and the simulation. The notation used throughout this paper is as follows:
The most important issue in static underground garage scenario generation is the planning of parking stalls. Parking space planning is essentially an optimization problem of object placement, and the objects to be placed fall into two categories:
* static object: object's position will not change after confirming the position
* dynamic object: objects can wait for further optimization after confirming the position of static objects
Now we only need to consider the distribution of dynamic objects. To better describe the object planning of the entire underground garage, we rasterize the garage and use three matrices S_i,j, R_i,j, and C_i,j to describe the state of the underground garage.
In this paper, we use reinforcement learning to plan the distribution of dynamic objects. Combining this distribution with the distribution of static objects gives S_i,j as the result of parking space planning; S_i,j is then combined with R_i,j and C_i,j to form the plane structure of the static underground garage, which is passed to the Unity3D engine for 3D modeling to generate the static underground garage scenario.
We provide the following requirements for a reliable garage:
* Reality: The generated basement structure needs to adapt to real-world standards (such as national standards and regulations)
* Feasibility: Ensure that at least one route to any exit and entrance can be found for each parking space arranged in the basement structure
* Randomness: The structure and contour of the basement are randomly generated, and the solution generated each time will change according to the change of the random process
* Bijection: Each generated basement structure has a unique corresponding random process, and this random process must correspond to a unique basement structure
* Customizability: The structure of the basement can be self-defined
§.§ Static objects generation
First, we give a definition of structure matrix 𝒮(i,j):
𝒮(i,j)={
0 , parking space or free space
-1 , obstacle
1 , lane
2 , entrance
3 , exit.
At the beginning of getting this matrix, we should confirm the location of those static objects, which can be divided into three steps: contour generation, entrance and exit generation, and obstacle generation.
First, we need to generate the contour of the underground garage. Divide a w× h rectangle into w× h blocks, each with a width and height of 1. We generate n groups of two points (2n points in total) in this rectangle, use the segment connecting the two points of each group as the diagonal of a rectangle, expand every rectangle to the grid squares it covers, and treat the union of all rectangles as the generated underground garage contour. The following algorithm shows the generation of the underground garage contour.
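The pseudo-code of that algorithm is not reproduced here; purely as an illustration, a Python sketch of the procedure described above might look as follows (the grid size, the number of rectangles n, and the use of uniform sampling are assumptions of this example, not necessarily the exact algorithm):

import numpy as np

def generate_contour(w, h, n, rng=np.random.default_rng()):
    # Start from an all-obstacle grid (-1); the union of n rectangles, each
    # spanned by a random pair of grid points used as a diagonal, becomes
    # the usable floor area (0).
    S = np.full((h, w), -1, dtype=int)
    for _ in range(n):
        (r1, c1), (r2, c2) = rng.integers(0, (h, w), size=(2, 2))
        S[min(r1, r2):max(r1, r2) + 1, min(c1, c2):max(c1, c2) + 1] = 0
    return S

print(generate_contour(9, 7, 3))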
After contour generation, we can get all squares in the floor plan, which mean we get ζ and ψ and then assign values to all those squares in ζ and ψ:
𝒮(ζ) = 0
𝒮(ψ) = -1
Secondly, we need to determine the positions of the entrance and exit. After contour generation, in order to place the entrance and exit reliably, we define the frontier squares ξ and the inner squares η. A frontier square needs to satisfy the following conditions:
𝒮(ξ) = 0
∑_i=1^8𝒮(ρ_ξ) < 0
An inner square needs to satisfy the following conditions:
𝒮(η) = 0
∑_i=1^8𝒮(ρ_η) = 0
Since entrances and exits can only be generated on ξ and cannot be generated on the corners of ξ, we only generate an entrance and an exit on squares ϵ satisfying the following conditions:
ϵ∈ξ
∑_i=1^8𝒮(ρ_ϵ) = -3
M(ϵ_i,ϵ_j) ≥σ_1
Thirdly, we need to consider the position of obstacles in this underground garage. We only generate obstacles on those squares satisfying the following conditions:
o ∈η
M(o_i,o_j) ≥σ_2
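The conditions above translate directly into a few grid scans. The sketch below identifies the frontier squares ξ and inner squares η; treating out-of-grid neighbours as obstacles is an assumption of this example, and the distance thresholds σ_1, σ_2 and the Manhattan-distance checks are left to the caller:

import numpy as np

def neighbour_sum8(S, i, j):
    # Sum of the 8 neighbouring entries; cells outside the grid count as -1.
    h, w = S.shape
    total = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0):
                r, c = i + di, j + dj
                total += S[r, c] if 0 <= r < h and 0 <= c < w else -1
    return total

def frontier_and_inner(S):
    # Frontier squares: free squares with at least one obstacle/outside neighbour.
    # Inner squares: free squares whose 8 neighbours are all free.
    frontier, inner = [], []
    for (i, j), v in np.ndenumerate(S):
        if v == 0:
            s8 = neighbour_sum8(S, i, j)
            if s8 < 0:
                frontier.append((i, j))
            elif s8 == 0:
                inner.append((i, j))
    return frontier, inner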
§.§ Reinforcement Learning
Reinforcement learning (RL) is a basis to solve our PCG problem. In this paper, we first focus on finite Markov decision processes (finite MDPs).
A finite Markov decision process can be
represented as a 4-tuple M = {S, A, P, R}, where S is a
finite set of states; A is a finite set of actions; P : S× R × S × A → [0, 1] is the probability transition function; and R : S × A →ℛ is the reward function. In this paper, we denote the probability of the transition from state s to another state s' when taking action a by P(s', r|s, a) and the immediate reward received after the transition by r_s^a <cit.>.
A policy is defined as a mapping, π: S× A→ [0,1]. In this paper, we use π(s) to represent the action a in state s under the policy π. To measure the quality of a policy, action-value function, q_π(s, a) is used to estimate the expected long-term cumulative reward of taking action a in state s under a policy π. It is formally defined as:
q_π(s,a)=𝔼_π[∑_k=0^∞γ^kR_t+k+1| S_t=s, A_t=a]
where γ is a discount factor, R_t is the reward at time-step t, and E_π is the expectation with respect to the policy π.
The goal is to find an optimal policy π_* which maximizes the expectation of long-time discounted cumulative reward from any starting state s∈ S:
π_*=*argmax_πE_π [∑_t=0^∞γ^t R_t|s_0=s]
In this paper, we formulate PCG as an optimization problem <cit.>, represented as a 2-tuple (M, E), where M is a finite MDP that generates a 2D integer array and E is an evaluation function that evaluates the quality of the array. We have one agent with policy π. It takes an action in state s and sends a message to the environment.
The environment receives the message and changes the state to the next state and sends rewards to the agent.
Finally, the agent and environment produce a finite Markov decision array:
S_0,A_0, R_1, S_1, A_1, R_2, S_2, A_2, R_3,…, S_T-1, A_T-1, R_T
where T is the termination time. Evaluation function E is calculated from M
E=∑_t=1^T-1 R_t
R_T is always a negative value and it is not included in E. In other words, we come back to the previous unfailed state to compute E.
Generalized policy iteration (GPI) contains two processes, policy evaluation (E) and policy improvement (I):
π_0E→ q_π_0I→π_1E→ q_π_1I→π_2E→…I→π_*E→ q_*
where q_π_i is action value function under π at episode i. The process is terminated when q and π converges to q_* and π_*. For Sarsa algorithm, policy evaluation and policy improvement are carried out simultaneously in each episode.
The agent and environment in MDP are clear. Our design is divided into two sections.
In the first section, we design the MDP for our PCG task. In the other section, we design the environment penalty based on the principle of parking lot design.
§.§ Sarsa
We use the Sarsa algorithm to solve the PCG task. First, we define the parameters of the MDP. We consider a car in a 2D plane as an agent performing a colouring task, which colours undefined squares into lane squares. The agent's state at timestamp t is defined as the multi-dimensional vector:
S_t=(D, M, A_t-1)
where D is a 4-dimensional vector whose elements give the distance from the agent to the nearest free space, border, or obstacle in each of the four directions, and M is a 25-dimensional vector representing the perception range of the agent, consisting of the points whose Manhattan distance from the agent is less than 2.
The agent takes action from the action set
A={UP, DOWN, LEFT, RIGHT, STAY}
The goal is to colour as much road as possible until the agent comes back to the start and takes action STAY, leading to a terminal state. The agent receives rewards depending on the increment in the number of parking spaces. The agent also receives penalties for certain wrong actions.
To evaluate one policy π, we predict one Markov decision array containing S, A, R for each episode. We update q(S_t, A_t) during the prediction, following the function:
q(S_t, A_t) = q(S_t, A_t) + α× (R_t+1 + γ× q(S_t+1, A_t+1)-q(S_t, A_t))
where α and γ are parameters, with 0≤α, γ≤ 1.
We use greedy method to improve one policy:
π(s)=*argmax_a q(s,a)
where π(s) is the greedy action under policy π. We consider using ϵ-greedy to take action, where the agent has ϵ chance of taking greedy action with maximum value otherwise taking action equivalently. The probability of taking greedy action π(s) in state s is:
p(s, π(s)) = (1-ϵ)+ϵ/|A|
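A minimal tabular implementation of the update rule and the ϵ-greedy policy described above is sketched below; the environment interface (reset/step) and the hyper-parameter values are assumptions made for illustration, not part of the original design:

import numpy as np
from collections import defaultdict

ACTIONS = ["UP", "DOWN", "LEFT", "RIGHT", "STAY"]

def epsilon_greedy(q, s, eps, rng):
    # Greedy action with probability (1 - eps) + eps/|A|, uniform otherwise.
    if rng.random() < eps:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax([q[(s, a)] for a in range(len(ACTIONS))]))

def sarsa_episode(env, q, alpha=0.1, gamma=0.95, eps=0.1, rng=np.random.default_rng()):
    # One on-policy episode: q(S_t,A_t) += alpha*(R_{t+1} + gamma*q(S_{t+1},A_{t+1}) - q(S_t,A_t)),
    # with the value of the terminal state taken to be zero.
    s = env.reset()
    a = epsilon_greedy(q, s, eps, rng)
    done, ret = False, 0.0
    while not done:
        s2, r, done = env.step(ACTIONS[a])
        a2 = epsilon_greedy(q, s2, eps, rng)
        target = r if done else r + gamma * q[(s2, a2)]
        q[(s, a)] += alpha * (target - q[(s, a)])
        s, a, ret = s2, a2, ret + r
    return ret   # cumulative reward (note: the evaluation E above excludes R_T)

q_table = defaultdict(float)   # q_table[(state, action)] -> estimated value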
§.§ Penalty design
The principle of parking lot design has been proposed for optimizing parking area space.
* Use rectangular areas where possible
* Make the long sides of the parking areas parallel
* Design so that parking stalls are located along the lot's perimeter
* Use traffic lanes that serve two rows of stalls
<ref> conforms to the above principles, where green squares denote lane squares, orange squares denote parking or free squares, and white squares denote the entrance or exit. Contrary to <ref>, <ref> has many problems: no cycle, non-rectangular and non-parallel areas, and many lanes serving only one row of stalls.
After an action, the agent can receive not only a reward but also a penalty that we define. A well-designed penalty guides the agent towards the behaviour we want. Based on the design principles, we propose several penalties below:
* Turn-back penalty when the agent takes the opposite action from the last action.
* Interval penalty based on the interval of the same actions.
* Wheeling penalty at an improper position with a certain direction.
* Step penalty for each timestamp to prevent agents from cycling consistently.
§.§ Convert matrix to simulated underground garage
After generating the structure matrix 𝒮(i,j), we need to convert this matrix into a simulated underground garage. Here we first atomize the elements of the matrix and define the following quantity:
n = ∑_i=1^4𝒮(θ_η)
and for any square η, if:
𝒮(η) = 1
we define η as:
η={
Crossroads , n = 4
T-Junctions , n = 3
Straight road , n ≤ 2.
and if:
𝒮(η) = 0
we define η as different types in Figure 2:
η={
Type1 , n ≥ 3 or across n = 2
Type2 , adjacent n = 2,
Type3 , n = 1
Type4 , n = 0.
Then, we only need to model each type of square η in the simulator and use scripts to construct the simulated underground garage.
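For illustration, the neighbour-count rules above can be written as a single classification function; the returned labels are placeholders for the corresponding Unity prefabs, and only the counting logic follows the definitions above:

import numpy as np

def classify(S, i, j):
    h, w = S.shape
    nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    lanes = [(r, c) for r, c in nbrs if 0 <= r < h and 0 <= c < w and S[r, c] == 1]
    n = len(lanes)
    if S[i, j] == 1:                              # lane square
        return {4: "crossroads", 3: "T-junction"}.get(n, "straight road")
    if S[i, j] == 0:                              # parking or free space
        # "across": the two adjacent lanes lie on opposite sides of the square.
        across = n == 2 and (lanes[0][0] == lanes[1][0] or lanes[0][1] == lanes[1][1])
        if n >= 3 or across:
            return "Type1"
        if n == 2:
            return "Type2"
        return "Type3" if n == 1 else "Type4"
    return "non-floor (obstacle / entrance / exit)"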
§.§ Construction of underground garage structure
We know that autonomous vehicles typically use multiple types of sensors to collect and process environmental information to support the vehicle's decision-making and control systems <cit.>.
The parking garage structure we generate is intended to provide training scenarios for autonomous vehicles, and the information collected during training comes from the simulated scene, such as the lighting of light sources, the materials of object surfaces, and the different light reflections of objects in the scene. The better we simulate the objects in these scenes, the more information the overall static parking garage scene contains, and the better training data it provides for autonomous vehicles, leading to better training effects.
The construction details of a static underground parking garage mainly include object surface texture mapping, such as:
* Lane marking texture mapping
* Wall texture mapping
* Floor texture mapping
* Lighting texture mapping
As well as collision bodies in the underground parking garage, such as:
* Column mesh collision body
* Speed bump collision body
* Parking barrier
And here we give the detailed procedure of underground garage generation in Unity3D:
* The structure matrix 𝒮_(i,j) previously generated by using reinforcement learning is used as the generated underground structure, and the R_(i,j) and C_(i,j), which define the length and width of each plot of land in reality, are passed as input into Unity3D engine.
* In the Unity engine, each different state of the land is first modeled, and then the entire underground plane is automatically generated based on the arrangement of elements in the specific structure matrix.
* After generating the plane, three-dimensional information such as walls, pillars, ceilings, obstacles, etc. are further generated based on the outline of the underground structure.
* According to the generated structure, more detailed descriptions are made, such as light tubes, ventilation ducts, and other underground details.
* According to the demand, some objects that may appear underground, such as parked vehicles and no parking signs, are randomly generated.
§ EXPERIMENTAL SETUP
§.§ Evaluation
After generating the underground garage structure, we need to evaluate it, but there is no unified and credible standard for the evaluation function. So we proposed the following three dimensions to describe the value of the underground garage structure by combining the evaluation system of several papers:
the number of parking spots
* the average parking time
* the number of unused squares
The evaluation function thus takes the form:
y^' = k_1 * N_S + k_2 * T_S + k_3 * U_S
To obtain the proportion of weights accounted for by each of these three criteria, here we assume that there exists a corresponding evaluation function for a certain underground garage structure, and the value distribution of all solutions for that structure is roughly Gaussian distributed.
Based on this, if we sample enough points and determine the relative value ordering of the structures at those points, we can map the sampling points onto the Gaussian distribution curve one by one, and then adjust the weights of our evaluation function so that the estimated value ordering of the sampling points matches the original ordering. This gives an evaluation function with a certain degree of confidence, and as more and more points are sampled, the final evaluation function becomes more credible.
Here, we sampled a series of representative experimental results and derived the following values for the three coefficients:
y^' = N_S + (-5) * T_S + (-1) * U_S
We conducted a 5000-episode test of the Sarsa algorithm on one garage contour. For each episode, we save the resulting matrix and its evaluation value to a dictionary. In the end, we select the top 200 matrices with the highest evaluation values.
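A sketch of this selection step, using the fitted coefficients above (the per-episode records shown are hypothetical values, for illustration only):

def evaluate(n_spots, avg_parking_time, unused_squares):
    # y' = N_S - 5*T_S - U_S
    return n_spots - 5 * avg_parking_time - unused_squares

records = {0: (30, 2.0, 5), 1: (28, 1.5, 3), 2: (33, 2.5, 6)}   # episode -> (N_S, T_S, U_S)
scores = {ep: evaluate(*m) for ep, m in records.items()}
top = sorted(scores, key=scores.get, reverse=True)[:200]
print(top)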
§.§ Simulation of Underground Garage
The main hardware devices used in the simulation to generate the underground garage scenario are: CPU: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz, GPU: NVIDIA GeForce GTX 1650 and the software are: Unity3D 2021.3.5f1c1, Visual Studio 2022
§ RESULTS
§.§ Sarsa Result
<ref> indicates that the agent easily reaches a local limit around episode 400. The return then drops sharply to a small value. The curve repeatedly shows a trend of first converging to a limit and then sharply decreasing; the agent keeps searching for a solution as long as the test continues.
However, we observed that as the number of episodes increases, there are instances where the agent obtains lower payoffs. This can be attributed to the ϵ-greedy strategy, which sometimes leads the agent directly to the termination state. To increase the convergence rate, we make ϵ decrease slowly. We also reset the value of ϵ if the matrix remains unchanged for 100 consecutive episodes.
<ref> shows the matrix with the highest evaluation value during the test. It is slightly inferior to the manually constructed structures in <ref> and <ref>.
§.§ Simulated Underground Garage
<ref> shows the underground garage model simulated by modelling the structure matrix generated by the above reinforcement learning algorithm for 3000 iterations as input.
§ DISCUSSION
There is no unified and credible standard for the evaluation function, and the coefficients given in this paper are only fitted to the true value ordering. Moreover, since the contour of an underground garage affects the three evaluation indexes we selected, the coefficient values for different contours may also differ; this may require more sampling and training, for example with neural networks, to obtain the coefficients for each underground garage contour in the future <cit.>.
However, we were able to correctly evaluate the generated underground garage parking structures using the evaluation function obtained from sampling on the 7×9 square contour. Fig. 5 and Fig. 6 are manually designed structures considered to be of higher value according to conventional design practice, while Fig. 1 to Fig. 4 are the four most valuable structures selected by the evaluation function from the 5000 generated episodes. The selected structures, although not perfect, meet several of the most basic requirements of underground parking space design and are indeed slightly more valuable than the manually designed structures.
§ CONCLUSIONS
Sarsa, an on-policy TD algorithm, performs well in this paper. It can eventually generate reliable grid maps. However, the state set is so large that it cannot converge to a single solution that attains the highest return.
This study demonstrates the feasibility of using reinforcement learning to programmatically generate underground garage grid maps. We have yet to reach the goal of generating a reliable underground garage for an arbitrary contour. PCG for underground garage design still has a long way to go.
In terms of simulation, we are currently able to construct the corresponding 3D underground parking garage, and the generated garage has certain details: real-time lighting, ventilation ducts, column network structure, etc. Details such as the various pipe layouts are not yet realistic, and various scene elements can be further rendered to achieve a more realistic effect. This will allow us to further enhance the accuracy and reliability of the generated underground garage maps. These findings provide valuable insights for the development of intelligent underground garage planning and design tools.
In the future, we will extend this work with other AI technologies, such as classification <cit.>, knowledge graphs <cit.>, deep learning <cit.>.
|
http://arxiv.org/abs/2307.04494v1 | 20230710113346 | Enabling Faster Locomotion of Planetary Rovers with a Mechanically-Hybrid Suspension | [
"David Rodríguez-Martínez",
"Kentaro Uno",
"Kenta Sawa",
"Masahiro Uda",
"Gen Kudo",
"Gustavo Hernan Diaz",
"Ayumi Umemura",
"Shreya Santra",
"Kazuya Yoshida"
] | cs.RO | [
"cs.RO"
] |
empty
empty
The exploration of the lunar poles and the collection of samples from the martian surface are characterized by shorter time windows demanding increased autonomy and speeds. Autonomous mobile robots must intrinsically cope with a wider range of disturbances. Faster off-road navigation has been explored for terrestrial applications but the combined effects of increased speeds and reduced gravity fields are yet to be fully studied. In this paper, we design and demonstrate a novel fully passive suspension design for wheeled planetary robots, which couples a high-range passive rocker with elastic in-wheel coil-over shock absorbers. The design was initially conceived and verified in a reduced-gravity (1.625 m/s^2) simulated environment, where three different passive suspension configurations were evaluated against a set of challenges (climbing steep slopes and surmounting unexpected obstacles such as rocks and outcrops) and later prototyped and validated in a series of field tests. The proposed mechanically-hybrid suspension proves to mitigate more effectively the negative effects (high-frequency/high-amplitude vibrations and impact loads) of faster locomotion (>1 m/s) over unstructured terrains under varied gravity fields. This lowers the demand on navigation and control systems, impacting the efficiency of exploration missions in the years to come.
§ INTRODUCTION
Robots have eased many of the tasks performed in space. They have assisted humans in building habitable labs in low-Earth orbit and have traversed the deserted lands of Mars in the name of science. Upcoming exploration missions and currently road-mapped space activities require, however, robots capable of performing in domains for which new technological innovations are necessary.
The growing interest in exploring the lunar poles serves as a good example. Hydrogen-rich elements and other volatiles have been identified within the surface and subsurface layers of polar regolith <cit.>. The extraction and use of these compounds could prove essential for the long-term sustainable exploration of space. But unlike equatorial regions previously visited, the poles of the Moon harbor extreme terrain elevation changes, day-night temperature fluctuations of more than 300 K, and a large number of regions rarely struck by natural illumination—all while constrained by a sun that at times barely rises above the horizon <cit.>. These features demand faster, more effective, and highly autonomous robotic platforms capable of coping with a wide range of environmental constraints unfaced by previous missions.
§.§ Contributions
In this paper, we present the first prototype of a new fully passive suspension design capable of enabling planetary robots to safely negotiate unstructured terrains at speeds that approach 1 m/s (see fig:ex1)—two orders of magnitude larger than conventional rover speeds. We strive to understand what effects the combination of increasing speeds and a reduced gravity field has on the locomotor performance of rovers while addressing the following questions:
* What is the level of perturbations endured by free-balancing suspensions when facing some of the salient, unavoidable features of the lunar surface at 1 m/s?
* What degree of improvement could be obtained from the addition of passive energy-dissipation devices?
* Which passive suspension configuration provides the best results?
§.§ Background
Passive, inelastic, free-balancing suspensions in 4-to-8-wheeled chassis configurations have been employed by most of the rovers commissioned to explore the Moon and Mars. These suspensions are optimized for supporting and evenly distributing the weight of the rover, allowing it to overcome irregular terrains and obstacles, and mitigating the effects of impacts and vibrations while isolating the sensitive optics and electronics from these unwanted effects. Additionally, the suspensions of planetary robotic platforms are heavily constrained in terms of mass, volume, and power, making the rocker-bogie (RB) suspension <cit.> the most widely used type of suspension design.
First developed in the frame of NASA's Mars Pathfinder mission <cit.>, the RB suspension consists of a mechanism of two linkages (see Fig. <ref>). In the most commonly used configuration, a larger forward linkage called the rocker is fixed to the front wheel at one end and attached to the smaller rearward linkage, the bogie, at the other end through a free-rotating pivot point. Intended for 6-wheel configurations, the middle and rear wheels are each linked to both ends of the bogie. The rockers of both sides are connected together and attached to the chassis through a differential that maintains the body of the rover at a pitch angle equal to the average rotation of the two rockers.
The RB suspension effectively accomplishes the main functions previously described. Irregular topography and obstacles of a size comparable to the wheel diameter can be overcome without losing contact with the ground. NASA's Sojourner, which presented a reversed RB configuration (i.e., bogie facing forward), Mars Exploration Rovers (Spirit and Opportunity), Mars Science Laboratory (Curiosity), the newest Mars-2020 rover (Perseverance), and China National Space Administration's (CNSA) Yutu-2 rover were all designed with an RB suspension. At the same time, these missions have been characterized, however, by one significant limitation: speed.
The demand for rovers capable of operating at speeds much higher than the ones previously considered is rapidly growing: from speeds of just a few cm/s to ones on the order of 1 m/s <cit.>. At the same time, missions continuously require increased levels of autonomy, which means systems need to reliably cope with a higher degree of perturbations. This must be accomplished while maintaining the mechanical simplicity and reliability of the locomotion system and without excessively increasing the rover mass or its power requirements. At this speed level, inertial effects start to dominate the interaction with the ground <cit.>, which together with increased vibrations and impact loads may require the use of energy dissipation devices.
The RB suspension was designed for operational speeds below ∼10 cm/s. At higher speeds, the structural integrity of the suspension and the stability of the robot cannot be ensured. Attempts have been made to broaden the range of applications of RB suspensions by independently controlling the speed of the wheels <cit.> or dynamically adapting the suspension configuration <cit.>. In an effort to find alternative solutions that are better suited to a wider range of environmental conditions and speeds, actively articulated and adaptive suspension designs have been widely discussed and proposed <cit.>. While most of these solutions could be perfectly employed to maximize traversability and to minimize the detrimental effects resulting from high-speed locomotion, most rely on the optimal performance of other systems (e.g., hazard detection and terrain segmentation) or require an additional, non-negligible supply of power (e.g., to operate additional electromechanical actuators).
Fast lunar vehicles are, however, not completely new to the space exploration scene. The Russian lunokhods and NASA's two-crew-piloted Lunar Roving Vehicle (LRV) were capable of traveling at speeds that far exceeded those of present lunar and martian rovers. The lunokhods were driven at a maximum speed of 0.5 m/s, whereas the LRV was reported to have reached a top speed of ∼5 m/s while commanded by Eugene Cernan during Apollo 17 <cit.>. Despite these numbers, the capability to drive faster seemed to be closely associated with their ample power reserves and the direct human input—both vehicles were either directly piloted or teleoperated from Earth—rather than with variations purposely introduced in their suspension designs <cit.>.
The LRV inherited a suspension frequently used in conventional road vehicles with slight modifications. It consisted of an independent double-wishbone suspension with elasticity provided through transverse torsion bars in both upper and lower control arms, in addition to compliant wheel rims; and damping provided through a conventional silicone-oil damper <cit.>. The vertical stiffness of the suspension-wheel combination was 2.4 kN/m <cit.>. With regards to the performance of the suspension, Apollo 16 astronauts reported feeling “quite at home” traveling over the ridges south of the landing site toward Stone Mountain <cit.>. Despite the generally positive attitude of the astronauts, their reports also describe the tendency of the suspension to bounce uncontrollably when traveling over surfaces with a large density of small craters ( 1 m) and the impossibility to steer effectively, i.e., without excessive side slip, at speeds above 1.4 m/s. The barren and subdued lunar landscape and the need to drive at times directly toward the Sun did not make negotiating these obstacles any easier.
On the other hand, Lunokhod 1 (Luna 17) and Lunokhod 2 (Luna 21) were remotely operated rovers weighing ∼800 kg each. The lunokhods were designed with an 8-wheel suspension consisting of four carriers fixed to the bottom of the chassis <cit.>. A pair of rigid wheels with their respective swing arms were attached to each carrier. To deal with the high speeds and the heavy weight of the lunokhods, mechanical loads were dissipated through 3-beam torsion bars attached to the swing arms. No damping device was introduced in the design. Vertical stiffness varied from 8.8 kN/m of the front suspension-wheel combination to 3.5 kN/m of the middle combination. Similar issues to those experienced during the Apollo missions were found, although these were often associated with the poor illumination conditions of the lunar surface, the limited lookahead distance and deficient feed quality of navigation cameras, and the inexperience of the operators <cit.>.
§ PRELIMINARY ANALYSIS
During the Apollo and Luna missions, data on the suspension performance were never collected. Subjective evaluations of rideability and operability are insufficient to argue in favor of one or another suspension configuration, particularly for its application to autonomous robots. We, therefore, developed a series of simulation modules to understand the relative improvement and potential limitations that the addition of passive energy dissipation devices may introduce when attempting to travel faster—up to a speed of 1 m/s—in reduced-gravity, unstructured environments compared with the performance of conventional rigid suspensions. These simulation modules were run on Coppelia Robotics' simulator CoppeliaSim <cit.> in combination with CM Labs' high-fidelity physics engine Vortex.
§.§ Multibody dynamic model
As the baseline for our comparative analysis, we defined the dynamic model of a 4-wheel, All-Wheel Drive/2-Wheel Steering (AWD/2WS) rover with three different passive suspension configurations: 1) conventional rocker arms linked by a differential, 2) independent wheel suspensions guided by shock absorbers, and 3) a novel configuration based on the combination of the previous two, i.e., independent in-wheel compliant suspensions connected at each side of the rover by a free-balancing rocker. These configurations are referred to hereafter as 1) dependent-rigid (DR), 2) independent-elastic (IE), and 3) what we named the mechanically-hybrid suspension (MHS). General characteristics of the rover model are presented in Table <ref>.
The dynamic model was based on the design of ElDorado-2 <cit.>, a long-standing robotic platform previously used in the Space Robotics Lab at Tohoku University (see Fig. <ref>). Each suspension configuration presented the same mass distribution and suspension kinematics. In the case of the IE configuration, the rocker arms were locked in a horizontal position and a spring-damper system was introduced between the end of the arms and the wheel hub. Spring-damper systems were simulated by means of prismatic joints whose reaction force, F, is controlled by a proportional-derivative controller in which the proportional and derivative gains are replaced by the spring ratio, k, and damping coefficient, c, respectively (Eq. <ref>).
F = k e_i + c (e_i - e_{i-1})/Δ t,
where e_i describes the elongation of the joint at a time i, and Δ t the selected time step. Deformations within the linkages of the suspension were neglected. The defining parameters of the shock absorbers were 2 kN/m for the spring constant and 350 Ns/m for the damping coefficient with 35 mm of free-length. These are generic values intended to provide a stable movement and a limited static deflection. The optimization of suspension parameters requires the specific amplitude and frequency response of the unsprung and sprung masses against a particular set of excitations, which subsequently demands as inputs a list of mission-driven and design-specific requirements—a type of analysis for which this work was not intended.
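For illustration, the joint reaction force above can be evaluated as in the following sketch; the spring and damping values are the ones quoted in this section, while the function name and the example elongation step are assumptions made only for the example.

```python
def suspension_force(e_i, e_prev, dt, k=2000.0, c=350.0):
    """Spring-damper reaction force of the prismatic joint:
    spring term on the elongation, damping term on its finite-difference rate."""
    return k * e_i + c * (e_i - e_prev) / dt

# Example: the joint compresses by 5 mm during one 1-ms simulation step
print(suspension_force(e_i=0.005, e_prev=0.0, dt=1e-3))  # 10 N spring + 1750 N damping
```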
§.§ Simulation modules
We developed a series of obstacle negotiation and gradeability modules. The environmental characteristics of each of these modules were based on features commonly found on the surface of the Moon (see Fig. <ref>).
The obstacle negotiation module consisted of three different submodules that vary based on the type of obstacle faced: step obstacles of increasing height, a dynamically-enabled 10-cm hemispherical rock, and a 1.5-m long outcrop—i.e., a partially exposed section of bedrock—with protrusions as high as 10 cm, all modeled within CoppeliaSim.
The gradeability module presented 1.5-m slopes of increasing inclination up to a maximum of 30°. For the sake of maintaining the comparative analysis within reasonable margins, only situations where the robot faced the slopes at a heading angle of 0° (straight climbing) were simulated. The robot initially drove on flat ground until the appropriate speed was reached.
A gravity field of 1.625 m/s^2 representative of the lunar surface was used for all our simulations. No closed-loop traction or motion control was implemented.
§.§ Wheel-soil contact model
The lunar surface is characterized by a top layer of fine-grained, slightly-cohesive regolith. A specific wheel-soil contact model would be necessary to accurately describe the complex behavior of a rigid wheel interacting with this kind of particulate material <cit.>. To the best of our knowledge, none of the analytical models available in the literature—most of which are based on the Bekker-Reece-Wong terramechanic equations <cit.>—are capable of faithfully representing the full extent of physical phenomena taking place, much less those governing the dynamic interaction of a fast-moving, lightweight vehicle <cit.>. Numerical models were intentionally avoided due to the increased computational load these models require.
For the sake of simplicity, and in order to have a symbolic representation of the frictional behavior of lunar soil, we opted for a Coulomb friction model with an isotropic friction coefficient of 0.4 for the wheel-soil contact—frequently used as representative of metallic wheels rolling over sandy terrains—and 1.0 for the interaction with obstacles such as rocks and outcrops.
§.§ Performance evaluation parameters
The comparative evaluation of the performance was based on the success of each configuration, the maximum vertical load and pitch torque measured at both ends of the rocker arms, and the maximum vertical acceleration experienced by the chassis. Additionally, the trajectory of the robot was recorded to understand the level of longitudinal and lateral slippage future traction control systems would have to overcome.
§.§ Results and discussion
§.§.§ Obstacle negotiation performance
The heatmaps shown in Fig. <ref> represent the level of success of the different suspension configurations in overcoming perfect steps of height 1–12 cm at speeds ranging from 0.05–1 m/s. Due to their vertical profile, steps are considered the most challenging obstacle to negotiate. Green represents success, red indicates failure to drive over the step, and yellow defines situations in which the front wheel successfully overcame the step but the rear wheel was trapped or completely missed the step due to excessive lateral slippage.
Table <ref> lists the maximum values of vertical load, pitch torque, and acceleration experienced at top speed (1 m/s) in every case and for each configuration. The results obtained from the obstacle negotiation module illustrate the considerable benefit obtained from the addition of passive energy dissipation devices to the suspension design. On average among all the cases analyzed (steps, rocks, and outcrops), a 71% reduction in maximum impact load, a 37% reduction in maximum pitch torque, and a 33% reduction in maximum vertical acceleration of the chassis were observed when elasticity and damping were incorporated into the design. When compliant in-wheel suspensions were then combined with a high-range free-balancing rocker, the MHS outperformed the other two in every situation, overall reducing by 62% and 43% the detrimental effects of an irregular terrain when compared to the DR and IE configurations, respectively. The compliance of the in-wheel suspension attenuates the high-frequency/high-amplitude vibrations while the dependency of the rocker provides a more efficient weight transfer allowing the rover to overcome large obstacles without inducing excessive traction losses or instabilities (see fig:gforce).
Additional evidence of the improved stability brought by the MHS configuration is illustrated by the vertical trajectory of the robot when traversing the outcrop (see Fig. <ref>). With both the DR and IE suspensions, the robot experienced a full take-off (four wheels in the air) followed by a complete rollover, a situation that was avoided in the case of the MHS. The suspension kept the rover stable and the wheels always in contact with the jagged surface except when first impacting the edge of the outcrop when both front wheels were briefly lifted from the ground due to the sudden rebound of the in-wheel suspension; a behavior that could be mitigated with a further optimization of the suspension parameters.
§.§.§ Gradeability
Less variation in the level of success of the different configurations was observed when the rover faced 1.5-m slopes of 5–30° at speeds ranging from 0.05–1 m/s (see Fig. <ref>). At higher speeds, and regardless of the configuration, the top of the steepest slopes (20° and 25°) was often reached with just the rear wheels in contact with the ground—a predominant effect in the IE and MHS configurations due to the excessive rebound of the suspension upon first confronting the slope.
The maximum vertical loads, pitch torques, and vertical accelerations experienced by the rover when facing a slope of 20° at 1 m/s are gathered in Table <ref>. We initially expected greater levels of variation in the gradeability performance of the different configurations given the evidence presented in <cit.>. In this work, the climbing ability of rocker arms evidently outperformed that of independent swing arms under every circumstance evaluated. Due to the slip-dominant nature of the vehicle-ground interaction when climbing a slope, it is possible that the absence of a more accurate representation of the wheel-soil contact behavior in our simulation modules and the lack of an active control scheme resulted in the lack of variation in the levels of success of the different suspension configurations. Nonetheless, and in line with the observations previously made, the MHS configuration successfully mitigated the negative effects of impacts and vibrations beyond what was accomplished by either the DR or IE configurations.
§ SYSTEM DESIGN
In light of the evidence provided by the results of the simulations, we conceived a new rover prototype, dubbed Explorer 1 (EX1), based on the principles of the MHS configuration (see Fig. <ref>).
EX1 was designed with a 4-wheel AWD/4WS locomotion configuration capable of achieving a maximum operational speed of 1 m/s. High-travel aluminum rocker arms are linked together and attached to the chassis through a 3-gear differential box housed inside the body frame. These rocker arms have a range of motion of about ±250 mm (2.5 times the wheel radius), only limited by the length of the wire harness of the actuator drive electronics. Attached at both ends of each rocker is a double-coil-over elastic suspension providing a lower travel range of 35 mm. The harmonic drives of the steering motors act as the connecting pieces between the rocker arms and the compliant component of the suspension. This allows the latter part of the suspension to rotate with the wheel during steering, but has the inconvenience of varying the scrub radius—the distance between the steering axis and the vertical centerline of the wheel—based on the level of compression of the suspension; a shortcoming that was accepted in favor of the modularity of the design (i.e., the design is easily adaptable to 6- and 8-wheel configurations) and due to the short free-length of the damper.
The low-travel suspension consists of upper and lower control arms passively commanded by a pair of 104-mm shock absorbers connected to the top of the wheel knuckle (see fig:lts). This arrangement maintains the camber angle nearly constant during wheel travel. Two parallel shock absorbers were used to reduce the stiffness required on the springs while providing a certain level of redundancy in the design. The shock absorbers are formed by a replaceable 2.5 kN/m spring (5 kN/m per wheel) and an adjustable damper and were selected off-the-shelf from a radio-control car manufacturer. Both the bracket and the wheel knuckle were designed with multiple mounting points so that the overall stiffness of the suspension can be slightly modified by tilting the orientation of the dampers. The stability limits of the design were verified in simulations, achieving a static longitudinal/lateral stability under lunar gravity of 30° and a quick and smooth response to dynamic perturbations such as steps and cornering maneuvers (see fig:stability).
§ FIELD TEST RESULTS
To validate the locomotor performance of EX1, we conducted a field test campaign in a representative sandy field. While these tests allow us to functionally validate a new suspension design for ground testing purposes, it should be noted from the outset that this approach is inadequate for optimizing design parameters with respect to a potential flight model configuration. The conventional approach to validating the mobility performance of planetary rovers on Earth prior to their missions suffers from a strong limitation in situations when speed plays a role. While gravity scaling is often applied to testing platforms—i.e., adapting the mass of engineering models to represent the overall weight of the flight model at destination—observable behaviors under testing are only representative when the quasi-static approximation can be applied. The moment dynamic effects dominate the behavior of the rover <cit.> and its interaction with the ground <cit.>, as is the case with our experiments, the full-body mass of the rover must be used for a representative characterization of the performance and subsequent optimization of design parameters. This drastically affects the rover's response to environmental and operational stimuli <cit.>. In these cases, gravity offloading must be applied <cit.>, but further work would still be required to properly model the complex interplay of inertial, gravitational, and frictional forces taking place.
§.§ Dynamic stability
The first experiment was aimed at evaluating the contribution of the independent shock absorbers when moving at high speed over a 10-m, nearly-flat, unconsolidated ground in both transient and steady-state conditions. In this case, the rover was commanded to follow a straight trajectory divided into three phases: a) a first phase where the rover accelerates up to 1.0 m/s, b) a second phase where the rover runs at a uniform maximum speed of 1.0 m/s, and c) a final phase where the rover is decelerating to a full stop (see fig:maneuverability_field_test).
We performed these tests with the rover in two different suspension configurations: a representative DR configuration, in which the rocker is free to rotate but the shock absorbers are replaced by rigid elements locking the low-travel suspension in place; and the MHS, with the elastic elements of the suspension free to move. Six runs were conducted for each suspension configuration. Table <ref> lists the result of comparing the two configurations based on the vertical acceleration of the chassis as recorded by an IMU fixed to the top of the attachment element of the left-side rocker (see fig:ex1). To reduce the level of sensor read noise, a 4-point moving average filter was applied before extracting max. and min. values, while the mean of the standard deviation of the vertical accelerations was computed from the original, unfiltered data across all six runs.
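A minimal sketch of this post-processing step is given below; the array layout and function name are illustrative and do not correspond to the actual logging pipeline of the rover.

```python
import numpy as np

def summarize_runs(runs, window=4):
    """runs: list of 1-D arrays of vertical acceleration samples, one per run."""
    maxima, minima, stds = [], [], []
    kernel = np.ones(window) / window
    for acc in runs:
        smoothed = np.convolve(acc, kernel, mode="valid")  # 4-point moving average
        maxima.append(smoothed.max())
        minima.append(smoothed.min())
        stds.append(acc.std())  # std taken on the unfiltered signal
    return max(maxima), min(minima), float(np.mean(stds))
```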
Results confirm an overall reduction of the vertical accelerations experienced by the chassis when the MHS is used. This is particularly significant during the acceleration phase where the rover experienced greater vibrations due to an observed increase in wheel slippage. This is aligned with a well-established understanding of the performance of elastic suspensions in off-road terrestrial vehicles but it was important to evaluate the potential interference that in-wheel shock absorbers could have on the movement of the free-balancing rockers, less common in terrestrial applications.
§.§ Obstacle negotiation capability
In this second experiment, the rover was commanded to drive its left-side wheels over a 10-cm rock (see fig:obstacle_negotiation_field_test). We now wanted to observe the potential differences in performance when dependency was introduced into the design of a conventional independent suspension configuration. We compared the obstacle negotiation capability between an IE configuration (i.e., rocker rotation locked) and the MHS. Tests were performed at three different speeds (0.2, 0.5, and 1.0 m/s) and each test was conducted three times for each suspension configuration and speed. The magnitude of the force applied to the front wheels was recorded by an in-wheel force/torque sensor and vertical accelerations were measured by the same IMU as in the dynamic stability tests.
Table <ref> also gathers all the IMU measurements recorded during the obstacle negotiation tests. Both the IE and MHS configurations successfully overcame the obstacle at 0.2 and 0.5 m/s, but it was only with the MHS that the rover was capable of seamlessly negotiating the rock at 1.0 m/s. At this speed, the observable impact on the IE configuration was such that no successful runs were ultimately conducted with this configuration due to concerns over safety and the structural integrity of the rover. fig:force_norm_comparison_in_obstacle_negotiation displays the average of the norm of the force vector acting on the front wheels when overcoming the rock for both the IE and MHS configurations. Damping of impact loads on the front-left wheel (both mean and maximum force) was also greater in the case of the MHS, and the degree of damping increased with speed, reducing the loads by 24% on average across wheels and speeds. The main benefit associated with the addition of dependency is the increased pressure exerted on the wheels not overcoming the obstacle, providing the right-side wheels with greater traction and drastically improving the obstacle-surmounting capabilities of the rover (see fig:force_norm_comparison_in_obstacle_negotiation, fig:right_wheel_force).
§ CONCLUSION
The increased autonomy demanded by upcoming missions to the Moon and Mars implies planetary robots have to be capable of coping with a wide range of disturbances. The addition of compliant elements to the suspension system of these robots appeared to be vital in counteracting the detrimental effects of impact loads and vibrations when driving at high velocities (≥ 1 m/s) under weaker gravity fields. But even when these elements are included, the specific configuration of the suspension design plays an important role in the rover's ultimate performance. A new passive suspension configuration, the so-called mechanically-hybrid suspension (MHS), was proposed and compared with more traditional rocker and independent swing arm suspensions. The MHS combines the functional benefits of both dependent and elastic elements. Simulation results under a lunar-like gravity field confirmed our initial hypothesis. Field test results validated that an MHS configuration could greatly improve stability while successfully isolating the chassis of the rover from unwanted vibrations and impact loads beyond what could be accomplished with either of the other two commonly used passive configurations. An improved suspension design also affects other aspects involved in navigation, lowering the demand on perception and control systems, increasing duty cycles, and enabling higher levels of autonomy. Future work will explore additional improvements and variations in the suspension configuration such as combining the MHS with non-pneumatic, flexible wheels. Their combination could bring about higher levels of stability and terrain compliance while further reducing non-vertical impact loads and vibrations.
§ ACKNOWLEDGMENT
The authors would like to thank Alan Allart, Tristan Lecocq, Kazuki Nakagoshi, Ryusuke Wada, Danishi Ai, and Merlijn Siffels for their invaluable help and support in the development of EX1.
|
http://arxiv.org/abs/2307.05413v1 | 20230711162942 | The merger of co-rotating vortices in dusty flows | [
"Shuai Shuai",
"Anubhab Roy",
"M. Houssem Kasbaoui"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
We investigate the effect of particle inertia on the merger of co-rotating dusty vortex pairs at semi-dilute concentrations. Using Eulerian-Lagrangian simulations, we show substantial departure from the vortex dynamics previously established in particle-free flows. Most strikingly, we find that dispersed particles with moderate inertia cause the vortex pair to push apart to a separation nearly twice as large as the initial separation. We find that antisymmetric vorticity generated by particles flung out of the rotational cores causes the vortex pair repulsion. Eventually, the two dusty vortices merge into a single vortex with most particles accumulating outside the core similar to the dusty Lamb-Oseen vortex described in Shuai & Kasbaoui (J. Fluid Mech., vol 936, 2022, A8). For weakly inertial particles, we find that the merger dynamics follow the same mechanics as those of a single-phase flow, albeit with a density that must be adjusted to match the mixture density. Lastly, highly inertial particles tend to fragment the vortex cores leading to murky merger dynamics.
§ INTRODUCTION
The majority of prior work on vortex merger concerned single-phase flows <cit.>, where it was shown that the merger of two co-rotating vortices with equal strength is initiated only once the ratio of vortex core size a to pair separation b reaches a critical value. If the ratio a/b is below (a/b)_crit, the two vortices rotate around one another, while their separation remains mostly constant and their sizes grow due to viscous diffusion. <cit.> defined this stage as a viscous metastable stage with a lifetime that depends on the dissipation time scale. As the vortices expand, the ratio a/b eventually reaches the critical threshold (a/b)_crit and the convective stage follows, in which the vortices move toward each other rapidly. <cit.> found the critical threshold to be (a/b)_crit = 0.29-0.32 in water tank experiments. Later, <cit.> also found (a/b)_crit≃ 0.29. By analyzing the vorticity field, <cit.> determined that the merger is initiated once the two vortices are sufficiently close to generate antisymmetric vorticity, which occurs when a/b∼ (a/b)_crit. During this stage, the induced velocity pulls the two vortices together, resulting in the two vortices becoming intertwined. Viscous diffusion activates once more to smooth the large vorticity gradients resulting from the merger. A single elliptical vortex remains at the end.
To the best of our knowledge, vortex merger in dusty flows has not been previously investigated. Yet, the dynamics in these flows may deviate considerably from those in particle-free flows. This is especially true for dusty flows with particle volume fractions ϕ>10^-5, since this leads to a significant feedback force from the particles on the fluid. In this so-called two-way coupling regime, the dispersed phase may cause large flow modulation <cit.>. Recently, we have shown that the interaction between dispersed inertial particles and a single vortex alters the dynamics from what is commonly understood from particle-free vortex dynamics <cit.>. For example, a two-way coupled dusty Lamb-Oseen vortex decays significantly faster than a particle-free one <cit.>. This enhanced decay is due to the ejection of inertial particles from the vortical core. While the particles cluster into a ring surrounding the vortex, their feedback force on the fluid leads to faster decay of the flow structure. Perhaps an even more striking effect is the fact that two-way coupled inertial particles dispersed in the core of a two-dimensional vortex trigger an instability <cit.>. This is in contrast to the remarkable stability of particle-free vortices to two-dimensional perturbations <cit.>. We have shown in <cit.> that the ejection of the particles from the vortex core activates a centrifugal Rayleigh-Taylor instability that persists even for non-inertial particles.
In light of these previous findings, it is expected that the merger dynamics in two-way coupled dusty flows will be considerably different from those noted by <cit.>. In the present study, we revisit the problem of co-rotating vortices with equal strength, augmented with mono-disperse inertial particles. We use Eulerian-Lagrangian simulations to show new merger mechanisms that depend on the particle inertia. A surprising outcome is that inertial particles may even temporarily push apart the two vortices.
§ PARTICLE-FREE VORTEX MERGER
Before addressing how inertial particles may modulate the merger dynamics, we first describe the different stages of vortex-pair merger in a particle-free case that will be used in <ref> to assess the effect of introducing inertial particles. The particle-free case considered matches the experiments of <cit.>, and is characterized by a circulation Reynolds number Re_Γ=ρ_f Γ/μ_f=530, where Γ is the vortex circulation, and ρ_f and μ_f are the fluid density and viscosity, respectively. In the following simulations, two co-rotating Lamb-Oseen vortices with equal radii a are initialized with a separation b_0, such that the ratio a/b_0=0.17 is initially below the merger threshold (a/b)_crit=0.29. This indicates that the two vortices will not merge immediately, but rather undergo a first diffusive stage before the onset of a convective stage. The angular velocity is defined as ω_Γ=Γ/(2π a^2). The Navier-Stokes equations are integrated using the computational approach described in <cit.>. The simulation grid is uniform with a high spatial resolution a/Δ x ≈ 51 to provide good resolution of the vortex cores.
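For concreteness, the initial condition can be sketched as follows; this only builds the vorticity field of two Gaussian (Lamb-Oseen) vortices on a uniform grid and is not the flow solver used in this work, and the domain size, grid resolution, and circulation value are placeholders.

```python
import numpy as np

def corotating_pair(n=512, L=10.0, Gamma=1.0, a=0.17, b0=1.0):
    """Vorticity field of two equal co-rotating Lamb-Oseen vortices
    separated by b0, each with core radius a and circulation Gamma."""
    x = np.linspace(-L / 2, L / 2, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    omega = np.zeros_like(X)
    for xc in (-b0 / 2, b0 / 2):  # the two vortex centers
        r2 = (X - xc) ** 2 + Y ** 2
        omega += Gamma / (np.pi * a ** 2) * np.exp(-r2 / a ** 2)
    return X, Y, omega
```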
According to <cit.>, the merger of two co-rotating vortices follows three stages, called the first diffusive stage, the convective stage, and the second diffusive stage. All these stages are reproduced in our simulations, as can be seen in figure <ref>, which shows the evolution of the normalized vortex separation b/b_0 and of the normalized axial vorticity. The two vortices rotate around one another, but their separation remains constant, as can be seen in figure <ref>. After a time period t_D, the cores have grown sufficiently to reach the merger threshold (a/b)_crit=0.29, and the convective merger is initiated. During this stage, the separation decreases linearly. The vortices are deformed significantly, as shown at (t-t_D)Γ/ b_0^2=0 and 6 in figure <ref>. The second diffusive stage occurs between 6≲ (t-t_D)Γ/ b_0^2≲ 9, after which the vortex pair can be considered fully merged.
§ PARTICLE-LADEN VORTEX MERGER
§.§ Simulation configuration and vortex center identification
We now reconsider the merger of the vortex pair in <ref> under dusty flow conditions, i.e., now, we consider the effects of dispersed particles on the merger dynamics.
To that end, we conduct pseudo-2D Eulerian-Lagrangian simulations with the same methodology previously deployed in <cit.> and <cit.>. Except for the introduction of randomly placed particles, the simulation configuration remains the same as described in <ref>. The particles have diameter d_p, density ρ_p, and are initialized with velocities that match the fluid velocity at their locations.
We consider seven cases where the particle inertia and mass loading are varied. Table <ref> lists a summary of the non-dimensional parameters in each case. Case A is the reference particle-free case. Cases B and C correspond to the limit of very low particle inertia, characterized by the Stokes number St_Γ=τ_p/τ_f. Here τ_p=ρ_p d_p^2/(18μ_f) and τ_f=2π r_c^2/Γ are the particle response time and the fluid response time in the vortex flow. In these two cases, the mass loading M=⟨ϕ⟩ρ_p/ρ_f is 0.5 or 1.0 and is varied by varying the average particle volume fraction ⟨ϕ⟩. In cases D-G, the Stokes number St_Γ is varied by changing the particle diameter as shown in Table <ref>. For all cases, the separation ratio is r_c/b_0=0.17 and the density ratio ρ_p/ρ_f is fixed at 2167.
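For orientation, the non-dimensional groups listed in Table <ref> follow directly from these definitions; a small helper illustrating the computation (making no claim about the actual dimensional values used in the simulations) is:

```python
import math

def stokes_number(rho_p, d_p, mu_f, Gamma, r_c):
    """St_Gamma = tau_p / tau_f with tau_p = rho_p d_p^2 / (18 mu_f)
    and tau_f = 2 pi r_c^2 / Gamma."""
    tau_p = rho_p * d_p ** 2 / (18.0 * mu_f)
    tau_f = 2.0 * math.pi * r_c ** 2 / Gamma
    return tau_p / tau_f
```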
The instantaneous vortex separation, i.e., the distance between the two vortex centers, is a key variable for investigating the vortex merger process. Unlike in particle-free merger, where smooth vorticity fields allow easy detection of the centers, the feedback force from dispersed particles causes large vorticity fluctuations that make the detection of the vortex centers harder. The left picture in figure <ref> shows an example of the large vorticity fluctuations obtained in a particle-laden case. Due to this, we have found it necessary to filter the data from the Eulerian-Lagrangian simulations to reliably detect the vortex centers.
To identify the vortex centers, we first compute the filtered vorticity ω_z(𝐱) from the original vorticity field ω_z(𝐲) by convolving it with a triangle filter kernel g of support δ_f. To retain the vortex features after the filtering process, δ_f is set to half of the initial vortex core radius, that is, a_0/2. The right picture in figure <ref> shows the result of applying this filtering procedure to the field on the left side of figure <ref>. The gradient descent method is then applied to locate the vortex centers.
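A simplified sketch of this filtering-and-search procedure is shown below; the separable triangle kernel and the local-maxima search are stand-ins for the exact filter and gradient-descent routine described above, not the actual implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def find_vortex_centers(omega, dx, core_radius):
    """Filter the vorticity field with a triangle kernel of support
    core_radius/2, then return the two strongest well-separated maxima."""
    half_support = int(round(0.5 * core_radius / dx))
    w = 1.0 - np.abs(np.arange(-half_support, half_support + 1)) / (half_support + 1)
    kernel = np.outer(w, w)
    kernel /= kernel.sum()
    filtered = convolve2d(omega, kernel, mode="same", boundary="symm")

    # crude center search: pick the two highest grid maxima separated by
    # more than the kernel half-support
    flat = np.argsort(filtered, axis=None)[::-1]
    centers = []
    for idx in flat:
        ij = np.unravel_index(idx, filtered.shape)
        if all(np.hypot(ij[0] - c[0], ij[1] - c[1]) > half_support for c in centers):
            centers.append(ij)
        if len(centers) == 2:
            break
    return centers, filtered
```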
§.§ Weakly inertial particles
Figure <ref> shows the isocontours of normalized particle volume fraction and normalized filtered vorticity from tΓ/b_0^2=0 to 14 for St=0.01, 0.1 and 0.3 with mass loading M=1. Videos of these cases are provided in supplementary materials. From the snapshots in figure <ref>, it is clear that semi-dilute inertial particles alter the merger dynamics even at very small Stokes numbers. For the case at St=0.01, shown on the first and second rows of figure <ref>, the dynamics of the vorticity field remain qualitatively similar to those of the particle-free vortex pair, however, the merger takes significantly longer and the vortices are further stretched than in the single-phase case. During this process, the particles are gradually ejected from the two vortex cores under the effect of preferential concentration <cit.>. By tΓ/b_0^2=7, this results in the formation of two distinct void fraction bubbles. These structures get progressively larger and more stretched, as can be seen at tΓ/b_0^2=10.5 and 14. As the vortices approach one another, a line of particles can be seen dividing the two vortices at tΓ/b_0^2=14. This line of particles forms because the region between the two vortices is dominated by straining, which draws in particles originally located outside the vortices, and those that have been ejected from the cores.
Figure <ref> shows the evolution of the vortex-pair separation for the particle-free case A, and cases B (St_Γ=0.01, M=1.0) and C (St_Γ=0.01, M=0.5). While the particle-free vortices merge around tΓ/b_0^2≈ 16, merger occurs around tΓ/b_0^2≈ 28 with mass loadings M=0.5 and 1.
The observations in figures <ref> and <ref> suggest that, to a first approximation, the merger of a dusty vortex pair at St_Γ≪ 1 is similar to the merger of particle-free vortices in an effective fluid with density ρ_eff=(1+M)ρ_f. To justify this reasoning, consider a Two-Fluid model of the particle phase where the particle velocity field is expanded in the limit of small inertia, as done in <cit.>
∂ϕ/∂ t + u_p·∇ϕ = -ϕ∇·u_p,
u_p = u_f - τ_p(∂u_f/∂ t+u_f·∇u_f)+O(τ_p^2),
Combining these equations with the fluid conservation equations
∇·u_f=0,
ρ_f (∂u_f/∂ t+u_f·∇u_f) = -∇ p +μ∇^2u_f + ρ_p ϕ/τ_p(u_p-u_f),
yields the following mixture equations
∂ρ_eff/∂ t + u_f·∇ρ_eff = τ_p {(ρ_eff-ρ_f) ∇u_f:∇u_f+∇ρ_eff/ρ_eff·(-∇ p +μ∇^2u_f)},
ρ_eff(∂u_f/∂ t+u_f·∇u_f) = -∇ p +μ∇^2u_f,
where ρ_eff=(1+Mϕ/⟨ϕ⟩)ρ_f is the local effective density, the first term on the right-hand side of (<ref>) represents preferential concentration and the second term is due to the slip between the two phases. Thus, in the limit of negligible inertia, i.e., τ_p→ 0, or equivalently, St_Γ→ 0, the inertial effects due to preferential concentration and slip vanish, making equations (<ref>) and (<ref>) identical to those of a single-phase fluid with effective density ρ_eff.
To verify this hypothesis, we conducted additional simulations of single-phase merger where the fluid density equals ρ_eff=(1+M)ρ_f for M=0.5 and M=1.0. Comparison of the vortex-pair separation from these simulations with the separation measured in the particle-laden cases B and C (see figure <ref>) shows excellent agreement during most of the merger. Deviations that can be seen for tΓ/b_0^2≳ 18 are likely due to inertial effects which, as suggested by the growth of the void bubbles, become significant as time progresses, despite the low Stokes number St_Γ = 0.01.
§.§ Moderately inertial particles
While the merger dynamics of a dusty flow with weakly inertial particles (St_Γ≪ 1) are qualitatively similar to those of a particle-free flow, new dynamics emerge with increasing particle inertia. The most notable change observed in our simulations with 0.05 ≤ St_Γ≤ 0.2 is that the eventual merger of the vortex pair begins with the two vortices pushing apart.
The void bubbles in case E (St_Γ = 0.1, M=1.0), shown in figure <ref>, grow significantly faster than in the low-inertia case B, as the effects of preferential concentration intensify with increasing particle inertia <cit.>. Further, the deformation of the vortex cores and the void bubbles starts earlier, suggesting that this process is related to the particle inertia. Due to the faster depletion of the cores, the particle line separating the two vortices appears earlier, at around tΓ/b_0^2=7, and becomes thinner as the merger progresses. During this early transient, tΓ/b_0^2≲ 10.5, the two cores push apart, leading to an increase in the separation compared to the initial state. The cores start approaching one another only once the line of particles separating them becomes sufficiently thin and eventually ruptures.
Figure <ref> shows the evolution of the normalized separation b/b_0 for cases D and E, alongside the data for the particle-free case A. For tΓ/b_0^2≲ 5, the separation remains approximately constant. During this stage, the two vortices are mostly independent from one another and evolve according to dynamics similar to those reported in <cit.>. Unlike in single-phase merger, where the growth of the vortex cores is exclusively driven by viscosity, the growth of the cores and that of the void bubbles are interlinked, as the feedback force from the particles exiting the cores causes greater spreading of the vorticity field. Time tΓ/b_0^2≃ 5 marks the start of a new stage, which we call the repulsion stage, and which lasts until tΓ/b_0^2≃ 18 at St_Γ=0.05 and tΓ/b_0^2≃ 16 at St_Γ=0.1. During this stage, the vortex-pair separation increases, reaching up to b/b_0 ∼ 1.5 at St_Γ=0.1, which appears to be a saturation limit. At the end of this stage, the two void regions have merged, resulting in a large particle-free region containing the two vortices. The dynamics from here onward follow those of single-phase merger.
To elucidate the mechanism driving the repulsion stage, we follow the approach of <cit.>, in which the vorticity field is decomposed into symmetric and antisymmetric components. We first rotate the filtered vorticity field ω_z (x,y) so that the two vortex centers lie on the horizontal axis. Figure <ref>(a) shows the filtered vorticity field ω_z (x,y) before the rotation and figure <ref>(b) shows the vorticity field ω_z (x',y') after the rotation. The rotated vorticity ω_z (x',y') is then decomposed as
ω_z (x',y')= 1/2[ω_z (x',y') + ω_z (-x',y')] + 1/2[ω_z (x',y') - ω_z (-x',y')]=ω_S+ω_A
where ω_S is the symmetric vorticity and ω_A is the antisymmetric vorticity, as shown in figure <ref>(c) and figure <ref>(d), respectively. As argued by <cit.>, it is only the antisymmetric vorticity field ω_A that contributes to the change of separation. Depending on the symmetries of ω_A, the induced velocity field may either pull together or push apart the vortex cores. Figure <ref> shows the antisymmetric vorticity field for the cases St_Γ=0.01, 0.05, and 0.1 at successive instants. For all cases, the vorticity field has four pairs of counter-rotating vortices, except at the later time around tΓ/b_0^2=14. The induced velocity from the two inner vortex pairs pushes the two vortex centers apart, whereas the velocity generated by the two outer counter-rotating vortex pairs pulls the two centers together. The difference in vorticity intensity between the inner side and the outer side determines the change of separation. At tΓ/b_0^2=5, the vorticity intensity of the two inner vortex pairs is roughly equal to that of the two outer pairs, resulting in the stable separation distance between the two vortex centers at early times, tΓ/b_0^2≤ 5, during the first diffusive stage. At tΓ/b_0^2=9, contrary to the low-inertia case (St_Γ=0.01), where the two inner vortex pairs are smaller than the outer pairs, the presence of moderately inertial particles (St_Γ=0.05 and 0.1) induces larger inner-side vorticity, leading to the increase of separation for 5≤ tΓ/b_0^2 ≤ 11, as shown in <ref>, whereas the separation decreases for St_Γ=0.01 during this period. At tΓ/b_0^2=14, figure <ref> shows that the vorticity intensity of the outer side is much larger than that of the inner side for all three cases, which causes a rapid merger of the two co-rotating vortices at later times.
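In practice, this decomposition amounts to a reflection of the rotated field about the vertical axis through the midpoint between the two centers; a minimal sketch, assuming the field is stored on a grid that is symmetric in x', is:

```python
import numpy as np

def split_symmetry(omega):
    """Split a rotated, centered vorticity field omega(x', y') into its
    symmetric and antisymmetric parts with respect to x' -> -x'.
    Axis 0 is assumed to be x'."""
    mirrored = omega[::-1, :]           # omega(-x', y')
    omega_s = 0.5 * (omega + mirrored)  # symmetric part
    omega_a = 0.5 * (omega - mirrored)  # antisymmetric part
    return omega_s, omega_a
```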
§.§ Highly inertial particles
With increasing Stokes number, the feedback force from the particles increasingly distorts the vortical structures, making the merger murkier. This is illustrated in case G (St_Γ=0.3, M=1), shown in figure <ref>, where the vortices appear highly stretched at the early time tΓ/b_0^2 =3.5. This extreme distortion causes each vortex to split into two smaller vortices, an inner one and an outer one, as can be clearly seen at tΓ/b_0^2 = 7. From figures <ref> and <ref>, the inner vortices start merging around tΓ/b_0^2 = 18 for case F (St_Γ=0.2) and tΓ/b_0^2 = 14 for case G (St_Γ=0.3), with no repulsion stage. Meanwhile, the outer vortices start with a repulsive stage for 3≲ tΓ/b_0^2 ≲ 9, during which the centers push apart to a maximum distance b/b_0≃ 1.9. This stage is followed by a convective stage and a second diffusive stage for 9≲ tΓ/b_0^2 ≲ 16. At the end, a single distorted vortex is left, enclosed inside a larger void-fraction bubble.
§ CONCLUSION
Eulerian-Lagrangian simulations of the merger of co-rotating vortices laden with inertial particles reveal new mechanics specific to dusty flows. The present simulations were carried out in the semi-dilute regime, specifically for particle volume fractions ⟨ϕ⟩=2.3 - 4.6 × 10^-4 and mass loading M=0.5 - 1.0. Despite the low particle concentration, dusty flows in this regime have strong momentum coupling between the carrier and dispersed phase since the mass loading is of order unity. To investigate the effect of particle inertia, we varied the Stokes number St_Γ in the range 0.01 to 0.3. We found that these particles can be classified into three main categories. Particles that have a Stokes number St_Γ≤ 0.01 are considered weakly inertial. With such particles, the merger of dusty vortices is delayed compared to the merger of particle-free vortices. However, the merger dynamics are not much different from those of the particle-free case, if one considers the particle-fluid mixture as an effective fluid with density ρ_eff=(1+M)ρ_f. Particles with Stokes number in the range ∼ 0.05 to ∼ 0.2 are considered moderately inertial. In this case, the merger of a dusty vortex pair exhibits an additional stage characterized by a temporary repulsion of the vortex cores, before undergoing successive convective merger and second diffusive stages. Analyzing the antisymmetric vorticity field, we find that the vortex separation increase is caused by a repulsive force generated by the ejection of particles from the vortex cores. Once all particles have been ejected from the inner region separating the two cores, the merger is initiated and follows dynamics similar to those of particle-free vortex pairs. Highly inertial particles are those with St_Γ≳ 0.3. The merger becomes murkier, as the feedback force from these particles significantly distorts the vortex cores. For the case with St_Γ=0.3, we find that each vortex splits into two parts and that the inner vortices merge first, followed by the outer ones. In all cases, the final outcome of the merger is a single vortex with a core that is depleted of all particles and a surrounding halo of high particle concentration.
Acknowledgments. The authors acknowledge support from the US National Science Foundation (award #2148710, CBET-PMP), and from IIT Madras for the “Geophysical Flows Lab” research initiative under the Institute of Eminence framework.
Declaration of Interests. The authors report no conflict of interest.
|
http://arxiv.org/abs/2307.04707v2 | 20230710170651 | Asymptotic Complexity Estimates for Probabilistic Programs and their VASS Abstractions | [
"Michal Ajdarów",
"Antonín Kučera"
] | cs.FL | [
"cs.FL"
] |
The standard approach to analyzing the asymptotic complexity of probabilistic programs is based on studying the asymptotic growth of certain expected values (such as the expected termination time) for increasing input size. We argue that this approach is not sufficiently robust, especially in situations when the expectations are infinite. We propose new estimates for the asymptotic analysis of probabilistic programs with non-deterministic choice that overcome this deficiency. Furthermore, we show how to efficiently compute/analyze these estimates for selected classes of programs represented as Markov decision processes over vector addition systems with states.
§ INTRODUCTION
Vector Addition Systems with States (VASS) <cit.> are a model for discrete systems with multiple unbounded resources expressively equivalent to Petri nets <cit.>. Intuitively, a VASS with d ≥ 1 counters is a finite directed graph where the transitions are labeled by d-dimensional vectors of integers representing counter updates. A computation starts in some state for some initial vector of non-negative counter values and proceeds by selecting transitions non-deterministically and performing the associated counter updates. Since the counters cannot assume negative values, transitions that would decrease some counter below zero are disabled.
In program analysis, VASS are used as abstractions for programs operating over unbounded integer variables. Input parameters are represented by initial counter values, and more complicated arithmetical functions, such as multiplication, are modeled by VASS gadgets computing these functions in a weak sense (see, e.g., <cit.>). Branching constructs, such as if-then-else, are usually replaced with non-deterministic choice. VASS are particularly useful for evaluating the asymptotic complexity of infinite-state programs, i.e., the dependency of the running time (and other complexity measures) on the size of the program input <cit.>. Traditional VASS decision problems such as reachability, liveness, or boundedness are computationally hard <cit.>, and other verification problems such as equivalence-checking <cit.> or model-checking <cit.> are even undecidable. In contrast to this, decision problems related to the asymptotic growth of VASS complexity measures are solvable with low complexity and sometimes even in polynomial time <cit.>; see <cit.> for a recent overview.
The existing results about VASS asymptotic analysis are applicable to programs with non-determinism (in demonic or angelic form, see <cit.>), but cannot be used to analyze the complexity of probabilistic programs. This motivates the study of Markov decision process over VASS (VASS MDPs) with both non-deterministic and probabilistic states, where transitions in probabilistic states are selected according to fixed probability distributions. Here, the problems of asymptotic complexity analysis become even more challenging because VASS MDPs subsume infinite-state stochastic models that are notoriously hard to analyze. So far, the only existing result about asymptotic VASS MDP analysis is <cit.> where the linearity of expected termination time is shown decidable in polynomial time for VASS MDPs with DAG-like MEC decomposition.
Our Contribution: We study the problems of asymptotic complexity analysis for probabilistic programs and their VASS abstractions.
For non-deterministic programs, termination complexity is a function L_max assigning to every n ∈ ℕ the length of the longest computation initiated in a configuration with each counter set to n. A natural way of generalizing this concept to probabilistic programs is to define a function L_exp such that L_exp(n) is the maximal expected length of a computation initiated in a configuration of size n, where the maximum is taken over all strategies resolving non-determinism. The same approach is applicable to other complexity measures. We show that this natural idea is generally inappropriate, especially in situations when L_exp(n) is infinite for sufficiently large n. By “inappropriate” we mean that this form of asymptotic analysis can be misleading. For example, if L_exp(n) = ∞ for all n ≥ 1, one may conclude that the computation takes a very long time independently of n. However, this is not necessarily the case, as demonstrated by the simple example of Fig. <ref> (we refer to Section <ref> for a detailed discussion).
Then, we concentrate on algorithmic properties of the complexity estimates in the setting of VASS MDPs. Our first result concerns counter complexity. We show that for every VASS MDP with DAG-like MEC decomposition and every counter c, there are only two possibilities:
* The function n is a tight estimate of the asymptotic growth of the maximal c-counter value assumed along a computation initiated in a configuration of size n.
* The function n^2 is a lower estimate of the asymptotic growth of the maximal c-counter value assumed along a computation initiated in a configuration of size n.
Furthermore, it is decidable in polynomial time which of these alternatives holds.
Since the termination and transition complexities can be easily encoded as the counter complexity for a fresh “step counter”, the above result immediately extends also to these complexities. To some extent, this result can be seen as a generalization of the result about termination complexity presented in <cit.>. See Section <ref> for more details.
Our next result is a full classification of asymptotic complexity for one-dimensional VASS MDPs. We show that for every one-dimensional VASS MDP
* the counter complexity is either unbounded or n is a tight estimate;
* termination complexity is either unbounded or one of the functions n, n^2 is a tight estimate.
* transition complexity is either unbounded, or bounded by a constant, or one of the functions n, n^2 is a tight estimate.
Furthermore, it is decidable in polynomial time which of the above cases hold.
Since the complexity of the considered problems remains low, the results are encouraging. On the other hand, they require non-trivial insights, indicating that establishing a full and effective classification of the asymptotic complexity of multi-dimensional VASS MDPs is a challenging problem.
§ PRELIMINARIES
We use ℕ, ℤ, ℚ, and ℝ to denote the sets of non-negative integers, integers, rational numbers, and real numbers.
Given a function f : ℕ → ℕ, we use O(f) and Ω(f) to denote the sets of all g : ℕ → ℕ such that g(n) ≤ a · f(n) and g(n) ≥ b · f(n) for all sufficiently large n ∈ ℕ, where a,b are some positive constants. If h ∈ O(f) and h ∈ Ω(f), we write h ∈ Θ(f).
Let A be a finite index set. The vectors of ℤ^A are denoted by bold letters such as 𝐮, 𝐯, 𝐰, …. The component of 𝐯 of index i ∈ A is denoted by 𝐯(i).
If the index set is of the form A={1,2,…,d} for some positive integer d, we write ℤ^d instead of ℤ^A. For every n ∈ ℕ, we use 𝐧 to denote the constant vector where all components are equal to n.
The other standard operations and relations on ℤ such as +, ≤, or < are extended to ℤ^d in the component-wise way. In particular, 𝐮 < 𝐯 if 𝐮(i) < 𝐯(i) for every index i.
A probability distribution over a finite set A is a vector ν∈ [0,1]^A such that ∑_a ∈ Aν(a) = 1. We say that ν is rational if every ν(a) is rational, and Dirac if ν(a) =1 for some a ∈ A.
§.§ VASS Markov Decision Processes
Let d ≥ 1. A d-dimensional VASS MDP is a tuple 𝒜 = ⟨Q, (Q_n,Q_p),T,P⟩, where
* Q ≠∅ is a finite set of states split into two disjoint subsets Q_n and Q_p of nondeterministic and probabilistic states,
* T ⊆ Q × ℤ^d × Q is a finite set of transitions such that, for every p ∈ Q, the set Out(p) ⊆ T of all transitions of the form (p,𝐮,q) is non-empty.
* P is a function assigning to each t ∈ Out(p) where p ∈ Q_p a positive rational probability so that ∑_t ∈ Out(p) P(t) = 1.
The encoding size of 𝒜 is denoted by ‖𝒜‖, where the integers representing counter updates are written in binary and probability values are written as fractions of binary numbers. For every p ∈ Q, we use In(p) ⊆ T to denote the set of all transitions of the form (q,𝐮,p). The update vector of a transition t = (p,𝐮,q) is also denoted by 𝐮_t.
A finite path in 𝒜 of length n ≥ 0 is a finite sequence of the form p_0,𝐮_1,p_1,𝐮_2,…,𝐮_n,p_n where (p_i,𝐮_i+1,p_i+1) ∈ T for all i<n. We use len(α) to denote the length of α. If there is a finite path from p to q, we say that q is reachable from p.
An infinite path in 𝒜 is an infinite sequence π = p_0,𝐮_1,p_1,𝐮_2,… such that every finite prefix of π ending in a state is a finite path in 𝒜.
A strategy is a function σ assigning to every finite path p_0,𝐮_1,…,p_n such that p_n ∈ Q_n a probability distribution over Out(p_n). A strategy is Markovian (M) if it depends only on the last state p_n, and deterministic (D) if it always returns a Dirac distribution. The set of all strategies is denoted by Σ_𝒜, or just Σ when 𝒜 is understood. Every initial state p ∈ Q and every strategy σ determine the probability space over infinite paths initiated in p in the standard way. We use ℙ^σ_p to denote the associated probability measure.
A configuration of 𝒜 is a pair p𝐯, where p ∈ Q and 𝐯 ∈ ℤ^d. If some component of 𝐯 is negative, then p𝐯 is terminal. The set of all configurations of 𝒜 is denoted by 𝒞(𝒜).
Every infinite path p_0,𝐮_1,p_1,𝐮_2,… and every initial vector 𝐯 ∈ ℕ^d determine the corresponding computation of 𝒜, i.e., the sequence of configurations p_0𝐯_0, p_1𝐯_1, p_2𝐯_2,… such that 𝐯_0 = 𝐯 and 𝐯_i+1 = 𝐯_i + 𝐮_i+1. Let Term(π) be the least j such that p_j𝐯_j is terminal. If there is no such j, we put Term(π) = ∞.
Note that every computation uniquely determines its underlying infinite path. We define the probability space over all computations initiated in a given p𝐯, where the underlying probability measure ℙ^σ_p𝐯 is obtained from ℙ_p^σ in an obvious way. For a measurable function X over computations, we use 𝔼^σ_p𝐯[X] to denote the expected value of X.
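The following self-contained Python sketch is purely illustrative: it encodes a toy one-dimensional VASS MDP (our own example, not taken from any figure of this paper) and samples a computation according to the semantics just described, with non-determinism resolved by a Markovian deterministic strategy.

import random

# A toy 1-dimensional VASS MDP: state 'p' is nondeterministic, state 'q' is probabilistic.
# Each transition is (source, update vector, target); P gives probabilities for
# transitions leaving probabilistic states.
Q_n, Q_p = {"p"}, {"q"}
T = [("p", (-1,), "q"), ("p", (0,), "q"), ("q", (1,), "p"), ("q", (-1,), "p")]
P = {("q", (1,), "p"): 0.5, ("q", (-1,), "p"): 0.5}

def out(state):
    return [t for t in T if t[0] == state]

def sample_computation(p0, v0, strategy, max_steps=10_000):
    """Sample a computation p0 v0, p1 v1, ...; stop when some counter gets negative."""
    state, vec = p0, tuple(v0)
    trace = [(state, vec)]
    for _ in range(max_steps):
        if state in Q_n:
            t = strategy(state, vec)            # the strategy picks an outgoing transition
        else:
            cand = out(state)
            t = random.choices(cand, weights=[P[c] for c in cand])[0]
        vec = tuple(x + u for x, u in zip(vec, t[1]))
        state = t[2]
        trace.append((state, vec))
        if any(x < 0 for x in vec):             # terminal configuration reached
            break
    return trace

# A Markovian, deterministic strategy: always take the first outgoing transition.
md_strategy = lambda state, vec: out(state)[0]
print(sample_computation("p", (5,), md_strategy)[:5])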
§ ASYMPTOTIC COMPLEXITY MEASURES FOR VASS MDPS
In this section, we introduce asymptotic complexity estimates applicable to probabilistic programs with non-determinism and their abstract models (such as VASS MDPs). We also explain their relationship to the standard measures based on the expected values of relevant random variables.
Let us start with a simple motivating example. Consider the simple probabilistic program of Fig. <ref>. The program inputs a positive integer N and then repeatedly increments/decrements N with probability 0.5 until N=0. One can easily show that for every N ≥ 1, the program terminates with probability one, and the expected termination time is infinite. Based on this, one may conclude that the execution takes a very long time, independently of the initial value of N. However, this conclusion is not consistent with practical experience gained from trial runs[For N=1, about 95% of trial runs terminate after at most 1000 iterations of the repeat-until loop. For N=10, only about 75% of all runs terminate after at most 1000 iterations, but about
90% of them terminate after at most 10000 iterations.]. The program tends to terminate “relatively quickly” for small N, and the termination time does depend on N.
Hence, the function assigning ∞ to every N ≥ 1 is not a faithful characterization of the asymptotic growth of termination time. We propose an alternative characterization based on the following observations[Formal proofs of these observations are simple; in Section <ref>, we give a full classification of the asymptotic behaviour of one-dimensional VASS MDPs subsuming the trivial example of Fig. <ref>.]:
* For every ε >0, the probability of all runs terminating after more than n^2+ε steps (where n is the initial value of N) approaches zero as n →∞.
* For every ε >0, the probability of all runs terminating after more than n^2-ε steps (where n is the initial value of N) approaches one as n →∞.
Since the execution time is “squeezed” between n^2-ε and n^2+ε for an arbitrarily small ε > 0 as n →∞, it can be characterized as “asymptotically quadratic”. This analysis is in accordance with experimental outcomes.
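These observations are easy to reproduce empirically. The following Python sketch simulates the program of Fig. <ref> (assumed here to be exactly the symmetric ±1 update loop described above) and reports the median number of iterations over independent trials; the median grows roughly quadratically in the initial value of N, even though the expected number of iterations is infinite for every N ≥ 1.

import random, statistics

def run_once(n, cap=10**6):
    """Simulate 'while N > 0: N += random ±1'; return the number of iterations (capped)."""
    steps = 0
    while n > 0 and steps < cap:
        n += random.choice((1, -1))
        steps += 1
    return steps

def median_steps(n, trials=101):
    return statistics.median(run_once(n) for _ in range(trials))

# The median number of steps grows roughly like n^2.
for n in (1, 2, 4, 8, 16):
    print(n, median_steps(n))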
§.§ Complexity of VASS Runs
We recall the complexity measures for VASS runs used in previous works <cit.>. These functions can be seen as variants of the standard time/space complexities for Turing machines.
Let 𝒜 = ⟨Q, (Q_n,Q_p),T,P⟩ be a d-dimensional VASS MDP, c ∈{1,…,d}, and t ∈ T. For every computation π = p_0𝐯_0, p_1𝐯_1, p_2𝐯_2,…, we put
𝖫(π) = Term(π)
[c](π) = sup{𝐯_i(c) | 0 ≤ i < Term(π)}
[t](π) = the total number of indices 0 ≤ i < Term(π) such that (p_i,𝐮_i+1,p_i+1) = t, i.e., the number of times t is executed before termination
We refer to the functions 𝖫, [c], and [t] as termination, c-counter, and t-transition complexity, respectively.
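For a finite (terminated) computation stored explicitly, the three complexity measures can be computed directly. The following Python sketch is illustrative only; it assumes the computation is given as a list of configurations together with the list of transitions taken, and that at least one step was performed.

def complexities(trace, transitions):
    # trace: configurations p_0 v_0, ..., p_m v_m of a terminated computation
    #        (the last configuration is the terminal one), as (state, tuple) pairs;
    # transitions: the m transitions taken along the way.  Assumes m >= 1.
    term = len(transitions)                                         # termination complexity
    d = len(trace[0][1])
    cmax = [max(v[c] for _, v in trace[:term]) for c in range(d)]   # c-counter complexities
    tcnt = {}
    for t in transitions:                                           # t-transition complexities
        tcnt[t] = tcnt.get(t, 0) + 1
    return term, cmax, tcnt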
Let ℱ be one of the complexity functions defined above. In VASS abstractions of computer programs, the input is represented by initial counter values, and the input size corresponds to the maximal initial counter value. The existing works on non-probabilistic VASS concentrate on analyzing the asymptotic growth of the functions ℱ_max : ℕ → ℕ_∞ where
ℱ_max(n) = max{ℱ(π) | π is a computation initiated in a configuration p𝐧 for some p ∈ Q}
For VASS MDPs, we can generalize ℱ_max into ℱ_exp as follows:
ℱ_exp(n) = max{𝔼_p𝐧^σ[ℱ] | σ∈Σ_𝒜, p ∈ Q}
Note that for non-probabilistic VASS, the values of ℱ_max(n) and ℱ_exp(n) are the same. However, the function ℱ_exp suffers from the deficiency illustrated in the motivating example at the beginning of Section <ref>. To see this, consider the one-dimensional VASS MDP 𝒜 modeling the simple probabilistic program (see Fig. <ref>). For every n ≥ 1 and the only (trivial) strategy σ, we have that ℙ^σ_p𝐧[𝖫 < ∞] = 1 and 𝖫_exp(n) = ∞. However, the practical experience with trial runs of 𝒜 is the same as with the original probabilistic program (see above).
§.§ Asymptotic Complexity Estimates
In this section, we introduce asymptotic complexity estimates allowing for a precise analysis of the asymptotic growth of the termination, c-counter, and t-transition complexity, especially when their expected values are infinite for a sufficiently large input. For the sake of readability, we first present a simplified variant applicable to strongly connected VASS MDPs.
Let ℱ be one of the complexity functions for VASS computations defined in Section <ref>, and let f : ℕ → ℕ. We say that f is a tight estimate of ℱ if, for arbitrarily small ε > 0, the value of ℱ is “squeezed” between f^1-ε(n) and f^1+ε(n) as n →∞. More precisely, for every ε > 0,
* there exist p ∈ Q and strategies σ_1,σ_2,… such that lim inf_n →∞ ℙ^σ_n_p𝐧 [ℱ ≥ (f(n))^1-ε] = 1;
* for all p ∈ Q and strategies σ_1,σ_2,… we have that lim sup_n →∞ ℙ^σ_n_p𝐧 [ℱ ≥ (f(n))^1+ε] = 0.
The above definition is adequate for strongly connected VASS MDPs because tight estimates tend to exist in this subclass. Despite some effort, we have not managed to construct an example of a strongly connected VASS MDP where an ℱ with some upper polynomial estimate does not have a tight estimate (see Conjecture <ref>). However, if the underlying graph of 𝒜 is not strongly connected, then the asymptotic growth of ℱ can differ for computations visiting a different sequence of maximal end components (MECs) of 𝒜, and the asymptotic growth of ℱ can be “squeezed” between f^1-ε(n) and f^1+ε(n) only for the subset of computations visiting the same sequence of MECs. This explains why we need a more general definition of complexity estimates presented below.
An end component (EC) of is a pair (C,L) where C ⊆ Q and L ⊆ T such that the following conditions are satisfied:
* C ≠∅;
* if p ∈ C ∩ Q_n, then at least one outgoing transition of p belongs to L;
* if p ∈ C ∩ Q_p, then all outgoing transitions of p belong to L;
* if (p,𝐮,q) ∈ L, then p,q ∈ C;
* for all p,q ∈ C we have that q is reachable from p and vice versa.
Note that if (C,L) and (C',L') are ECs such that C ∩ C' ≠∅, then (C∪ C', L ∪ L') is also an EC. Hence, every p∈ Q either belongs to a unique maximal end component (MEC), or does not belong to any EC. Also observe that each MEC can be seen as a strongly connected VASS MDP. We say that 𝒜 has a DAG-like MEC decomposition if for every pair M,M' of different MECs such that the states of M' are reachable from the states of M we have that the states of M are not reachable from the states of M'.
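MECs can be computed by the standard fixpoint algorithm that alternates SCC computation with the removal of states violating the conditions above. The following Python sketch is an illustrative and deliberately unoptimized rendering of this algorithm; the encoding of states and transitions is ours.

def reachable(src, succ):
    seen, stack = {src}, [src]
    while stack:
        v = stack.pop()
        for w in succ(v):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def sccs(states, succ):
    # quadratic but straightforward SCC computation: s and t share an SCC
    # iff each is reachable from the other
    reach = {s: reachable(s, succ) for s in states}
    comps, assigned = [], set()
    for s in states:
        if s in assigned:
            continue
        comp = {t for t in reach[s] if s in reach[t]}
        comps.append(comp)
        assigned |= comp
    return comps

def mec_decomposition(Q_n, Q_p, T):
    # Maximal end components of a VASS MDP; counter updates play no role here.
    states = set(Q_n) | set(Q_p)
    while True:
        alive = [(p, u, q) for (p, u, q) in T if p in states and q in states]
        succ = lambda p: [q for (a, _, q) in alive if a == p]
        comps = sccs(states, succ)
        comp_of = {s: i for i, comp in enumerate(comps) for s in comp}
        bad = set()
        for p in states:
            if p in Q_p:
                # a probabilistic state must keep all of its original edges inside its SCC
                if any(a == p and (q not in states or comp_of[q] != comp_of[p])
                       for (a, _, q) in T):
                    bad.add(p)
            else:
                # a nondeterministic state needs at least one edge inside its SCC
                if not any(a == p and q in states and comp_of[q] == comp_of[p]
                           for (a, _, q) in T):
                    bad.add(p)
        if not bad:
            return comps      # at the fixpoint, the surviving SCCs are exactly the MECs
        states -= bad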
For every infinite path π of 𝒜, let mec(π) be the unique sequence of MECs visited by π. Observe that mec(π) disregards the states that do not belong to any EC; intuitively, this is because the transitions executed in such states do not influence the asymptotic growth of ℱ. Observe that the length of mec(π), denoted by 𝑙𝑒𝑛(mec(π)), can be finite or infinite. The first possibility corresponds to the situation when an infinite suffix of π stays within the same MEC. Furthermore, for all σ∈Σ and p ∈ Q, we have that ℙ^σ_p[𝑙𝑒𝑛(mec) = ∞] = 0, and the probability ℙ^σ_p[𝑙𝑒𝑛(mec) ≥ k] decays exponentially in k (these folklore results are easy to prove). All of these notions are lifted to computations in an obvious way.
Observe that if a strategy σ aims at maximizing the growth of ℱ, we can safely assume that σ eventually stays in a bottom MEC that cannot be exited (intuitively, σ can always move from a non-bottom MEC to a bottom MEC by executing a few extra transitions that do not influence the asymptotic growth of ℱ, and the bottom MEC may allow increasing ℱ even further). On the other hand, the maximal asymptotic growth of ℱ may be achievable along some “minimal” sequence of MECs, and this information is certainly relevant for understanding the behaviour of a given probabilistic program. This leads to the following definition:
A type is a finite sequence β of MECs such that mec(π) = β for some infinite path π.
We say that f is a lower estimate of ℱ for a type β if for every ε > 0 there exist p ∈ Q and a sequence of strategies σ_1,σ_2,… such that ℙ^σ_n_p𝐧[mec = β] > 0 for all n ≥ 1 and
lim inf_n →∞ ℙ^σ_n_p𝐧 [ℱ ≥ (f(n))^1-ε | mec = β] = 1 .
Similarly, we say that f is an upper estimate of ℱ for a type β if for every ε>0, every p ∈ Q, and every sequence of strategies σ_1,σ_2,… such that ℙ^σ_n_p𝐧[mec = β] > 0 for all n ≥ 1 we have that
lim sup_n →∞ ℙ^σ_n_p𝐧 [ℱ ≥ (f(n))^1+ε | mec = β] = 0
If there is no upper estimate of ℱ for a type β, we say that ℱ is unbounded for β. Finally, we say that f is a tight estimate of ℱ for β if it is both a lower estimate and an upper estimate of ℱ for β.
Let us note that in the subclass of non-probabilistic VASS, MECs become strongly connected components (SCCs), and types correspond to paths in the directed acyclic graph of SCCs. Each such path determines the corresponding asymptotic increase of ℱ, as demonstrated in <cit.>. We conjecture that types play a similar role for VASS MDPs. More precisely, we conjecture the following:
If some polynomial is an upper estimate of ℱ for β, then there exists a tight estimate f of ℱ for β.
Even if Conjecture <ref> is proven wrong, there are interesting subclasses of VASS MDPs where it holds, as demonstrated in subsequent sections.
For every pair of MECs M,M', let P(M,M') be the maximal probability (achievable by some strategy) of reaching a state of M' from a state of M in 𝒜 without passing through a state of some other MEC M”. Note that P(M,M') is efficiently computable by standard methods for finite-state MDPs. The weight of a given type β = M_1,…,M_k is defined as weight(β) = ∏_i=1^k-1 P(M_i,M_i+1). Intuitively, weight(β) corresponds to the maximal probability of “enforcing” the asymptotic growth of ℱ according to the tight estimate f of ℱ for β achievable by some strategy.
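For completeness, we recall one standard way of computing P(M,M'): it is a maximal reachability probability in a finite-state MDP and can be approximated by value iteration (or computed exactly by linear programming). The following Python sketch is illustrative; the choice of target and avoided state sets in the trailing comment is our encoding of the definition above, not a verbatim procedure from this paper.

def max_reach_prob(Q_n, Q_p, T, P, targets, avoid, iters=1000):
    """Value iteration for the maximal probability of reaching `targets`
    while never entering `avoid` (both are sets of states)."""
    val = {q: 1.0 if q in targets else 0.0 for q in set(Q_n) | set(Q_p)}
    for _ in range(iters):
        new = {}
        for q in val:
            if q in targets:
                new[q] = 1.0
            elif q in avoid:
                new[q] = 0.0
            elif q in Q_n:
                new[q] = max(val[dst] for (src, _, dst) in T if src == q)
            else:
                new[q] = sum(P[t] * val[t[2]] for t in T if t[0] == q)
        val = new
    return val

# P(M, M') can then be obtained as the maximum of max_reach_prob(...)[p] over the
# states p of M, with targets = states of M' and avoid = states of all MECs other
# than M and M'; weight(M_1,...,M_k) is the product of P(M_i, M_{i+1}).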
Generally, higher asymptotic growth of ℱ may be achievable for types with smaller weights. Consider the following example to understand better the types, their weights, and the associated tight estimates.
Let 𝒜 be the VASS MDP of Fig. <ref>. There are four MECs M_1,M_2,M_3,M_4 where M_2,M_3,M_4 are bottom MECs. Hence, there are four types of length one and three types of length two. Let us examine the types of length two initiated in M_1
for ℱ ≡ [c] where c is the third counter.
Note that in M_1, the first counter is repeatedly incremented/decremented with the same probability 1/2. The second counter “counts” these transitions and thus it is “pumped” to a quadratic value (cf. the VASS MDP of Fig. <ref>). Then, a strategy may decide to move to M_2, where the value of the second counter is transferred to the third counter. Hence, n^2 is the tight estimate of [c] for the type M_1,M_2, and weight(M_1,M_2) = 1. Alternatively, a strategy may decide to move to the probabilistic state q. Then, either M_3 or M_4 is entered with the same probability 1/2, which implies weight(M_1,M_3) = weight(M_1,M_4) = 1/2. In M_3, the third counter is unchanged, and hence n is the tight estimate of [c] for the type M_1,M_3. However, in M_4, the second counter previously pumped to a quadratic value is repeatedly incremented/decremented with the same probability 1/2, and the third counter “counts” these transitions. This means that n^4 is a tight estimate of [c] for the type M_1,M_4.
This analysis provides detailed information about the asymptotic growth of [c] in 𝒜. Every type shows “how” the growth specified by the corresponding tight estimate is achievable, and its weight corresponds to the “maximal achievable probability of this growth”. This information is completely lost when analyzing the maximal expected value of [c] for computations initiated in configurations p𝐧 where p is a state of M_1, because these expectations are infinite for all n ≥ 1.
Finally, let us clarify the relationship between the lower/upper estimates of ℱ and the asymptotic growth of ℱ_exp. The following observation is easy to prove.
If ℱ_exp ∈ O(f) where f : ℕ → ℕ is an unbounded function, then f is an upper estimate of ℱ for every type. Furthermore, if f : ℕ → ℕ is a lower estimate of ℱ for some type, then ℱ_exp ∈ Ω(f^1-ϵ) for each ϵ>0. However, if ℱ_exp ∈ Ω(f) where f : ℕ → ℕ, then f is not necessarily a lower estimate of ℱ for some type.
Observation <ref> shows that complexity estimates are generally more informative than the asymptotics of ℱ_exp even if ℱ_exp ∈ Θ(f) for some “reasonable” function f. For example, it may happen that there are only two types β_1 and β_2 where n and n^3 are tight estimates of ℱ for β_1 and β_2 with weights 0.99 and 0.01, respectively. In this case, ℱ_exp ∈ Θ(n^3), although the termination time is linear for 99% of computations.
§ A DICHOTOMY BETWEEN LINEAR AND QUADRATIC ESTIMATES
In this section, we prove the following result:
Let 𝒜 be a VASS MDP with DAG-like MEC decomposition and ℱ one of the complexity functions 𝖫, [c], or [t]. For every type β, we have that either n is a tight estimate of ℱ for β, or n^2 is a lower estimate of ℱ for β. It is decidable in polynomial time which of the two cases holds.
Theorem <ref> can be seen as a generalization of the linear/quadratic dichotomy results previously achieved for non-deterministic VASS <cit.> and for the termination complexity in VASS MDPs <cit.>.
It suffices to prove Theorem <ref> for the counter complexity. The corresponding results for the termination and transition complexities then follow as simple consequences. To see this, observe that we can extend a given VASS MDP with a fresh “step counter” sc that is incremented by every transition (in the case of 𝖫) or by the transition t (in the case of [t]) and thus “emulate” 𝖫 and [t] as [sc].
We first consider the case when 𝒜 is strongly connected and then generalize the obtained results to VASS MDPs with DAG-like MEC decomposition. So, let 𝒜 be a strongly connected d-dimensional VASS MDP and c a counter of 𝒜. The starting point of our analysis is the dual constraint system designed in <cit.> for non-probabilistic strongly connected VASS. We generalize this system to strongly connected VASS MDPs in the way shown in Figure <ref> (the original system of <cit.> can be recovered by disregarding the probabilistic states).
Note that solutions of both (I) and (II) are closed under addition. Therefore, both (I) and (II) have solutions maximizing the specified objectives, computable in polynomial time.
For clarity, let us first discuss an intuitive interpretation of these solutions, starting with simplified variants obtained for non-probabilistic VASS in <cit.>.
In the non-probabilistic case, a solution of (I) can be interpreted as a weighted multicycle, i.e., as a collection of cycles M_1,…, M_k together with weights a_1,… ,a_k such that the total effect of the multicycle, defined by ∑_i=1^k a_i ·𝑒𝑓𝑓𝑒𝑐𝑡(M_i), is non-negative for every counter. Here, 𝑒𝑓𝑓𝑒𝑐𝑡(M_i) is the effect of M_i on the counters. The objective of (I) ensures that the multicycle includes as many transitions as possible, and the total effect of the multicycle is positive on as many counters as possible. For VASS MDPs, the M_1,…, M_k should not be interpreted as cycles but as Markovian strategies for some ECs, and 𝑒𝑓𝑓𝑒𝑐𝑡(M_i) corresponds to the vector of expected counter changes per transition in M_i. The objective of (I) then maximizes the number of transitions used in the strategies M_1,…, M_k, and the number of counters where the expected effect of the “multicycle” is positive.
A solution of (II) for non-probabilistic VASS can be interpreted as a ranking function for configurations defined by 𝑟𝑎𝑛𝑘(p𝐯) = 𝐳(p) + ∑_i=1^d 𝐯(i)·𝐲(i), such that the value of 𝑟𝑎𝑛𝑘 cannot increase when moving from a configuration p𝐯 to a configuration q𝐰 using a transition t=(p,𝐰−𝐯,q). The objective of (II) ensures that as many transitions as possible decrease the value of 𝑟𝑎𝑛𝑘, and 𝑟𝑎𝑛𝑘 depends on as many counters as possible. For VASS MDPs, this interpretation changes only for the outgoing transitions t=(p,𝐮,q) of probabilistic states. Instead of considering the change of 𝑟𝑎𝑛𝑘 caused by such t, we now consider the expected change of 𝑟𝑎𝑛𝑘 caused by executing a step from p. The objective ensures that 𝑟𝑎𝑛𝑘 depends on as many counters as possible, the value of 𝑟𝑎𝑛𝑘 is decreased by as many outgoing transitions of non-deterministic states as possible, and the expected change of 𝑟𝑎𝑛𝑘 caused by performing a step is negative in as many probabilistic states as possible.
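To make the flow interpretation of (I) concrete, the following Python sketch checks whether some admissible “multicycle” has a non-negative aggregated effect on every counter and a strictly positive effect on a chosen counter c. The encoding follows the verbal description above and the rewritten system (I') used in the appendix, so it is an assumption rather than a verbatim copy of the system in Fig. <ref>; since solutions of (I) are closed under addition, running one such feasibility check per counter (and per transition, with 𝐱(t) ≥ 1) recovers the maximizing solution.

import numpy as np
from scipy.optimize import linprog

def positive_effect_possible(states, Q_p, T, P, d, c):
    """Is there x >= 0 on transitions with (i) Kirchhoff's law at every state,
    (ii) the outgoing flow of probabilistic states split according to P, and
    (iii) sum_t x(t)*u_t >= 0 componentwise, with strict inequality on counter c?"""
    m = len(T)
    rows_eq, rhs_eq = [], []
    for s in states:                                   # (i) inflow = outflow
        row = np.zeros(m)
        for j, (p, u, q) in enumerate(T):
            if q == s: row[j] += 1.0
            if p == s: row[j] -= 1.0
        rows_eq.append(row); rhs_eq.append(0.0)
    for s in Q_p:                                      # (ii) respect the probabilities
        out = [j for j, t in enumerate(T) if t[0] == s]
        for j in out:
            row = np.zeros(m)
            row[j] = 1.0
            for j2 in out:
                row[j2] -= P[T[j]]
            rows_eq.append(row); rhs_eq.append(0.0)
    D = np.array([[T[j][1][i] for j in range(m)] for i in range(d)])
    lb = np.zeros(d); lb[c] = 1.0                      # (iii); ">= 1" stands in for "> 0"
    res = linprog(np.zeros(m), A_ub=-D, b_ub=-lb,
                  A_eq=np.vstack(rows_eq), b_eq=np.array(rhs_eq),
                  bounds=[(0, None)] * m, method="highs")
    return res.status == 0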
The key tool for our analysis is the following dichotomy (a proof is in Appendix <ref>).
Let 𝐱 be a (maximal) solution to the constraint system (I) and 𝐲, 𝐳 be a (maximal) solution to the constraint system (II). Then, for each counter c we have that either 𝐲(c)>0 or ∑_t ∈ T 𝐱(t)·𝐮_t(c)>0, and for each transition t = (p,𝐮,q)∈ T we have that
* if p∈ Q_n then either 𝐳(q)−𝐳(p)+∑_i=1^d 𝐮(i)·𝐲(i)<0 or 𝐱(t)>0;
* if p∈ Q_p then either
∑_t'=(p,𝐮',q') ∈ Out(p) P(t')·(𝐳(q')−𝐳(p)+∑_i=1^d 𝐮'(i)·𝐲(i)) < 0
or 𝐱(t)>0.
For the rest of this section, we fix a maximal solution 𝐱 of (I) and a maximal solution 𝐲, 𝐳 of (II), such that the smallest non-zero element of 𝐲, 𝐳 is at least 1. We define a ranking function 𝑟𝑎𝑛𝑘: 𝒞(𝒜) → ℚ as 𝑟𝑎𝑛𝑘(p𝐯) = 𝐳(p) + ∑_i=1^d 𝐯(i)·𝐲(i). Now we prove the following theorem:
For each counter c, if 𝐲(c)>0 then n is a tight estimate of [c] (for the only type of 𝒜). Otherwise, i.e., when 𝐲(c)=0, the function n^2 is a lower estimate of [c].
Note that Theorem <ref> implies Theorem <ref> for strongly connected VASS MDPs. A proof is obtained by combining the following lemmata.
For every counter c such that 𝐲(c)>0, every ε >0, every p∈ Q, and every σ∈Σ, there exists n_0 such that for all n≥ n_0 we have that ℙ^σ_p𝐧([c] ≥ n^1+ε) ≤ kn^-ε where k is a constant depending only on 𝒜.
A proof is in Appendix <ref>.
For 𝑇𝑎𝑟𝑔𝑒𝑡𝑠 ⊆ 𝒞(𝒜) and m ∈ ℕ, we use Reach^≤ m(𝑇𝑎𝑟𝑔𝑒𝑡𝑠) to denote the set of all computations π = p_0𝐯_0, p_1𝐯_1,… such that p_i𝐯_i ∈ 𝑇𝑎𝑟𝑔𝑒𝑡𝑠 for some i ≤ m.
For each counter c such that 𝐲(c)=0 we have that [c]_exp ∈ Ω(n^2) and n^2 is a lower estimate of [c]. Furthermore, for every ε>0 there exist a sequence of strategies σ_1,σ_2,…, a constant k, and p∈ Q such that for every 0<ε'<ε, we have that
lim_n →∞ ℙ_p𝐧^σ_n(Reach^≤ kn^2-ε'(𝑇𝑎𝑟𝑔𝑒𝑡𝑠_n)) = 1
where 𝑇𝑎𝑟𝑔𝑒𝑡𝑠_n = {q𝐯 ∈ 𝒞(𝒜) | 𝐯(c) ≥ n^2-ε}.
A proof is in Appendix <ref>.
It remains to prove Theorem <ref> for VASS MDPs with DAG-like MEC decomposition.
Here, we proceed by analyzing the individual MECs one by one, transferring the output of the previous MEC to the next one. We start in a top MEC with all counters initialized to n. Here we can directly apply Theorem <ref> to determine which of the [c] have a tight estimate n and a lower estimate n^2, respectively. It follows from Lemma <ref> that all counters c such that n^2 is a lower estimate of [c] can be simultaneously pumped to n^2-ε with very high probability. However, this computation may decrease the counters c such that n is a tight estimate for [c]. To ensure that the value of these counters is still Ω(n) when entering the next MEC, we first divide the initial counter vector into two halves, each of size ⌊𝐧/2⌋, and then pump the counters c such that n^2 is a lower estimate for [c] to the value (⌊n/2⌋)^2-ε. We show that the length of this computation is at most quadratic. The value of the other counters stays at least ⌊n/2⌋. When analyzing the next MEC, we treat the counters previously pumped to quadratic values as “infinite” because they are sufficiently large so that they cannot prevent pumping additional counters to asymptotically quadratic values. Technically, this is implemented by modifying every counter update vector 𝐮 so that 𝐮(c) = 0 for every “quadratic” counter c. A precise formulation of these observations and the corresponding proofs are given in Appendix <ref>.
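The MEC-by-MEC procedure just described can be summarized by the following high-level Python sketch. The helper quadratic_in_mec(M, frozen) is hypothetical: it stands for the analysis of a single MEC via the constraint systems (I) and (II), with the counters in frozen treated as unbounded (their updates set to 0), and returns the counters whose estimate becomes at least quadratic.

def classify_counters_along_type(mecs_in_order, d, quadratic_in_mec):
    """Process the MECs of a type in order, maintaining for every counter whether its
    tight estimate so far is n ('lin') or whether n^2 is already a lower estimate
    ('quad')."""
    status = ["lin"] * d                 # all counters start linear in n
    for M in mecs_in_order:
        frozen = {c for c in range(d) if status[c] == "quad"}
        for c in quadratic_in_mec(M, frozen):
            status[c] = "quad"
    return status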
We conjecture that the dichotomy of Theorem <ref> holds for all VASS MDPs, but we do not have a complete proof. If the MEC decomposition is not DAG-like, a careful analysis of computations revisiting the same MECs is required; such repeated visits may but do not have to enable additional asymptotic growth of [c].
§ ONE-DIMENSIONAL VASS MDPS
In this section, we give a full and effective classification of tight estimates of 𝖫, [c], and [t] for one-dimensional VASS MDPs. More precisely, we prove the following theorem:
Let 𝒜 be a one-dimensional VASS MDP. We have the following:
* Let c be the only counter of 𝒜. Then one of the following possibilities holds:
* There exists a type β=M such that [c] is unbounded for β.
* n is a tight estimate of [c] for every type.
* Let t be a transition of 𝒜. Then one of the following possibilities holds:
* There exists a type β=M such that [t] is unbounded for β.
* There exists a type β such that (β)>0 and [t] is unbounded for β.
* There exists a type β=M such that n^2 is a tight estimate of [t] for β.
* The transition t occurs in some MEC M, n is a tight estimate of [t] for every type β containing the MEC M, and 0 is a tight estimate of [t] for every type β not containing the MEC M.
* The transition t does not occur in any MEC, and for every type β of length k we have that k is an upper estimate of [t] for β.
* One of the following possibilities holds:
* There exists a type β=M such that 𝖫 is unbounded for β.
* There exists a type β=M such that n^2 is a tight estimate of 𝖫 for β.
* n is a tight estimate of 𝖫 for every type.
It is decidable in polynomial time which of the above cases hold.
Note that some cases are mutually exclusive and some may hold simultaneously. Also recall that weight(β)=1 for every type β of length one, and weight(β) decays exponentially in the length of β. Hence, if a transition t does not occur in any MEC, there is a constant κ< 1 depending only on 𝒜 such that ℙ_p𝐯^σ[[t] ≥ i] ≤ κ^i for every σ∈Σ and p𝐯 ∈ 𝒞(𝒜).
For the rest of this section, we fix a one-dimensional VASS MDP 𝒜 = ⟨Q, (Q_n,Q_p),T,P⟩ and some linear ordering ⊑ on Q.
A proof of Theorem <ref> is obtained by analyzing bottom strongly connected components (BSCCs) in a Markov chain obtained from 𝒜 by “applying” some MD strategy σ (we use Σ_MD to denote the class of all MD strategies for 𝒜). Recall that σ selects the same outgoing transition in every p ∈ Q_n whenever p is revisited, and hence we can “apply” σ to 𝒜 by removing the other outgoing transitions. The resulting Markov chain is denoted by 𝒜_σ. Note that every BSCC 𝔹 of 𝒜_σ can also be seen as an end component of 𝒜. For a MEC M of 𝒜, we write 𝔹 ⊆ M if all states and transitions of 𝔹 are included in M.
For every BSCC 𝔹 of 𝒜_σ, let p_𝔹 be the least state of 𝔹 with respect to ⊑. Let Eff_𝔹 be the function assigning to every infinite path π = p_0,𝐮_1,p_1,𝐮_2,… the sum ∑_i=1^ℓ 𝐮_i if p_0 = p_𝔹 and ℓ≥ 1 is the least index such that p_ℓ = p_𝔹, and otherwise Eff_𝔹(π) = 0. Hence, Eff_𝔹(π) is the change of the (only) counter c along π until p_𝔹 is revisited.
Let 𝔹 be a BSCC of 𝒜_σ. We say that 𝔹 is
* increasing if 𝔼^σ_p_𝔹(Eff_𝔹)>0,
* decreasing if 𝔼^σ_p_𝔹(Eff_𝔹)<0,
* bounded-zero if 𝔼^σ_p_𝔹(Eff_𝔹)=0 and ℙ_p_𝔹^σ[Eff_𝔹=0] = 1,
* unbounded-zero if 𝔼^σ_p_𝔹(Eff_𝔹)=0 and ℙ_p_𝔹^σ[Eff_𝔹=0] < 1.
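For a concretely given BSCC 𝔹 of 𝒜_σ, this classification can be carried out by elementary means: the expected effect of a return cycle to p_𝔹 has the same sign as the stationary one-step drift, and, when the drift is zero, 𝔹 is bounded-zero iff every cycle of 𝔹 has zero total effect, which can be checked with a potential function. The following Python sketch is illustrative; floating-point arithmetic is used only for brevity, and a real implementation would use exact rational arithmetic.

import numpy as np

def classify_bscc(states, trans, prob, tol=1e-9):
    """Classify a BSCC B of A_sigma for a one-dimensional VASS MDP.
    trans: list of (p, u, q) with scalar counter update u; prob[(p,u,q)] is the
    probability of taking that transition from p once the MD strategy is applied."""
    states = list(states)
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    P = np.zeros((n, n))
    for (p, u, q) in trans:
        P[idx[p], idx[q]] += prob[(p, u, q)]
    # stationary distribution: solve pi (P - I) = 0 together with sum(pi) = 1
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    drift = sum(pi[idx[p]] * prob[(p, u, q)] * u for (p, u, q) in trans)
    if drift > tol:
        return "increasing"
    if drift < -tol:
        return "decreasing"
    # drift == 0: bounded-zero iff every cycle of B has total effect 0, which holds
    # iff there is a potential phi with phi(q) - phi(p) = u for every (p,u,q) in B
    edges = {}
    for (p, u, q) in trans:
        edges.setdefault(p, []).append((u, q))
    phi, stack = {states[0]: 0.0}, [states[0]]
    while stack:
        p = stack.pop()
        for (u, q) in edges.get(p, []):
            if q not in phi:
                phi[q] = phi[p] + u
                stack.append(q)
            elif abs(phi[q] - (phi[p] + u)) > tol:
                return "unbounded-zero"
    return "bounded-zero"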
Note that the above definition does not depend on the concrete choice of ⊑. We prove the following results relating the existence of upper/lower estimates of 𝖫, [c], and [t] to the existence of BSCCs with certain properties. More concretely,
* for [c], we show that
* [c] is unbounded for some type β=M if there exists an increasing BSCC 𝔹 of 𝒜_σ for some σ∈Σ_MD such that 𝔹 ⊆ M (Lemma <ref>);
* otherwise, n is a tight estimate of [c] for every type (Lemma <ref>)
* for 𝖫, we show that
* 𝖫 is unbounded for some type β=M if there exists an increasing or bounded-zero BSCC 𝔹 of 𝒜_σ for some σ∈Σ_MD such that 𝔹 ⊆ M (Lemma <ref>, Lemma <ref>);
* otherwise, n^2 is an upper estimate of 𝖫 for every type β (Lemma <ref>);
* if there exists an unbounded-zero BSCC 𝔹 of 𝒜_σ for some σ∈Σ_MD, then n^2 is a lower estimate of 𝖫 for β=M where 𝔹 ⊆ M (Lemma <ref>);
* if every BSCC of every 𝒜_σ is decreasing, then 𝖫_exp(n)∈Θ(n) (this follows from <cit.>), and hence n is a tight estimate of 𝖫 for every type (Observation <ref>);
* for [t], we distinguish two cases:
* If t is not contained in any MEC of 𝒜, then for every type β of length k, the transition t cannot be executed more than k times along an arbitrary computation π where mec(π) = β.
* If t is contained in a MEC M of 𝒜, then
* [t] is unbounded for β=M if there exists an increasing BSCC 𝔹 of 𝒜_σ for some σ∈Σ_MD such that 𝔹 ⊆ M (Lemma <ref>), or a bounded-zero BSCC 𝔹 of 𝒜_σ for some σ∈Σ_MD such that 𝔹 contains t (Lemma <ref>);
* [t] is unbounded for every β=M_1,…, M_k such that M = M_i for some i and there exists an increasing BSCC 𝔹 of 𝒜_σ for some σ∈Σ_MD such that 𝔹 ⊆ M_j for some j ≤ i (Lemma <ref>);
* otherwise, n^2 is an upper estimate of [t] for every type (Lemma <ref>);
* if there is an unbounded-zero BSCC 𝔹 of 𝒜_σ for some σ∈Σ_MD such that 𝔹 contains t, then n^2 is a lower estimate of [t] for β=M (Lemma <ref>);
* if every BSCC of every 𝒜_σ is decreasing, then [t]_exp(n)∈Θ(n) (this follows from <cit.>), and hence n is an upper estimate of [t] for every type (Observation <ref>).
The polynomial time bound of Theorem <ref> is then obtained by realizing the following: First, we need to decide the existence of an increasing BSCC of 𝒜_σ for some σ∈Σ_MD. This can be done in polynomial time using the constraint system (I) of Fig. <ref> (Lemma <ref>). If no such increasing BSCC exists, we need to decide the existence of a bounded-zero BSCC, which can be achieved in polynomial time for a subclass of one-dimensional VASS MDPs where no increasing BSCC exists (Lemma <ref>). Then, if no bounded-zero BSCC exists, we need to decide the existence of an unbounded-zero BSCC, which can again be done in polynomial time using the constraint system (I) of Fig. <ref> (realize that any solution of (I) implies the existence of a BSCC that is either increasing, bounded-zero, or unbounded-zero).
Hence, the “algorithmic part” of Theorem <ref> is an easy consequence of the above observations, but there is one remarkable subtlety. Note that we need to decide the existence of a bounded-zero BSCC only for a subclass of one-dimensional VASS MDPs where no increasing BSCCs exist. This is actually crucial, because deciding the existence of a bounded-zero BSCC in general one-dimensional VASS MDPs is NP-complete (Lemma <ref>).
The main difficulties requiring novel insights are related to proving the observation about [c], stating that if there is no increasing BSCC of 𝒜_σ for any σ∈Σ_MD, then n is an upper estimate of [c] for every type. A comparably difficult (and in fact closely related) task is to show that if there is no increasing or bounded-zero BSCC, then n^2 is an upper estimate of 𝖫 for every type. Note that here we need to analyze the behaviour of 𝒜 under all strategies (not just MD), and consider the notoriously difficult case when the long-run average change of the counter caused by applying the strategy is zero. Here we need to devise a suitable decomposition technique allowing for interpreting general strategies as “interleavings” of MD strategies and lifting the properties of MD strategies to general strategies. Furthermore, we need to devise techniques for reducing the problems of our interest to analyzing certain types of random walks that have already been studied in stochastic process theory. We discuss this more in the following subsection, and we refer to Appendix <ref> for a complete exposition of these results.
§.§ MD decomposition
As we already noted, one crucial observation behind Theorem <ref> is that if there is no increasing BSCC of 𝒜_σ for any σ∈Σ_MD, then n is an upper estimate of [c] for every type. In this section, we sketch the main steps towards this result.
First, we show that every path in 𝒜 can be decomposed into “interleavings” of paths generated by MD strategies.
Let α=p_0,𝐮_1,…,p_k be a path. For every i ≤ k, we use α_..i=p_0,𝐮_1,…,p_i to denote the prefix of α of length i. We say that α is compatible with an MD strategy σ if σ(α_..i) = (p_i,𝐮_i+1,p_i+1) for all i<k such that p_i∈ Q_n. Furthermore, for every path β=q_0,𝐯_1,q_1,…,q_ℓ such that p_k=q_0, we define the path α∘β= p_0,𝐮_1,p_1,…,p_k,𝐯_1,q_1,…,q_ℓ.
Let 𝒜 be a VASS MDP, π_1,…, π_k ∈ Σ_MD, and p_1,…,p_k ∈ Q. An MD-decomposition of a path α = s_1,…,s_m under
π_1,…, π_k and p_1,…,p_k is a decomposition of α into finitely many paths α = γ_1^1 ∘⋯∘γ_1^k ∘ γ_2^1∘⋯∘γ_2^k ∘ ⋯ ∘ γ_ℓ^1∘⋯∘γ_ℓ^k satisfying the following conditions:
* for all i < ℓ and j≤ k, the last state of γ_i^j is the same as the first state of γ_i+1^j;
* for every j≤ k, γ_1^j ∘⋯∘γ_ℓ^j is a path that begins with p_j and is compatible with π_j.
Note that π_1,…, π_k and p_1,…,p_k are not necessarily pairwise different, and the length of γ_i^j can be zero. Also note that the same α may have several MD-decompositions.
Intuitively, an MD-decomposition of α shows how to obtain α by repeatedly selecting zero or more transitions by π_1,…,π_k. The next lemma shows that for every VASS MDP 𝒜, one can fix MD strategies π_1,…,π_k and states p_1,…,p_k such that every path α in 𝒜 has an MD-decomposition under π_1,…,π_k and p_1,…,p_k. Furthermore, such a decomposition is constructible online as α is read from left to right.
For every VASS MDP 𝒜, there exist π_1,…,π_k ∈ Σ_MD, p_1,…,p_k ∈ Q, and a function Dec_𝒜 such that the following conditions are satisfied for every finite path α:
* Dec_𝒜(α) returns an MD-decomposition of α under π_1,…,π_k and p_1,…,p_k.
* Dec_𝒜(α) = Dec_𝒜(α_..len(α)-1) ∘ γ^1∘⋯∘γ^k, where exactly one of
γ^i has positive length (the i is called the mode of α).
* If the last state of α_..len(α)-1 is probabilistic, then the mode of α does not depend on the last transition of α.
A proof of Lemma <ref> is in Appendix <ref>.
According to Lemma <ref>, every strategy σ for 𝒜 just performs a certain “interleaving” of the MD strategies π_1,…,π_k initiated in the states p_1,…,p_k. We aim to show that if every BSCC of every 𝒜_π_j is non-increasing, then n is an upper estimate of [c] for every type. Since we do not have any control over the length of the individual γ_i^j occurring in MD-decompositions, we need to introduce another concept of extended VASS MDPs where the strategies π_1,…,π_k can be interleaved in “longer chunks”. Intuitively, an extended VASS MDP is obtained from 𝒜 by taking k copies of 𝒜 sharing the same counter. The j-th copy selects transitions according to π_j. At each round, only one π_j makes a move, where the j is selected by a special type of “pointing” strategy defined especially for extended MDPs.
Note that σ can be faithfully simulated in the extended VASS MDP by a pointing strategy that selects the indexes consistently with Dec_𝒜. However, we can also construct another pointing strategy that simulates each π_j longer (i.e., “precomputes” the steps executed by π_j in the future) and thus “closes cycles” in the BSCC visited by π_j. This computation can be seen as an interleaving of a finite number of independent random walks with non-positive expectations. Then, we use the optional stopping theorem to get an upper bound on the total expected number of “cycles”, which can then be used to obtain the desired upper estimate. We refer to Appendix <ref> for details.
§.§ A Note about Energy Games
One-dimensional VASS MDPs are closely related to energy games/MDPs <cit.>. An important open problem for energy games is the complexity of deciding the existence of a safe configuration where, for a sufficiently high energy amount, the responsible player can avoid decreasing the energy resource (counter) below zero. This problem is known to be in NP ∩ coNP, and a pseudopolynomial algorithm for the problem exists; however, it is still open whether the problem is in P when the counter updates are encoded in binary. Our analysis shows that this problem is solvable in polynomial time for energy (i.e., one-dimensional VASS) MDPs such that there is no increasing SCC of 𝒜_σ for any σ∈Σ_MD.
We say that an SCC 𝔹 of 𝒜_σ is non-decreasing if 𝔹 does not contain any negative cycles. Note that every bounded-zero SCC is non-decreasing, and an increasing SCC may but does not have to be non-decreasing.
An energy MDP has a safe configuration iff there exists a non-decreasing SCC of 𝒜_σ for some σ∈Σ_MD.
The “⇐” direction of Lemma <ref> is immediate, and the other direction can be proven using our MD decomposition technique, see Appendix <ref>.
Note that if there is no increasing SCC of 𝒜_σ for any σ∈Σ_MD, then the existence of a non-decreasing SCC is equivalent to the existence of a bounded-zero SCC, and hence it can be decided in polynomial time (see the results presented above). However, for general energy MDPs, the best upper complexity bound for the existence of a non-decreasing SCC is NP ∩ coNP. Interestingly, a small modification of this problem already leads to NP-completeness, as demonstrated by the following lemma.
The problem whether there exists a non-decreasing SCC 𝔹 of 𝒜_σ for some σ∈Σ_MD such that 𝔹 contains a given state p ∈ Q is NP-complete.
A proof of Lemma <ref> is in Appendix <ref>.
§ CONCLUSIONS
We introduced new estimates for measuring the asymptotic complexity of probabilistic programs and their VASS abstractions. We demonstrated the advantages of these measures over the asymptotic analysis of expected values, and we have also shown that tight complexity estimates can be computed efficiently for certain subclasses of VASS MDPs.
A natural continuation of our work is extending the results achieved for one-dimensional VASS MDPs to the multi-dimensional case. In particular, an interesting open question is whether the polynomial asymptotic analysis for non-deterministic VASS presented in <cit.> can be generalized to VASS MDPs. Since the study of multi-dimensional VASS MDPs is notoriously difficult, a good starting point would be a complete understanding of VASS MDPs with two counters.
§ PROOFS FOR SECTION <REF>
§.§ Proof of Lemma <ref>
[<ref>]
Let be a (maximal) solution to the constraint system (I) and , be a (maximal) solution to the constraint system (II). Then, for each counter c we have that either (c)>0 or ∑_t ∈ T(t)_t(c)>0, and for each transition t = (p,,q)∈ T we have that
* if p∈ Q_n then either (q)-(p)+∑_i=1^d(i)(i)<0 or (t)>0;
* if p∈ Q_p then either
∑_t'=(p,',q') ∈(p)P(t')((q')-(p)+∑_i=1^d'(i)(i))< 0
or (t)>0.
Proof:
Let A=T_n∪ Q_p, where T_n=⋃_p∈ Q_n(p). For each a∈ A, let _a be a probability distribution on Q such that
* _(p,,q)(q)=1 for a=(p,,q)∈ T_n,
* _p(q)=∑_(p,,q)∈(p)∩(q) P((p,,q)) for a=p∈ Q_p,
* and _a(p)=0 else,
let _a be a probability distribution on Q such that
* _(p,,q)(p)=1 for a=(p,,q)∈ T_n,
* _p(p)=1 for a=p∈ Q_p,
* and _a(p)=0 else,
and let _a be defined as
* _(p,,q)= for a=(p,,q)∈ T_n,
* and _p=∑_(p,,q)∈(p) P((p,,q)) for a=p∈ Q_p.
Then we can rewrite the constraint systems as
[c]0.35
Constraint system (I'):
Find ' ∈ℤ^A such that
∑_a ∈ A'(a) _a ≥0⃗
' ≥0⃗
and for each p∈ Q
∑_a∈ A('(a)_a(p)-'(a)_a(p)) =0
[c]0.6
Constraint system (II'):
Find ∈ℤ^d,∈ℤ^Q such that
≥0⃗
≥0⃗
and for each a∈ A
∑_p∈ Q ((p)_a(p) - (p)(_a(p))) + ∑_i=1^d _a(i)(i)≤ 0
We recognize systems (I) and (I') as equivalent, and systems (II) and (II') as equivalent as per the following lemma.
If ',, is a solution to the rewritten constraint systems (I') and (II'), then ,, is a solution to the original constraint systems (I) and (II), where (t)='(t) for t∈ T_n, and ((p,,q))=P((p,,q))'(p) for (p,,q)∈ T∖ T_n. Similarly, if ,, is a solution to the original constraint systems (I) and (II), then ',, is a solution to the rewritten constraint systems (I') and (II'), where '(t)=(t) for t∈ T_n, and '(p)=∑_t∈(p)(t) for p∈ Q_p.
The first half (I): Let ' be a solution of (I'), we will show that is a solution to (I), where (t)='(t) for t∈ T_n, and ((p,,q))=P((p,,q))'(p) for (p,,q)∈ T∖ T_n.
It holds from (I') that
∑_a ∈ A'(a) _a
=
∑_t ∈ T_n'(t) _t + ∑_p ∈ Q_p'(p) _p
=
=
∑_t ∈ T_n'(t) _t + ∑_p ∈ Q_p'(p) (∑_(p,,q)∈(p) P((p,,q)))
=
∑_t ∈ T_n(t) _t + ∑_p ∈ Q_p∑_(p,,q)∈(p)'(p) P((p,,q))
=
∑_t ∈ T_n(t) _t + ∑_p ∈ Q_p∑_(p,,q)∈(p)((p,,q))
=
∑_t ∈ T(t) _t ≥0⃗
≥ 0 holds from both '≥ 0 and P(t)≥ 0 for each t∈ T∖ T_n.
For each p∈ Q it holds from (I')
∑_a∈ A('(a)_a(p)-'(a)_a(p))
=
=
∑_a∈ T_n('(a)_a(p)-'(a)_a(p)) +
∑_a∈ Q_r('(a)_a(p)-'(a)_a(p))
=
∑_t∈(p)∩ T_n'(t) - ∑_t∈(p)∩ T_n'(t) +
∑_a∈ Q_r'(a)(∑_t∈(a)∩(p) P(t))- '(p)
=
∑_t∈(p)∩ T_n(t) - ∑_t∈(p)∩ T_n(t) +
∑_a∈ Q_r∑_t∈(a)∩(p)'(a)P(t)- ∑_t∈(p) P(t)'(p)
=
∑_t∈(p)∩ T_n(t) - ∑_t∈(p)∩ T_n(t) +
∑_a∈ Q_r∑_t∈(a)∩(p)(t)- ∑_t∈(p) (t)
=
∑_t∈(p)(t) - ∑_t∈(p)(t) =0
And for each p∈ Q_p, t∈(p) it holds ∑_t'∈(p)(t')=∑_t'∈(p)P(t')'(p)='(p), therefore it holds (t)=P(t)'(p)= P(t)∑_t'∈(p)x(t').
Thus is a solution to (I).
The first half (II): Let , be a solution of (II'), we will show it is also a solution of (II).
For each a=(p,,q)∈ T_n it holds from (II')
∑_p'∈ Q ((p')(_a(p') - (p')(_a(p'))) + ∑_i=1^d _a(i)(i)
=
(q) - (p) + ∑_i=1^d (i)(i)
≤
0
And for each a=p∈ Q_p it holds from (II')
∑_q∈ Q ((q)_a(q) - (q)_a(q))) + ∑_i=1^d _a(i)(i)
=
=
∑_q∈ Q ((q)(∑_t∈(p)∩(q) P(t))) - (p) + ∑_i=1^d _a(i)(i)
=
∑_t∈(p)(q)P(t) - ∑_t∈(p)(p)P(t) + ∑_i=1^d _a(i)(i)
=
∑_t∈(p) P(t)((q) - (p)) + ∑_i=1^d ∑_t∈(p) P(t)_t(i)(i)
=
∑_t∈(p) P(t)((q) - (p)) + ∑_t∈(p)∑_i=1^d P(t)_t(i)(i)
=
∑_t∈(p) P(t)( (q) - (p) + ∑_i=1^d _t(i)(i) )
≤
0
Therefore , is a solution of (II).
The second half (I'): Let be a solution of (I), we will show that ' is a solution of (I'), where '(t)=(t) for t∈ T_n, and '(p)=∑_t∈(p)(t) for p∈ Q_p.
From (I) it holds for T_p=⋃_p∈ Q_p(p)
∑_t ∈ T(t) _t
=
=
∑_t ∈ T_n(t) _t + ∑_t ∈ T_p(t) _t
=
∑_t ∈ T_n'(t) _t + ∑_(p,,q) ∈ T_p P((p,,q))·(∑_t'∈(p)(t'))
=
∑_t ∈ T_n'(t) _t + ∑_(p,,q) ∈ T_p P((p,,q)) '(p)
=
∑_t ∈ T_n'(t) _t + ∑_p∈ Q_p∑_(p,,q) ∈(p) P((p,,q)) '(p)
=
∑_t ∈ T_n'(t)·_t + ∑_p∈ Q_p'(p)·_p
=
∑_a ∈ A'(a)·_a
≥0⃗
We get '≥ 0 trivially from (I).
It also holds
∑_a∈ A('(a)_a(p)-'(a)_a(p))
=
=
∑_t∈ T_n('(t)_t(p)-'(t)_t(p))+
∑_q∈ Q_p('(q)_q(p)-'(q)_q(p))
=
∑_t∈(p)∩ T_n'(t)
-∑_t∈(p)∩ T_n'(t)
+∑_q∈ Q_p'(q)(∑_t∈(q)∩(p) P(t))
-∑_q∈ Q_p∩{p }'(q)
=
∑_t∈(p)∩ T_n'(t)
-∑_t∈(p)∩ T_n'(t)
+∑_q∈ Q_p∑_t∈(q)∩(p) P(t)'(q)
-∑_q∈ Q_p∩{p }'(q)
=
∑_t∈(p)∩ T_n(t)
-∑_t∈(p)∩ T_n(t)
+∑_q∈ Q_p∑_t∈(q)∩(p) P(t)· (∑_t'∈(q)(t'))
-
-∑_q∈ Q_p∩{p }∑_t∈(q)(t)
=
∑_t∈(p)∩ T_n(t)
-∑_t∈(p)∩ T_n(t)
+∑_q∈ Q_p∑_t∈(q)∩(p)(t)
-∑_q∈ Q_p∩{p }∑_t∈(q)(t)
=
∑_t∈(p)∩ T_n(t)
-∑_t∈(p)∩ T_n(t)
+∑_t∈ T_p∩(p)(t)
-∑_q∈ Q_p∩{p }∑_t∈(q)(t)
If p∈ Q_n, then this becomes
∑_t∈(p)∩ T_n(t)
-∑_t∈(p)∩ T_n(t)
+∑_t∈ T_p∩(p)(t)
-∑_q∈ Q_p∩{p }∑_t∈(q)(t)
=
=
∑_t∈(p)∩ T_n(t)
-∑_t∈(p)∩ T_n(t)
+∑_t∈ T_p∩(p)(t)
-0
=
=
∑_t∈(p)(t)
-∑_t∈(p)∩ T_n(t)
=
∑_t∈(p)(t)
-∑_t∈(p)(t)
=
0
with the last line being from (I).
And if p∈ Q_p, then it becomes
∑_t∈(p)∩ T_n(t)
-0
+∑_t∈ T_p∩(p)(t)
-∑_q∈ Q_p∩{p }∑_t∈(q)(t)
=
=
∑_t∈(p)∩ T_n(t)
+∑_t∈ T_p∩(p)(t)
-∑_t∈(p)(t)
=
=
∑_t∈(p)(t)
-∑_t∈(p)(t)
=
0
with the last line being from (I). Therefore ' is a solution of (I')
The second half (II'): Let , be a solution of (II) we will show that , is also a solution of (II').
From (II) we have for each t=(p,,q)∈ T_n that
(q)-(p)+∑_i=1^d(i)(i)
=
=
(q)·_t(q)-(p)·_t(p)+∑_i=1^d_t(i)(i)
=
= ∑_r∈ Q (r)·_t(r)-(r)·_t(r))+∑_i=1^d_t(i)(i)
≤ 0
Where we used that _t(r)=0 for every r≠ q, and _t(r)=0 for every r≠ p.
Additionally, From (II) we also have for each p∈ Q_p that
∑_t= (p,,q) ∈(p)P(t)((q)-(p)+∑_i=1^d_t(i)(i))
=
=
-(p)+∑_t= (p,,q) ∈(p)P(t)((q)+∑_i=1^d_t(i)(i))
=
=
-(p)+∑_q∈ Q∑_t∈(p)∩(q)P(t)((q)+∑_i=1^d_t(i)(i))
=
=
-(p)+∑_q∈ Q∑_t ∈(p)∩(q)P(t)(q)+ ∑_t∈(p)P(t) ( ∑_i=1^d_t(i)(i))
=
=
-(p)+∑_q∈ Q(q) ∑_t∈(p)∩(q)P(t) + ∑_i=1^d(i)·(∑_t ∈(p)P(t) _t(i))
=
=
-(p)·_p(p)+∑_q∈ Q(q)·_p(q) + ∑_i=1^d(i)·_p (i)
=
=
∑_q∈ Q((q)·_p(q)-(q)·_p(q)) + ∑_i=1^d(i)·_p (i)
≤ 0
Therefore , is also a solution to (II')
We will now rewrite the constraint systems (I') and (II') into matrix form. Let D be a A×{1,…, d } matrix whose columns are indexed by elements of A, and rows indexed by counters c∈{1,…,d}, such that the column D(a)=_a. And let F be a A× Q matrix, whose columns are indexed by elements of A, and rows are indexed by states p∈ Q, such that the column F(a) is equal to the vector such that (p)=_a(p)-_a(p) for each p∈ Q.
Then we can further rewrite the systems (I') and (II') as follows:
[c]0.45
constraint system (I'):
Find ' ∈ℤ^A such that
D ' ≥0⃗
' ≥0⃗
F ' = 0⃗
[c]0.45
constraint system (II'):
Find ∈ℤ^d,∈ℤ^Q with
≥0⃗
≥0⃗
F^T + D^T ≤0⃗
The rest then follows exactly the same as the proof of the dichotomy on non-stochastic VASS in <cit.> (Lemma 4), as the only difference between our systems and the ones used in <cit.> is that the matrix F now also may contain rational numbers other than -1,0,1. The proof in <cit.> is already made over ℤ, and the only additional requirement it needs is that each column of F sums up to 0, which is satisfied also by our F.
§.§ The proof from <cit.> (Lemma 4)
For the sake of completeness we include a copy of the proof from <cit.> (Lemma 4). All credit for the proof in this subsection goes to the author of <cit.>. The only changes we made was to rename some variables.
The proof will be obtained by two applications of Farkas' Lemma.
We will employ the following version of Farkas' Lemma, which states that for matrices A,C and vectors b,d, exactly one of the following statements is true:
[c]0.4
there exists x with
[ Ax ≥ b; Cx = d ]
[c]0.5
there exist y,z with
[ y ≥ 0; A^T y + C^T z = 0; b^T y + d^T z > 0 ]
We now consider the constraint systems (_) and (_) stated below.
Both constraint systems are parameterized by a ∈ A (we note that only Equations (<ref>) and (<ref>) are parameterized by a).
[c]0.4
constraint system (_):
there exists ∈ℤ^A with
U ≥ 0
≥ 0
= 0
() ≥ 1
[c]0.52
constraint system (_):
there exist
∈ℤ^,∈ℤ^() with
≥ 0
≥ 0
^T + ^T ≤ 0 with < 0 in line
We recognize constraint system (_) as the dual of constraint system (_)
in the following Lemma:
Exactly one of the constraint systems (_) and (_) has a solution.
We fix some a∈ A.
We denote by _a ∈ℤ^A the vector with _a(a') = 1, if a' = a, and _a(a') = 0, otherwise.
Using this notation we rewrite (_) to the equivalent constraint system (_'):
[r]0.3
constraint system (_'):
[r]0.3
[ ; ] ≥ [ 0; _ ]
= 0
Using Farkas' Lemma, we see that either (_') is satisfiable or the following constraint system (_') is satisfiable:
[r]0.47
constraint system (_'):
[ ; k ] ≥ 0
[ ; ]^T
[ ; k ] + ^T = 0
[ 0; _ ]^T
[ ; k ] + 0^T > 0
[c]0.45
constraint system (_') simplified:
≥ 0
k ≥ 0
^T + k + ^T = 0
k() > 0
We observe that solutions of constraint system (_') are invariant under shifts of , i.e, if , k, is a solution, then , k, + c · is also a solution for all c ∈ℤ (because elements of every row of ^T sum up to 0).
Hence, we can force to be non-negative.
We recognize that constraint systems (_') and (_) are equivalent.
We now consider the constraint systems (_) and (_) stated below.
Both constraint systems are parameterized by a counter (we note that only Equations (<ref>) and (<ref>) are parameterized by ).
[c]0.42
constraint system (_):
there exists ∈ℤ^A with
≥ 0 with ≥ 1 in line
≥ 0
= 0
[c]0.5
constraint system (_):
there exist
∈ℤ^,∈ℤ^() with
≥ 0
≥ 0
^T + ^T ≤ 0
() > 0
We recognize constraint system (_) as the dual of constraint system (_) in the following Lemma:
Exactly one of the constraint systems (_) and (_) has a solution.
We fix some counter .
We denote by _∈ℤ^ the vector with _(') = 1, if ' =, and _(') = 0, otherwise.
Using this notation we rewrite (_) to the equivalent constraint system (_'):
[r]0.3
constraint system (_'):
[r]0.3
[ ; ] ≥ [ _; 0 ]
= 0
Using Farkas' Lemma, we see that either (_') is satisfiable or the following constraint system (_') is satisfiable:
[r]0.47
constraint system (_'):
[ ; k ] ≥ 0
[ ; ]^T
[ ; k ] + ^T = 0
[ _; 0 ]^T
[ ; k ] + 0^T > 0
[c]0.45
constraint system (_') simplified:
≥ 0
k ≥ 0
^T + k + ^T = 0
() > 0
We observe that solutions of constraint system (_') are invariant under shifts of , i.e, if , k, is a solution, then , k, + c · is also a solution for all c ∈ℤ (because elements of every row of ^T sum up to 0).
Hence, we can force to be non-negative.
We recognize that constraint systems (_') and (_) are equivalent.
§.§ Proof of Lemma <ref>
[<ref>]
For every counter c such that (c)>0, every ε >0, every p∈ Q, and every σ∈Σ, there exists n_0 such that for all n≥ n_0 we have that ^σ_p ([c] ≥ n^1+ε )≤ kn^-ε where k is a constant depending only on .
Let P_1V_1,P_2V_2,… be the random variables encoding the computation under σ from p (i.e. P_iV_i represents the configuration at i-th step of the computation). And let R_1,R_2,… represent the value of rank at i-th step (i.e. R_i=rank(P_iV_i)). Then R_1,R_2,… is a supermartingale.
R_1,R_2,… is a supermartingale.
One can express R_i+1=R_i+X_i+1, where X_i+1=R_i+1-R_i is the change of rank in the (i+1)-st step. Then it holds ^σ_pn⃗(R_i+1|R_i)=^σ_pn⃗(R_i|R_i)+^σ_pn⃗(X_i+1|R_i)=R_i+^σ_pn⃗(X_i+1|R_i). We want to show that ^σ_pn⃗(X_i+1|R_i)≤ 0. Let T_i+1 be random variable representing the transition taken at (i+1)-st step. Then ^σ_pn⃗(X_i+1|R_i)= ∑_t∈ T_p^σ(T_i+1=t|R_i)·(t) where (t) represents the change of rank under transition t.
Let T_n=⋃_p∈ Q_n(p) and T_p=⋃_p∈ Q_p(p) , then we can write
^σ_pn⃗(X_i+1|R_i)=
∑_t∈ T_p_p^σ(T_i+1=t|R_i)·(t)+∑_t∈ T_n_p^σ(T_i+1=t|R_i)·(t).
Since for each t=(p,,q)∈ T it holds (t)=(q)-(p)+∑_i=1^d (i)(i), for each t∈ T_n it holds (t)≤ 0, and for each p∈ Q_n, it holds ∑_t∈(p) P(t)(t) ≤ 0. Therefore we can write
∑_t∈ T_p_p^σ(T_i+1=t|R_i)·(t)
=
∑_p∈ Q_p (_p^σ(P_i=p) ∑_t∈(p) P(t)·(t))
≤ 0
and
∑_t∈ T_n_p^σ(T_i+1=t|R_i)·(t)≤ 0
Thus E(X_i+1|R_i)≤ 0
Now let us consider the stopping rule τ that stops when either any counter reaches 0, or any counter c with (c)>0 becomes larger then n^1+ϵ for the first time. (i.e. either V_τ(c')<0 for any c'∈{1,…,d }, or V_τ(c)≥ n^1+ϵ for c with (c)>0). Then for all i, it holds that R_min(i,τ)≤ max_p∈ Q(p) + max_c∈{1,…,d}(c)· d · (n^1+ϵ+u), where u is the maximal increase of a counter in a single transition. Therefore we can apply optional stopping theorem to obtain:
max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n ≥ E(R_1)≥ E(R_τ)≥ pX_n^1+ϵ+(1-p)X_0
where X_n^1+ϵ represents the minimal possible value of R_τ if any counter c with (c)>0 has R_τ(c)≥ n^1+ϵ, p is the probability of any such counter being at least n^1+ϵ upon stopping, and X_0 represents the minimal value of R_τ if no such counter reached n^1+ϵ. We can simplify this as
max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n ≥ pX_n^1+ϵ+(1-p)X_0
max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-(1-p)· X_0 ≥ pX_n^1+ϵ
max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-(1-p)max_c∈{1,…,d}(c) · d· u ≥ pX_n^1+ϵ
max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-max_c∈{1,…,d}(c) · d· u +p· max_c∈{1,…,d}(c) · d· u ≥ pX_n^1+ϵ
max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-max_c∈{1,…,d}(c) · d· u ≥ pX_n^1+ϵ-p · max_c∈{1,…,d}(c) · d· u
max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-max_c∈{1,…,d}(c) · d· u ≥ p(X_n^1+ϵ- max_c∈{1,…,d}(c) · d· u)
max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-max_c∈{1,…,d}(c) · d· u /X_n^1+ϵ- max_c∈{1,…,d}(c) · d· u≥ p
max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-max_c∈{1,…,d}(c) · d· u /n^1+ϵ- max_c∈{1,…,d}(c) · d· u≥ p
As for all sufficiently large n it holds 0.5· n^1+ϵ≤ n^1+ϵ- max_c∈{1,…,d}(c) · d· u we have
max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-max_c∈{1,…,d}(c) · d· u /0.5· n^1+ϵ≥ p
max_p∈ Q(p)-max_c∈{1,…,d}(c) · d· u/0.5· n^1+ϵ + max_c∈{1,…,d}(c)· d· n/0.5· n^1+ϵ≥ p
max_p∈ Q(p)-max_c∈{1,…,d}(c) · d· u/0.5· n^1+ϵ + max_c∈{1,…,d}(c)· d/0.5· n^ϵ≥ p
Also as n^1+ϵ≥ n^ϵ we have
max_p∈ Q(p)-max_c∈{1,…,d}(c) · d· u/0.5· n^ϵ + max_c∈{1,…,d}(c)· d/0.5· n^ϵ≥ p
max_p∈ Q 2·(p)-max_c∈{1,…,d} 2·(c) · d· u + max_c∈{1,…,d} 2·(c)· d/ n^ϵ≥ p
As k=max_p∈ Q 2·(p)-max_c∈{1,…,d} 2·(c) · d· u + max_c∈{1,…,d} 2·(c)· d is a constant dependent only on the VASS MDP, it holds for each counter c with (c)>0 and for all sufficiently large n that ^σ_p ([c] ≥ n^1+ϵ )≤ p ≤ kn^-ϵ.
§.§ Proof of Lemma <ref>
[<ref>]
For each counter c such that (c)=0 we have that _[c] ∈Ω(n^2) and n^2 is a lower estimate of [c]. Furthermore, for every ε>0 there exist a sequence of strategies σ_1,σ_2,…, a constant k, and p∈ Q such that for every 0<ε'<ε, we have that
lim_n →∞_p^σ_n(^≤ kn^2-ε'(𝑇𝑎𝑟𝑔𝑒𝑡_n)) = 1
where 𝑇𝑎𝑟𝑔𝑒𝑡_n = {q∈() |(c)≥ n^2-ε}.
Let _ be the VASS MDP induced by transitions t with (t)>0.
In _, Each pair of states p,q∈ Q is either a part of the same MEC of _, or p is not reachable from q and vice-versa, in _.
This follows directly from satisfying kirhoff laws.
Therefore _ can be decomposed into multiple MECs, and there are no transitions in the MEC decomposition of _. Let these MECs be B_1,…,B_k, and let _1,…,_k be the restriction of to the transitions of B_1,…, B_k. (i.e. _i(t)=(t) if B_i contains t, and otherwise _i(t)=0).
For each 1≤ i≤ k, let _i=_i/∑_t∈ T_i(t) be the normalized vector of _i, and let σ_i be a Markovian strategy for B_i such that σ_i(p)(t)=_i(t)/∑_t∈(p)_i(t) for t such that ∑_t∈(p)_i(t)>0, and undefined otherwise. We will use M_i to represent the Markov chain obtained by applying σ_i to B_i.
Let m_i∈ℤ^Q be such that m_i(p)=∑_t∈(p)_i(t). Then m_i is an invariant distribution on M_i. Also the expected effect of a single computational step in M_i taken from distribution m_i is equal to ∑_(p,,q)∈ T_i((p,,q)).
Let us consider a single computation step in M_i taken from distribution m_i, and let X be resulting distribution on transitions during this step. Then for each transition t=(p,,q)∈ T it holds:
* if p∈ Q_p then X(t)=P(t)· m_i(p) =P(t)·∑_t∈(p)_i(t)=_i(t).
* if p∈ Q_n then X(t)=
σ_i(p)(t)· m_i(p)
=
_i(t)/∑_t∈(p)_i(t)·∑_t∈(p)_i(t)
=
_i(t).
And as the next distribution m_i' on states can be expressed as m_i'(p)=∑_(q,,p)∈ T X((q,,p)) = ∑_(q,,p)∈ T_i((q,,p))=∑_t∈(p)_i(t)=m_i(p), m_i is an invariant distribution on M_i.
Let _1,…, _k be the expected update vectors per single computational step generated by the invariants in M_1,…, M_k. Then from x being a solution to (I), we get that for a_i= ∑_t∈ T x_i(t) it holds
∑_i=1^k a_i·_i ≥0⃗
as well as
(∑_i=1^k a_i·_i)(c) > 0
for c with (c)=0.
Therefore we can use the results of <cit.>, which states that if there exists a sequence of Markov chains M_1,…,M_k with their respective increments _1,…,_k, and positive integer coefficients a_1,…,a_k such that ∑_i=1^k a_i_i≥0⃗, then there exists a function L(n)∈Θ(n), a state p∈ Q, and sequence of strategies σ_1,σ_2,… such that the probability X_n of the computation from p under σ_n never decreasing at each of the first L^2-ϵ'(n) steps any counter below b_1n_pn⃗^σ_n(C_i^n(c))-b_2n, where b_1,b_2 are some constants and C_i^n is the random variable representing the counter vector after i steps when computing form p under σ_n, satisfies lim_n→∞ X_n = 1. And furthermore, for each counter c with (∑_i=1^k a_i_i)(c) > 0 it holds that _pn⃗^σ_n(C_i^n(c))∈Ω(i).
Therefore, with probability at least X_n we reach a configuration q with each counter c such that (c)=0 having (c)≥ n^2-ϵ within L^2-ϵ'(n)≤ kn^2-ϵ' steps, and it holds lim_n→∞ X_n=1.
§.§ VASS MDP with DAG-like MEC Decomposition
We formalize and prove the idea sketched at the end of Section <ref>.
Let be a DAG-like VASS MDP with d counters and a DAG-like MEC decomposition, and β=M_1,…, M_k be it's type. Let _0,_1,…,_k∈{n,∞}^d, and let M_i^_i-1 be the MEC obtained by taking M_i and changing the effect of every transition to ' such that for each c∈{1,…,d }, '(c)=(c) if _i-1(c)=n, and '(c)=0 if _i-1(c)=∞. Furthermore, let the following hold for each counter c∈{1,…,d } and 1≤ i≤ k
* _0(c)=n,
* _i(c)=n if both _i-1(c)=n and n is a tight estimate of c in M_i^_i-1,
* _i(c)=∞ if either _i-1(c)=∞ or n^2 is a lower estimate of c in M_i^_i-1.
Then for each ϵ>0, there exists a sequence of strategies σ_1^1,σ_1^2,…,σ_1^k,σ_2^1,…,σ_2^k,σ_3^1,…, such that for each 1≤ i≤ k, and each n, the computation under σ_n^i initiated in some state of M_i with initial counter vector such that for each c∈{1,…,d } it holds
* (c)≥⌊n/2^i⌋ if _i-1(c)=n,
* (c)≥⌊(n/2^i)^2-ϵ_i-1/2^i⌋ if _i-1(c)=∞,
reaches with probability X_n a configuration of M_i with counter vector such that for each c∈{1,…,d } it holds
* (c)≥⌊n/2^i+1⌋ if _i(c)=n,
* (c)≥⌊(n/2^i+1)^2-ϵ_i/2^i+1⌋ if _i(c)=∞,
where ϵ_i=iϵ/k, and it holds lim_n→∞ X_n=1.
Furthermore, for each counter c∈{1,…,d }, if _k(c)=n then n is a tight estimate of [c] for type β, and if _k(c)=∞ then n^2 is a lower estimate of [c] for type β.
Proof by induction on k.
Base case of k=1 holds from Lemma <ref>, and the second part holds from Lemma <ref>.
Assume now the Lemma holds for the type M_1,…, M_i-1. Let σ_1,σ_2,… and p∈ Q be from the Lemma <ref> for M_i^_i-1 and for ϵ_i. Then from induction assumption, there are strategies such that when the computation reaches M_i the counters vector is with probability Y_i such that lim_n→∞ Y_n=1 and
* (c)≥⌊n/2^i⌋ if _i-1(c)=n,
* (c)≥⌊(n/2^i)^2-ϵ_i-1/2^i⌋ if _i-1(c)=∞.
Now let us consider the following: upon reaching M_i, we divide the counters vector into two halves, each of size ⌊/2⌋, and then we perform the computation of σ_⌊n/2^i+1⌋ on the first half for ln^2-(i+0.5)ϵ/k steps. (i.e., if the effect on any counter c is less than -⌊/2⌋(c), then the computation stops). Then from Lemma <ref> we will with probability X_n reach a configuration with all counters c such that _i-1(c)=n and _i(c)=∞ being at least (c)≥ (⌊n/2^i+1⌋)^2-ϵ_i, such that lim_n→∞ X_n=1. As the length of this computation is only ln^2-(i+0.5)ϵ/k, we cannot decrease any “deleted” counter c with _i-1(c)=∞ by more than an^2-(i+0.5)ϵ/k for some constant a. Therefore for all sufficiently large n, the computation cannot terminate due to such a counter being depleted. And since the second half of is untouched, we still have for each counter c with _i-1(c)=∞ at least ⌊(n/2^i)^2-ϵ_i-1/2^i+1⌋ and for each counter c with _i-1(c)=n at least ⌊n/2^i+1⌋.
Therefore with probability at least X_nY_n the computation ends in configuration q of M_i such that for each counter c
* (c)≥⌊n/2^i+1⌋ if _i(c)=n,
* (c)≥⌊(n/2^i+1)^2-ϵ_i/2^i+1⌋ if _i(c)=∞
And it holds lim_n→∞ X_nY_n=1. Thus n^2 is a lower estimate of [c] for type M_1,…,M_i for each c with _i(c)=∞.
For the second part of the lemma, let σ be some strategy. Then for every counter c with _i(c)=n, we have from the induction assumption that the probability, of the strategy σ started in initial configuration pn⃗, reaching M_i along type M_1,…,M_i with c being at least n^1+ϵ' for some 0< ϵ', is at most Z_n, where lim sup_n →∞ Z_n=0. Let σ' be a strategy which, for an initial state q such that q is a state of M_1, computes as σ after a path from p to q. Then from Lemma <ref> we have for each 0<ϵ̂ that
^σ'_q n^1+ϵ'(_M_i^_i-1[c] ≥ n^1+ϵ̂ ) = ^σ'_q n^1+ϵ'(_M_i^_i-1[c] ≥ (n^1+ϵ')^log_n^1+ϵ'n^1+ϵ̂ ) ≤ kn^1-log_n^1+ϵ'n^1+ϵ̂
Let y=1-log_n^1+ϵ'n^1+ϵ̂, note that for ϵ'<ϵ̂ it holds y<0. Also let R_q=^σ_pn⃗({α| the first state of M_i in α is q }). Then we can write for each ϵ'<ϵ̂<ϵ
^σ_p n⃗([c] ≥ n^1+ϵ|=M_1,…,M_i )
≤^σ_p n⃗([c] ≥ n^1+ϵ'|=M_1,…,M_i-1 ) + ∑_q∈ Q R_q ^σ'_q n^1+ϵ'(_M_i^_i-1[c] ≥ n^1+ϵ̂ )
≤ Z_n+kn^y
And since lim_n→∞ Z_n+kn^y=0, and ϵ',ϵ̂ can be arbitrarily small, we can find values for them for arbitrary ϵ>0. Thus n is a tight estimate of [c] for type M_1,…,M_i.
§ PROOFS FOR SECTION <REF>
Given a one-dimensional VASS MDP, deciding the existence of an increasing BSCC of _σ for some σ∈Σ_ can be done in polynomial time.
If such BSCC exists, then it gives us a solution with ∑_(p,,q)∈ T((p,,q))(c) > 0 for (I). The solution is such that if is the invariant distribution on states of 𝔹 under σ, then for each transition (p,,q) contained in 𝔹, ((p,,q))=(p) if p∈ Q_n is a non-deterministic state, and ((p,,q))=(p)P((p,,q)) if p∈ Q_p is a probabilistic state, while (t)=0 for each t that is not contained in 𝔹. And every solution of (I) such that ∑_(p,,q)∈ T((p,,q))(c) > 0 can be used to extract a strategy with expected positive effect on the counter (Appendix <ref>: Lemma <ref>). But this is only possible if there exists an increasing BSCC of _σ for some σ∈Σ_, as these are the extremal values of any strategy.[Here we rely on well-known results about finite-state MDPs <cit.>.]
Given a one-dimensional VASS MDP, if there exists an increasing BSCC of _σ for some σ∈Σ_, then [c] and are unbounded in type M such that 𝔹⊆ M. Furthermore, let M[t] be the MEC containing the transition t. If M[t] exists and 𝔹⊆ M[t], then [t] is unbounded for type M[t]. Additionally, [t] is also unbounded for each type β=M_1,…,M_k such that there exist j≤ i such that M_i=M[t] and 𝔹⊆ M_j.
The computation under σ from any state of 𝔹 has a tendency to increase the counter, and as n goes towards ∞ the probability of the computation terminating goes to 0.[For formal proof see e.g. <cit.> (Lemma 6)] Therefore both [c] and are unbounded for type M with 𝔹⊆ M. Furthermore, if 𝔹⊆ M[t], then t can be iterated infinitely often with high probability by periodically “deviating” from σ by temporarily switching to some other strategy which never leaves M[t] and has positive chance of using t. Clearly this can be done in such a way that the overall strategy still has the tendency to increase the counter. Therefore in such case [t] is unbounded for type M[t]. The last part of the theorem comes from the fact that we can first pump the counter in M_j to an arbitrarily large value, before moving to M[t] where we then can iterate any strategy on M[t] that has positive chance of using t.
Given a one-dimensional VASS MDP, if there exists an unbounded-zero BSCC of _σ for some σ∈Σ_, then _∈Ω(n^2) and n^2 is a lower estimate of for type M such that 𝔹⊆ M.
Furthermore, if 𝔹 contains the transition t, then also _[t]∈Ω(n^2) and n^2 is a lower estimate of [t] for type M such that 𝔹⊆ M.
This follows directly from the results of <cit.> (Section 3.3).
Given a one-dimensional VASS MDP, if there exists an bounded-zero BSCC of _σ for some σ∈Σ_, then is unbounded for type M such that 𝔹⊆ M. Furthermore, if 𝔹 contains t then also [t] is unbounded for type M such that 𝔹⊆ M.
Since 𝔹 is bounded-zero, it must hold that there is no non-zero cycle in 𝔹. Therefore the effect of every path of 𝔹 is bounded by some constant. As such, the computation under σ started from any state of 𝔹 can never terminate if the initial counter value is sufficiently large.
It is decidable in polynomial time if a one-dimensional VASS MDP , that contains no increasing BSCC of _σ for any σ∈Σ_, whether contains a bounded-zero BSCC 𝔹 of _σ for some σ∈Σ_.
Since there is no increasing BSCC of _σ for any MD strategy σ, there can be no solution to (I) with ∑_(p,,q)∈ T((p,,q))(c) > 0, as any such solution could be used to extract a strategy with expected positive effect on the counter (Appendix <ref>: Lemma <ref>). Therefore, from Lemma <ref>, there exists a ranking function rank, defined by a maximal solution of (II) (see Section <ref>), such that any transition from a nondeterministic state has a non-positive effect on rank, and the expected effect on rank of a single computational step taken from a probabilistic state is non-positive. Furthermore, rank depends on the counter value. Therefore, any BSCC which contains a transition whose effect on rank can be non-zero cannot be bounded-zero. If such a transition were from a non-deterministic state, then it could only decrease rank, and as rank can never increase in expectation, this would lead to a positive chance of a cycle with negative effect on rank and thus also on the counter. And if the transition were from a probabilistic state, then, as the expectation is non-positive, there would be a non-zero probability of a transition with negative effect on rank being chosen. Therefore, any bounded-zero BSCC can contain only those transitions that never change rank.
On the other hand, any BSCC 𝔹 of _σ for some σ∈Σ_, which contains only transitions that never change rank must be bounded-zero, as that means the effect of any cycle in 𝔹 must be 0 (as any non-zero cycle would have necessarily changed rank in at least one of its transitions).
Therefore it is sufficient to decide whether there exists a BSCC 𝔹 of _σ for some σ∈Σ_ containing only those transitions that do not change rank. We can do this by analyzing each MEC one by one. For each MEC we first compute rank using the system (II) (see Section <ref>), then proceed by first removing all transitions that can change rank, and then iteratively removing non-deterministic states that do not have any outgoing transition left, as well as probabilistic states for which we removed any outgoing transition, until we reach a fixed point. If there exists a bounded-zero BSCC 𝔹 of _σ for some σ∈Σ_, then all transitions of 𝔹 will remain in the fixed point, as they can never be removed. On the other hand, once we reach the fixed point, it holds for any state p that is left that there either exists a “safe” outgoing transition if p∈ Q_n, or all outgoing transitions are “safe” if p∈ Q_p, and these “safe” transitions end in a “safe” state; here a state is “safe” if it is left in the fixed point, and a transition is “safe” if its effect on rank is 0. Thus we can simply select any MD strategy on the states/transitions that are left, and it must have a bounded-zero BSCC. And if the fixed point is empty, then there can be no bounded-zero BSCC 𝔹 of _σ for any σ∈Σ_. Clearly this can be done in polynomial time.
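The fixed-point pruning described above can be sketched as follows; the encoding of a MEC as explicit state and transition collections, and the helper predicates is_probabilistic and changes_rank, are illustrative assumptions rather than notation from the paper.

def has_bounded_zero_bscc(states, transitions, is_probabilistic, changes_rank):
    """True iff some MD strategy inside this MEC has a BSCC built only from
    transitions that never change the ranking function."""
    safe_states = set(states)
    safe_trans = {t for t in transitions if not changes_rank(t)}
    changed = True
    while changed:                      # iterate until a fixed point is reached
        changed = False
        for p in list(safe_states):
            out_all = [t for t in transitions if t[0] == p]
            out_safe = [t for t in out_all if t in safe_trans and t[2] in safe_states]
            if is_probabilistic(p):
                # a probabilistic state survives only if all outgoing transitions
                # are rank-preserving and lead to surviving states
                keep = len(out_safe) == len(out_all) and len(out_all) > 0
            else:
                # a nondeterministic state survives if at least one safe
                # outgoing transition into a surviving state remains
                keep = len(out_safe) > 0
            if not keep:
                safe_states.discard(p)
                safe_trans.difference_update(out_all)
                changed = True
    # a nonempty fixed point yields an MD strategy with a bounded-zero BSCC
    return len(safe_states) > 0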
One might ask whether the restriction to one-dimensional VASS MDPs not containing an increasing BSCC of _σ for any σ∈Σ_ is necessary in Lemma <ref>. The answer is yes, as Lemma <ref> shows that deciding the existence of a bounded-zero BSCC of _σ for some σ∈Σ_ is NP-complete for general one-dimensional VASS MDPs.
Given a one-dimensional VASS MDP , if there is no increasing or bounded-zero BSCC of A_σ for any σ∈Σ_, then n^2 is an upper estimate of for every type.
Given a one-dimensional VASS MDP , if there is no increasing BSCC of A_σ for any σ∈Σ_, then n is an upper estimate of [c] for every type.
To prove these two Lemmata, we need to consider a certain overapproximation of , which in some sense is in multiple states at the same time. This overapproximation will allow us to view any computation on as if with very high probability, the computation was at each step choosing one of finitely many (depending only on ) random walks/cycles, whose effects correspond to their corresponding BSCC (increasing, bounded-zero, unbounded-zero, decreasing). That is if there is no increasing or bounded-zero BSCC of A_σ for any σ∈Σ_, then the expected effect of these random walks can only either be negative (decreasing), or 0 but with non-zero variance (unbounded-zero). This then allows us to provide some structure to the VASS MDP which will then allow us to prove these lemmata. A key concept to defining this overapproximation is that of an MD-decomposition, which roughly states that for each path on a VASS MDP, we can color each transition using one of finitely many colors, such that the sub-path corresponding to each color is a path under some MD strategy associated with that color. We then show that we can color any path on using a finite set of colors, and that this coloring can be made “on-line”, that is a color can be assigned uniquely (in some sense) to each transition at the time this transition is taken in the computation.
[<ref>]
For every VASS MDP , there exist π_1,…,π_k ∈Σ_, p_1,…,p_k ∈ Q, and a function _ such that the following conditions are satisfied for every finite path α:
* _(α) returns an MD-decomposition of α under π_1,…,π_k and p_1,…,p_k.
* _(α)=_(α_..(α)-1) ∘ γ^1∘⋯∘γ^k, where exactly one of
γ^i has positive length (the i is called the mode of α).
* If the last state of α_..(α)-1 is probabilistic, then the mode of α does not depend on the last transition of α.
Proof by induction on the number of outgoing transitions from non-deterministic states in .
Base case: Every non-deterministic state has exactly one outgoing transition. Then there exists only a single strategy π and it is MD. Therefore let k=|Q|, π_1=…=π_k=π, and p_1,…,p_k be all the distinct states of . Then let _(ϵ)=ϵ, and for a path α with initial state p_i let _(α)=_(α_..(α)-1)∘γ^1∘γ^2∘…∘γ^k such that γ^j=p_j for j≠ i and γ^i=q,,r where α=p_i,…,q,,r.
Induction step: Assume every VASS MDP ' with less then i outgoing transitions from non-deterministic states satisfies the lemma, and let have exactly i outgoing transitions from non-deterministic states.
If contains no non-deterministic state p∈ Q_n with |(p)|≥ 2 then base case applies. Otherwise, let us fix some state p∈ Q_n with |(p)|≥ 2, and let t_r,t_g∈(p) be such that t_r≠ t_g. Let _r and _g be VASS MDPs obtained from by removing t_g and t_r, respectively.
For any path α, we define a red/green-decomposition of α on as α = g_1∘ r_1∘ g_2∘ r_2∘…∘ g_ℓ∘ r_ℓ (all of positive length except potentially g_1 and r_ℓ) satisfying the following:
* for every 1≤ i<ℓ, the last state of g_i is p;
* if (r_ℓ)>0 then the last state of g_ℓ is p;
* for every 1≤ i<ℓ, the last state of r_i is p;
* for every 1<i ≤ℓ, the first state of g_i is p and the first transition of g_i is t_g;
* for every 1≤ i ≤ℓ, the first state of r_i is p and the first transition of r_i is t_r;
* g_α=g_1 ∘⋯∘ g_ℓ is a path on _g.
* r_α=r_1 ∘⋯∘ r_ℓ is a path on _r.
Clearly every path on has a unique red/green-decomposition that can be computed online.
Now let __g and __r, π_1^g,…, π_k_g^g and π_1^r,…,π_k_r^r, p_1^g,…, p_k_g^g and p_1^r,…,p_k_r^r, k_g and k_r be the functions, MD strategies, states and k values for _g and _r, respectively. Note that their existence follows from the induction assumption. Then let k=k_g+k_r, π_1=π_1^g,π_2=π_2^g,…, π_k_g=π_k_g^g,π_k_g+1=π_1^r,π_k_g+2=π_2^r,…,π_k_g+k_r=π_k_r^r, p_1=p_1^g,p_2=p_2^g,…, p_k_g=p_k_g^g,p_k_g+1=p_1^r,p_k_g+2=p_2^r,…,p_k_g+k_r=p_k_r^r. We now define _(ϵ)=ϵ and _(α)=_(α_..(α)-1)∘γ^1∘γ^2∘…∘γ^k such that if t=(q,,r) is the last transition of α, and α = g_1∘ r_1∘ g_2∘ r_2… g_ℓ∘ r_ℓ is the red/green-decomposition of α on , it holds:
* if (r_ℓ)=0, then let i be the __g-mode of g_α=g_1 ∘⋯∘ g_ℓ. Then we put γ^j=p_j for each j≠ i, and γ^i=q,,r;
* if (r_ℓ)>0, then let i be the __r-mode of r_α=r_1 ∘⋯∘ r_ℓ. Then we put γ^j=p_j for each j≠ k_g+i, and γ^k_g+i=q,,r;
From Lemma <ref> we can view any strategy σ on as if σ were choosing "which of the k MD strategies to advance" at each computational step. That is, let α be some path produced by a computation under a strategy σ, then the "MD strategy to advance" chosen by σ after α is the MD strategy π_i where i is such that
* if the last state p of α is probabilistic, then i is the _-mode of the path α,,q, for any (p,,q)∈(p) (note that i does not depend on which transition of (p) is chosen);
* if the last state p of α is non-deterministic, then let t=(p,,q)∈(p) be the transition chosen by σ in α. Then i is the _-mode of the path α,,q.
Each of these MD strategies π_i can be expressed using a Markov chain _i which is initialized in state p_i. Whenever an MD strategy gets chosen, the corresponding Markov chain makes one step. Naturally, there are some restrictions on which of the indexes can be chosen at a given time, namely a strategy can only choose an index i such that _i is currently in the same state as the Markov chain which was selected last. However, for our purposes we will consider pointing strategies that are allowed to choose any index, regardless of the current situation. We shall call a VASS MDP in which such pointing strategies are allowed, and to which a special “die” transition causing instant termination is added, an extended VASS MDP.
Formally speaking, let π_1,…, π_k and p_1,…, p_k be the MD strategies and states from Lemma <ref> associated with _. An extended VASS MDP associated to the 1-dimensional VASS MDP is the 2-dimensional VASS MDP '= Q', (Q_n',Q_p'),T',P' where Q'=Q^k×{0,1,…,k }, Q_n'=Q^k×{0 }, Q_p=Q^k×{1,…,k }, and
* T'=T_n'∪ T_p'∪ T_die' where
* T_n'={((p_1,…,p_k,0)),(0,0),(p_1,…,p_k,i)| (p_1,…,p_k)∈ Q^k, i∈{1,…,k}};
* T_p'= {((p_1,…,p_k,i),(_i,0),(p_1,…,p_i-1,q_i,p_i+1,…,p_k,0))| (p_1,…,p_k)∈ Q^k, i∈{1,…,k}, and either π_i(p_i)=(p_i,_i,q_i) or both of p_i∈ Q_p and (p_i,_i,q_i)∈ T };
* T_die'={(p,(0,-1),p)| p∈ Q_n'};
* P'(((p_1,…,p_k,i),(_i,0),(p_1,…,p_i-1,q_i,p_i+1,…,p_k,0)))= 1 if p_i∈ Q_n, and P((p_i,_i,q_i)) if p_i∈ Q_p.
We call strategies on the extended VASS MDP pointing strategies. Note that each strategy on a VASS MDP has an equivalent pointing strategy. Whenever a pointing strategy σ chooses a transition ((p_1,…,p_k,0),(0,0),(p_1,…,p_k,i)), then we say σ pointed at the Markov chain _i.
Note that in the following we only consider computations on the extended VASS MDP initiated in the initial state (p_1,…,p_k,0), and with the second counter being set to 0, so to simplify the notation, we will write only ^σ_n instead of ^σ_(p_1,…,p_k,0)(n,0).
Given a sequence of strategies σ_1,σ_2,… we will define a sequence of pointing strategies σ_1^δ,σ_2^δ,… such that each σ_n^δ in some sense “behaves as” σ_n, but at the same time it “precomputes” the individual Markov chains.
Since a formal description of σ_n^δ would be overly complicated, we will give only a high level description of σ_n^δ. The sequence σ_1^δ,σ_2^δ,… is parameterized by 0<δ<1. To help us define the behavior of σ_n^δ, we assume σ_n^δ “remembers” (it can always compute these from the input) some paths γ_1,…,γ_k,α. At the beginning these are all initialized to γ_1=…=γ_k=α=ϵ.
A computation under σ_n^δ operates as follows: First σ_n^δ internally selects i∈{1,…,k } that σ_n would select after α; that is i is the _-mode of α', where α' is such that if the last state p of α is probabilistic then α' is α extended by a single transition, and if p is nondeterministic then α'=α,,q is α extended by the transition (p,,q) where (p,,q) is the transition chosen by σ_n in α (i.e. (p,,q) is chosen at random using the probabilistic distribution σ_n(α)). Then σ_n^δ asks if γ_i≠ϵ, if yes then it skips to step 2), otherwise it first performs step 1) before moving to step 2):
* Let (p_1,…,p_k,0) be the current state of '. Then in each non-deterministic state σ_n^δ keeps pointing at _i until either, if p_i is not a state of a BSCC of M_i, it reaches a state (p_1,…,p_i-1,q_i,p_i+1,…,p_k,0) where q_i is a state of a BSCC of M_i, or, if p_i is a state of a BSCC of _i, σ_n^δ stops pointing at _i with probability 1/2 each time the computation returns to (p_1,…,p_k,0).
In both cases, if this takes more than 2n^δ steps then σ_n^δ terminates using the “die” transitions (i.e. σ_n^δ keeps reducing the second counter until termination using the transitions from T_die'). After this ends, σ_n^δ sets γ_i to the path generated by the probabilistic transitions during this iterating (note that this can be seen as a path on _i).
* Let γ_i=p_1,_1,p_2… p_ℓ. Since ℓ>1 we have all the information needed to know which index σ_n would have chosen in its next step. Let α'=α,_1,p_2 be α extended by the transition (p_1,_1,p_2). Then σ_n^δ replaces α with α', and γ_i with the path p_2… p_ℓ obtained by removing the first transition from γ_i.
At this point this process repeats, until α is a terminating path for initial counter value n on , at which point σ^δ_n terminates using the transitions from T_die'.
Let l=k· u, where u is the maximal possible change of the counter per single transition. When started in the initial state (p_1,…,p_k,0), we can view σ_n^δ as if we were computing as per σ_n, but occasionally made some “extra” (precomputed) steps in some of the Markov chains. These “extra” steps correspond exactly to the paths γ_1,…,γ_k, and since the probability of these being longer than kn^δ decreases exponentially with n, the probability that σ_n^δ started with initial counter values (n+ln^δ,0) terminates before σ_n would have for initial counter value n in the first n^2+ϵ steps goes to 0 as n goes to ∞, for each ϵ>0. Therefore, if it were to hold that σ_n can perform more than n^2+ϵ steps with probability at least a>0 for some ϵ>0 and for infinitely many n, conditioned on =β for some β (note that β does not depend on n), then it holds lim sup_n→∞_n+ln^δ^σ^δ_n(≥ n^2+ϵ )≥a·(β)/2>0.
Similarly, the counter value of the first counter c of ', when computing under σ_n^δ from initial value n+ln^δ, is at each point at most n+ln^δ plus the effect of the paths γ_1,…, γ_k, and α. As the length of all of γ_1,…,γ_k is at most kn^δ, their total effect on the counter at each point can be at most ln^δ. And α is the path generated by a computation of σ_n. Therefore, if it were to hold that σ_n can pump the counter to more than n^1+ϵ with probability at least a>0 for some ϵ>0 and for infinitely many n, conditioned on =β for some β (note that β does not depend on n), then it holds lim sup_n→∞_n+ln^δ^σ^δ_n([c]≥ n^1+ϵ-ln^δ )≥a·(β)/2>0.
Therefore the following two lemmata imply Lemmata <ref> and <ref>.
If is a one-dimensional VASS MDP such that there is no increasing BSCC of A_σ for any σ∈Σ_, then
lim sup_n→∞_n+ln^δ^σ^δ_n([c]≥ n^1+ϵ-ln^δ )=0
for each 0<δ<ϵ<1.
If is a one-dimensional VASS MDP such that there is no increasing or bounded-zero BSCC of A_σ for any σ∈Σ_, then
lim sup_n→∞_n+ln^δ^σ^δ_n(≥ n^2+ϵ )= 0
for each 0<δ<ϵ<1.
Let us begin with a proof for Lemma <ref>. For simplification, let us assume that if the first counter of ' becomes negative while iterating in some Markov chain _j before it hits the target state of _j, that is while performing step 1) as per the description of σ_n^δ, then the computation does not terminate and instead it continues until this target state is reached at which point the computation terminates if the counter is still negative. Clearly this can only prolong the computation. Therefore, each Markov chain _j contributes to computation of σ_n^δ by at most a single path α_j (to reach a BSCC), and then of cycles over some state of a BSCC of _j. Let X_i^j denote the effect of the i-th cycle of _j performed under the computation of σ_n^δ. As each BSCC of A_σ for any σ∈Σ_ is either unbounded-zero or decreasing, it holds that either ^σ_n^δ(X_i^j)= 0 while Var^σ_n^δ(X_i^j)>0 (unbounded-zero), or ^σ_n^δ(X_i^j)<0 (decreasing).
It also holds that ^σ_n^δ((α_j))=b_j for some constant b_j, and as the length of each α_i is bounded by n^δ, the maximal possible effect of all α_1,…,α_k on the counter is ln^δ. The maximal length of each cycle is bounded by n^δ, therefore we can upper bound the expected length of all cycles of _j by n^δ times the expected number of such cycles. Clearly the expected number of cycles corresponding to decreasing BSCCs is at most linear, as each such cycle moves the expectation closer to 0, and there is no way to move the expectation away from 0 by more than a constant.
To bound the expected number of cycles corresponding to unbounded-zero BSCCs, we shall use the following lemma, which is proven in Appendix <ref>.
Let be a one-dimensional VASS MDP, and let X_1,X_2,… be random variables s.t. each X_i corresponds to the effect of a path on some unbounded-zero BSCC 𝔹 of _σ for some σ∈Σ_, that starts in some state p of 𝔹 and terminates with probability 1/2 every time p is reached again. Let S_0,S_1,… be defined as S_0=0, S_i=S_i-1+X_i, and τ_n be a stopping time such that either S_τ_n≤ -n or S_τ_n≥ n. Then it holds (τ_n)∈(n^2).
It says that the expected number of cycles before their cumulative effect exceeds either n^1+μ or -n^1+μ is in (n^2+2μ), for each μ. Therefore the expected number of cycles upon either effect of -n-2ln^δ or n^1+ϵ is in (n^2+2ϵ) for all ϵ>0, as for all sufficiently large n it holds -n^1+ϵ≤ -n-2ln^δ. Therefore the expected length of whole computation, when started in n+ln^δ, and stopped upon either effect of -n-ln^δ or n^1+ϵ is in (n^2+2ϵn^δ)=(n^2+2ϵ+δ).
Let X_n,ϵ^δ be the random variable encoding the number of steps the computation under σ_n^δ takes before the effect on the counter is either less than -n-ln^δ, or at least n^1+ϵ, or until σ_n^δ performs a “die” move, whichever comes first. The above says that ^σ_n^δ_n+kn^δ(X_n,ϵ^δ)≤ an^2+2ϵ+δ for some constant a. Furthermore, let P_n,ϵ^δ be the probability that the computation under σ_n^δ reaches an effect on the counter of at least n^1+ϵ before either hitting an effect less than -n-ln^δ or performing a “die” move. Note that for 0<ϵ'<ϵ, it holds _n+ln^δ^σ_n^δ(X_n,ϵ^δ≥ X_n,ϵ'^δ) ≤ P_n,ϵ'^δ. Also note that it holds _n+ln^δ^σ^δ_n([c]≥ n^1+ϵ-ln^δ )≤ P_n,ϵ'^δ for each 0<ϵ'<ϵ and for all sufficiently large n, as for sufficiently large n, if the counter reaches n^1+ϵ-ln^δ then it had to previously reach n^1+ϵ', since n^1+ϵ' grows asymptotically slower than n^1+ϵ-ln^δ.
Now we shall use the following Lemma that is proven in the Appendix <ref>.
For each 0<δ<ϵ<1, it holds lim_n→∞ P_n,ϵ^δ=0.
Note that this already implies Lemma <ref>.
To show also Lemma <ref>, let us write
_n+ln^δ^σ_n^δ(≥ n^2+ϵ) ≤_n+ln^δ^σ_n^δ(X_n,ϵ^δ≥ n^2+ϵ) +P_n,ϵ^δ
and for any 0<ϵ'<ϵ
_n+ln^δ^σ_n^δ(X_n,ϵ^δ≥ n^2+ϵ)
≤_n+ln^δ^σ_n^δ(X_n,ϵ'^δ≥ n^2+ϵ)+_n+ln^δ^σ_n^δ(X_n,ϵ^δ≥ X_n,ϵ'^δ)
≤_n+ln^δ^σ_n^δ(X_n,ϵ'^δ≥ n^2+ϵ)+P_n,ϵ'^δ
and from Markov inequality we get _n+ln^δ^σ_n^δ(X_n,ϵ'^δ≥ n^2+ϵ) ≤an^2+2ϵ'+δ/n^2+ϵ,
which gives us _n+ln^δ^σ_n^δ(≥ n^2+ϵ) ≤_n+ln^δ^σ_n^δ(X_n,ϵ'^δ≥ n^2+ϵ)+P_n,ϵ'^δ +P_n,ϵ^δ≤an^2+2ϵ'+δ/n^2+ϵ +P_n,ϵ'^δ +P_n,ϵ^δ.
As this holds for each 0<ϵ'<ϵ, if we put ϵ'=(ϵ-δ)/4, then 2ϵ'+δ=(ϵ-δ)/2+2δ/2=(ϵ+δ)/2<ϵ since δ<ϵ, and therefore lim_n→∞an^2+2ϵ'+δ/n^2+ϵ=0. Therefore it holds
lim_n→∞_n+ln^δ^σ_n^δ(≥ n^2+ϵ) ≤lim_n→∞ (an^2+2ϵ'+δ/n^2+ϵ +P_n,ϵ'^δ +P_n,ϵ^δ )=0.
§.§ Proof of Lemma <ref>
[<ref>]
Let be a one-dimensional VASS MDP, and let X_1,X_2,… be random variables s.t. each X_i corresponds to the effect of a path on some unbounded-zero BSCC 𝔹 of _σ for some σ∈Σ_, that starts in some state p of 𝔹 and terminates with probability 1/2 every time p is reached again. Let S_0,S_1,… be defined as S_0=0, S_i=S_i-1+X_i, and τ_n be a stopping time such that either S_τ_n≤ -n or S_τ_n≥ n. Then it holds (τ_n)∈(n^2).
Let us begin by showing the following technical result.
Let be a one-dimensional VASS MDP. Let 𝔹 be an unbounded-zero BSCC of _σ for a strategy σ∈Σ_, and let p be a state of 𝔹. Let X denote the random variable representing the effect of a path under σ initiated in p, that ends with probability 1/2 every time the computation returns to p. Then there exists a function m:ℕ→ℕ such that m(n)≥ 2n, m∈(n), and such that for all sufficiently large n we get for
X' = m(n) if X≤ -2n or X≥ m(n), and X' = X otherwise,
that it holds ^σ_p(X')≥ 0 and Var^σ_p(X')≥ a for some a>0 that does not depend on n.
Since 𝔹 is unbounded-zero, there exist both a positive and a negative cycle on 𝔹. Therefore there exists some a>0 such that ^σ_p(X≤ -i)≥ a^i. Also, X is unbounded both from above and from below. As within every |Q| steps there is a probability, bounded from below by a constant, of terminating within the next |Q| steps, it holds for each i>0 that ^σ_p(|X|>i)≤ b^i for some b<1. Therefore also ^σ_p(X≥ i)≤ b^i. We claim the lemma holds for any m(n)≥ 4nlog_b a.
It holds
∑_i=m(n)^∞ i_p^σ(X=i) ≤∑_i=m(n)^∞ i_p^σ(X≥ i) ≤∑_i=m(n)^∞ ib^i=b^m(n)(-bm(n)+b+m(n))/(b-1)^2.
And if we put in the value m(n)=x4nlog_b a, for x≥ 1 we obtain b^m(n)(-bm(n)+b+m(n))/(b-1)^2 = a^x4n(-bx4nlog_b a+b+x4nlog_b a)/(b-1)^2.
And furthermore, m(n)(_p^σ(X≥ m(n))+_p^σ(X≤ -2n)) ≥ m(n)_p^σ(X≤ -2n) ≥ m(n)a^2n = a^2nx4nlog_b a.
Also, as a<1, it holds for all sufficiently large n that a^x4n(-bx4nlog_b a+b+x4nlog_b a)/(b-1)^2 < a^2nx4nlog_b a.
Therefore it holds ^σ_p(X') = m(n)(_p^σ(X≥ m(n))+_p^σ(X≤ -2n)) +∑_i=-2n+1^m(n)-1 i_p^σ(X=i) ≥∑_i=m(n)^∞ i_p^σ(X=i) + ∑_i=-2n+1^m(n)-1 i_p^σ(X=i).
And it also holds 0=^σ_p(X) = ∑_i=-∞^∞ i_p^σ(X=i) = ∑_i=-∞^-2n i_p^σ(X=i) + ∑_i=-2n+1^m(n)-1 i_p^σ(X=i) + ∑_i=m(n)^∞ i_p^σ(X=i) ≤∑_i=m(n)^∞ i_p^σ(X=i) + ∑_i=-2n+1^m(n)-1 i_p^σ(X=i).
And therefore
^σ_p(X')≥∑_i=m(n)^∞ i_p^σ(X=i) + ∑_i=-2n+1^m(n)-1 i_p^σ(X=i)≥ 0
For the part about Var(X'). Since 𝔹 is unbounded-zero, it holds that ^σ_p(X)^2=Var^σ_p(X)≥ y>0 for some y. Therefore it holds for each n that
0<y≤^σ_p(X^2 )= ∑_i=1^∞ i^2_p^σ(|X|=i) = ∑_i=1^m(n) i^2_p^σ(|X|=i) +∑_i=m(n)^∞ i^2_p^σ(|X|=i) ≤∑_i=1^m(n) i^2_p^σ(|X|=i) +∑_i=m(n)^∞ i^2 _p^σ(|X|≥ i) ≤∑_i=1^m(n) i^2_p^σ(|X|=i) +∑_i=m(n)^∞ i^2b^i.
And
∑_i=m(n)^∞ i^2b^i =
b^m(n) (m^2(n) (-b^2) + 2 m^2(n) b - m^2(n) + 2 m(n) b^2 - 2 m(n) b - b^2 - b)/(b - 1)^3
But this fraction is dominated by b^m(n), which decreases exponentially in n (as b<1). Therefore for all sufficiently large n it holds y/2 ≤∑_i=1^m(n) i^2_p^σ(|X|=i). But this gives us ^σ_p((X')^2)≥∑_i=1^m(n) i^2_p^σ(|X|=i)≥ y/2 for each m(n)≥ n and all sufficiently large n.
Let us now restate the Lemma <ref>.
[<ref>]
Let be a one-dimensional VASS MDP, and let X_1,X_2,… be random variables s.t. each X_i corresponds to the effect of a path on some unbounded-zero BSCC 𝔹 of _σ for some σ∈Σ_, that starts in some state p of 𝔹 and terminates with probability 1/2 every time p is reached again. Let S_0,S_1,… be defined as S_0=0, S_i=S_i-1+X_i, and τ_n be a stopping time such that either S_τ_n≤ -n or S_τ_n≥ n. Then it holds (τ_n)∈(n^2).
As there are only finitely many BSCCs of _σ for σ∈Σ_, and each of them has only finitely many states, there are only finitely many distributions D_1,…,D_x such that each X_i≈ D_y for some 1≤ y ≤ x.
Let X_1^n,X_2^n,… be random variables such that
X_i^n = m(n) if X_i≤ -2n or X_i≥ m(n), and X_i^n = X_i otherwise,
where m(n)=an is the maximal value of m(n) obtained from Lemma <ref> for any unbounded-zero BSCC of any _σ for any σ∈Σ_, and a is some constant. Then it holds that (X_i^n)≥ 0, and there exists b>0 that does not depend on n such that ((X_i^n)^2)≥ b for each i.
Let S_0^n,S_1^n,… be a random walk defined as S_0^n=2n, S_i^n=S_i-1^n+X_i^n, and let τ_n' be a stopping time such that either S_τ_n'^n≤ 2n-n or S^n_τ_n'≥ 2n+n. Clearly it holds that τ_n'=τ_n, therefore it is enough to show that (τ_n')∈(n^2).
Let us proceed by showing the following.
Let M_i^n=(S_i^n)^2 - bi. Then M_0^n,M_1^n,… is a submartingale.
(M^n_i+1| X_i^n,…,X_1^n) = ((S_i+1^n)^2 -b(i+1) | X_i^n,…,X_1^n) = ((S_i^n+X_i+1^n)^2 -b(i+1) | X_i^n,…,X_1^n)
= ((S_i^n)^2+2S_i^nX_i+1^n + (X_i+1^n)^2 -b(i+1) | X_i^n,…,X_1^n)
= (S_i^n)^2+2S_i^n(X_i+1^n| X_i^n,…,X_1^n) + ((X_i+1^n)^2| X_i^n,…,X_1^n) -b(i+1)
≥ (S_i^n)^2+0 + b -b(i+1) = (S_i^n)^2 +b - bi -b = (S_i^n)^2 - bi = M_i^n.
As it holds that (τ_n')<∞, from the optional stopping theorem we obtain
(M_0^n)≤(M^n_τ_n') which can be rewritten as (2n)^2 ≤((S^n_τ_n')^2 -bτ_n') =((S_τ_n'^n)^2) -b(τ_n').
As it holds (S^n_τ_n')^2≤ (3n+m(n))^2= (3n+an)^2= (9+6a+a^2)n^2 this gives us 4n^2+b(τ_n')≤((S^n_τ')^2)≤ (9+6a+a^2)n^2 and so (τ_n')≤(5+6a+a^2)n^2/b∈(n^2).
§.§ Proof of Lemma <ref>
[<ref>]
For each 0<δ<ϵ<1, it holds lim_n→∞ P_n,ϵ^δ=0.
Assume there exist some 0<δ<ϵ such that lim sup_n→∞ P_n,ϵ^δ=a>0. Then for each n_0, there exists n>n_0 such that P_n,ϵ^δ>a/2. Most notably, this means that the effect of the path α in σ_n^δ (see definition of σ_n^δ) is at least n^1+ϵ-ln^δ with probability at least a/2. But as α can be equally seen as a path under σ_n, this means that also the strategy σ_n reaches effect n^1+ϵ-ln^δ before the effect -n with probability R_n,ϵ^δ≥ a/2, for infinitely many n.
Let type β_n be some type with the largest (β) among all types β, such that with probability at least a/2 σ_n reaches the effect at least n^1+ϵ-ln^δ before the effect -n conditioned on the computation following β. If the length of β_n depended on n, then, as the probability of all long types decreases exponentially fast with their length, it could not hold that R_n,ϵ^δ>a/2 for arbitrarily large n. Therefore there must exist infinitely many values n_1,n_2,… such that β_n_1=β_n_2=…, let us denote this type by β=M_1,…,M_x (i.e., β=β_n_1).
This means that n is not an upper estimate of [c] for type β. But in the next Lemma we are going to show that n is an upper estimate of [c] for type β, thus showing a contradiction.
For each 0<ϵ_1 there exists 0<ϵ_2 and 0<b such that
^σ_n_p ([c] ≥ n^1+ϵ_1|=β )≤ bn^-ϵ_2
for each state p of M_1.
We are going to do an induction over 1≤ i≤ x.
Base case: i=1, then from Lemma <ref> we have
^σ_n_p ([c] ≥ n^1+ϵ_1|=M_1)≤ bn^-ϵ_1
for some constant b and for each 0<ϵ_1.
Induction step: Assume this holds for i<x, let us now show it holds for i+1 as well.
From induction assumption we have that for each 0<ϵ_1' there exists 0<ϵ_2' and 0<b' such that ^σ_n_p ([c] ≥ n^1+ϵ_1'|= M_1,…,M_i )≤ b'n^-ϵ_2' . Therefore, when the computation reaches M_i+1, the counter is larger than n^1+ϵ_1' with probability at most b'n^-ϵ_2'. As such we can express for each 0<ϵ_1'<ϵ
^σ_n_p ([c] ≥ n^1+ϵ|= M_1,…,M_i+1 )
≤^σ_n_p ([c] ≥ n^1+ϵ_1'|= M_1,…,M_i )+ ∑_r∈ M_i+1 P_r ^σ_n^r_r n^1+ϵ_1'([c] ≥ n^1+ϵ|= M_i+1)
≤^σ_n_p ([c] ≥ n^1+ϵ_1'|= M_1,…,M_i )+ ^σ_n^q_q n^1+ϵ_1'([c] ≥ n^1+ϵ|= M_i+1),
where σ_n^r is the strategy which computes as if σ_n after the path from p to r for each r being a state of M_i+1, P_r=^σ_n_p({α|the first state of M_i+1 in α is r}), and q is the state of M_i+1 such that for each state r of M_i+1 it holds
^σ_n^r_r n^1+ϵ_1'([c] ≥ n^1+ϵ|= M_i+1)≤^σ_n^q_q n^1+ϵ_1'([c] ≥ n^1+ϵ|= M_i+1)
But from Lemma <ref> we have that ^σ_n^q_q n^1+ϵ_1'([c] ≥ n^1+ϵ|= M_i+1) = ^σ_n^q_q n^1+ϵ_1'([c] ≥ (n^1+ϵ_1')^log_n^1+ϵ_1'n^1+ϵ|= M_i+1) ≤ b(n^1+ϵ_1')^1-log_n^1+ϵ_1'n^1+ϵ.
Let y=1-log_n^1+ϵ_1'n^1+ϵ, note that y<0 since ϵ_1'<ϵ. Then we can write ^σ_n_p ([c] ≥ n^1+ϵ|= M_1,…,M_i+1 ) ≤ b'n^-ϵ_2'+ bn^y,
which for each ϵ_1>ϵ gives ^σ_n_p ([c] ≥ n^1+ϵ_1|= M_1,…,M_i+1 ) ≤^σ_n_p ([c] ≥ n^1+ϵ|= M_1,…,M_i+1 ) ≤ b'n^-ϵ_2'+ bn^y.
And for ϵ_2=min (ϵ_2',-y) and b̂=max(2b',2b) this gives us ^σ_n_p ([c] ≥ n^1+ϵ_1|= M_1,…,M_i+1 ) ≤ b'n^-ϵ_2'+ bn^y≤b̂n^-ϵ_2,
thus the induction step holds.
§.§ Proof of Lemma <ref>
[<ref>]
An energy MDP has a safe configuration iff there exists a non-decreasing BSCC of _σ for some σ∈Σ_.
The ⇐ direction is trivial. For the ⇒ direction, assume the opposite. Then there exists a safe configuration and a strategy such that the counter never decreases below some bound. But then, from Lemma <ref>, we can view the strategy as choosing which of the finitely many Markov chains is to advance at each step. And since there is no non-decreasing BSCC of _σ for any σ∈Σ_, each of these Markov chains contains a negative cycle. Therefore, after at most a bounded number of steps the counter has a non-zero probability of decreasing, and thus it cannot stay bounded from below over the entire computation.
§.§ Proof of Lemma <ref>
[<ref>]
The problem whether there exists a non-decreasing BSCC of _σ for some σ∈Σ_ that contains a given state p ∈ Q is NP-complete.
This problem being in NP is easy, as we simply have to guess a BSCC of some MD strategy and then verify that it contains no negative cycle while containing p. For the NP-hardness, let us show a reduction from the NP-complete problem of deciding whether a given graph G contains a Hamiltonian cycle.
Let G=(V,E) be the graph for which we want to decide existence of a Hamiltonian cycle, and let p∈ V be one of it's vertices.
Let be a 1-dimensional VASS MDP whose set of states is V, where all states are nondeterministic, and whose set of transitions T is such that whenever there is an edge {q,r}∈ E with q≠ p≠ r, then contains the transitions (q,+1,r),(r,+1,q), and for each edge {p,q}∈ E, contains the transitions (q,+1,p),(p,-|V|+1,q).
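For concreteness, this construction can be sketched as follows; the graph and transition encodings are our own illustrative choices, not notation from the paper.

def build_vass_from_graph(V, E, p):
    """Build the 1-dimensional VASS MDP of the reduction from an undirected
    graph G=(V,E) and a distinguished vertex p; a transition is a
    (source, counter_update, target) triple and all states are nondeterministic."""
    states = set(V)
    transitions = set()
    for (q, r) in E:
        if q != p and r != p:
            transitions.add((q, +1, r))
            transitions.add((r, +1, q))
        else:
            other = r if q == p else q
            transitions.add((other, +1, p))              # entering p gains +1
            transitions.add((p, -(len(V) - 1), other))   # leaving p costs -|V|+1
    return states, transitions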
We now claim that G contains a Hamiltonian cycle iff there exists a non-decreasing BSCC 𝔹 of _σ for some σ∈Σ_ such that 𝔹 contains p.
First, suppose a Hamiltonian cycle α=p_1,t_1,p_2,…,p_l,t_l,p_1 exists. Then for the MD strategy σ(p_j)=t_j, _σ surely contains exactly one BSCC that contains p, and this BSCC contains exactly one cycle, whose effect is 0. Thus it is non-decreasing.
Now suppose there exists a non-decreasing BSCC 𝔹 of _σ for some σ∈Σ_ such that 𝔹 contains p. Then since the effect of every outgoing transition of p is -|V|+1, the effect of every other transition is +1, and 𝔹 contains no negative cycles, there must be at least |V| transitions in 𝔹. But as σ is an MD strategy, there can be at most one transition per state, and so 𝔹 must contain every single state of G. But this means that the computation under σ follows a Hamiltonian cycle.
The problem whether there exists a bounded-zero BSCC of _σ for some σ∈Σ_ is NP-complete for general one-dimensional VASS MDPs.
This follows from the proof of the previous lemma above, as any bounded-zero BSCC of _σ for some σ∈Σ_ in the VASS MDP constructed for the graph G must contain p, while the BSCC associated with the strategy obtained from a Hamiltonian cycle is bounded-zero.
|
http://arxiv.org/abs/2307.04319v1 | 20230710032047 | New Variants of Frank-Wolfe Algorithm for Video Co-localization Problem | [
"Hamid Nazari"
] | cs.CV | [
"cs.CV",
"math.OC"
] |
FW Variants in Video Co-Localization
Clemson University, Clemson, SC
New Variants of Frank-Wolfe Algorithm for Video Co-localization Problem
Hamid Nazari
=======================================================================
The co-localization problem is a model that simultaneously localizes objects of the same class within a series of images or videos. In <cit.>, the authors present new variants of the Frank-Wolfe algorithm (aka conditional gradient) that increase the efficiency of solving the image and video co-localization problems. The authors show the efficiency of their methods via the rate of decrease of a value called the Wolfe gap in each iteration of the algorithm. In this project, inspired by the conditional gradient sliding algorithm (CGS) <cit.>, we propose algorithms for solving such problems and demonstrate their efficiency through numerical experiments. The efficiency of these methods with respect to the Wolfe gap is compared by implementing them on the YouTube-Objects dataset for videos.
§ IMAGE AND VIDEO CO-LOCALIZATION PROBLEMS
Problems in recognizing and localizing particular objects in images and videos have received much attention recently as internet photo and video sharing have become increasingly popular.
Co-localization involves localizing the common object with bounding boxes across a set of images, or across a video viewed as a sequence of images (frames).
§ MODEL SETUP FOR IMAGES
Our ultimate goal is to localize the common object in a set of images or in a series of frames of a video. Here we first give a brief review of the image and video models based on the formulation in <cit.>. To this end, we review the required background at each step, as far as is needed to make the features and variables in the mathematical programming model understandable. Note that this formulation is based on the formulation introduced in <cit.> for image co-localization. The quadratic formulation that we review in this section localizes objects in any set of images and videos simultaneously. Similar discrete optimization approaches can also be found in <cit.> for various computer vision applications.
§.§ Objectness for Images
Suppose that we have a set ℐ = {I_1, I_2, …, I_n} of n given images, and our goal is to localize the common object in each image. One approach is to find candidate boxes in each image that potentially contain an object using objectness <cit.>.
While object detectors for images are usually specialized for one object class such as cars, airplanes, cats, or dogs, objectness quantifies how likely it is for an image window to cover an object of any class. In an image, objects such as cats, dogs, and chairs have a well-defined boundary and center, as opposed to indefinite background such as walls, sky, grass, and road. Figure <ref> illustrates the desired behavior of an objectness measure. Green windows, which fit an object tightly, should score highest; blue windows, which cover an object only partly together with some background, should score lower; and red windows, which contain only background, should score lowest. This approach, and the way the windows are scored, is designed in <cit.> and explicitly trained to distinguish windows containing an object from background windows.
Using objectness, we generate m candidate boxes (e.g. green boxes in Figure <ref>) for each image that could potentially contain an object. In other words, for j∈{1,2,…,n} we define ℬ_j to be the set of all boxes in image I_j∈ℐ. Then the goal is to select, jointly, the box that contains the object from each image. Also, for simplicity, let ℬ = ℬ_1 ∪ℬ_2 ∪⋯∪ℬ_n and let n_b = nm be the total number of boxes in all images.
§.§ Feature representation
Assume that we have determined m candidate boxes in each of two different images I_i and I_j for any i,j∈{1,2,…, n}. A common object in I_i and I_j might differ in shape, scale, color, brightness, angle, and many other features. Therefore, it is critical to extract distinctive invariant features from images that can be used to perform reliable matching between different views of an object. David G. Lowe in <cit.> introduces a method that finds features that are invariant to image scaling and rotation, and partially invariant to change in illumination and 3D camera viewpoint. Using his method, a large number of features can be extracted from typical images with efficient algorithms, while the cost of extracting these features is minimized. The major stages of computation used to generate the set of image features are as follows.
* Scale-space extrema detection: The first stage of computation searches over all scales and image locations. It is implemented efficiently by using a difference-of-Gaussian function to identify potential interest points that are invariant to scale and orientation.
* Keypoint localization:
At each candidate location, a detailed model is fit to determine location and scale. Keypoints are selected based on measures of their stability.
* Orientation assignment:
One or more orientations are assigned to each keypoint location based on local image gradient directions. All future operations are performed on image data that has been transformed relative to the assigned orientation, scale, and location for each feature, thereby providing invariance to these transformations.
* Keypoint descriptor:
The local image gradients are measured at the selected scale in the region around each keypoint. These are transformed into a representation that allows for significant levels of local shape distortion and change in illumination.
This process is called Scale Invariant Feature Transform (SIFT). SIFT transforms image data into scale-invariant coordinates relative to local features. Using SIFT we can generate large numbers of features that densely cover the image over full range of scales and locations.
Let b_k be a box in ℬ. Then we denote the SIFT feature representation of b_k by x_k∈^d, where d = 10,000 is the dimension of the feature descriptor for each box in ℬ. Finally, we stack the feature vectors to form a feature matrix X∈^n_b× d.
§.§ Prior, Similarity, and Discriminability of boxes
Let us denote the boxes that contain an instance of the common object as positive boxes, and the ones that do not as negative boxes. Then a prior is introduced for each box that represents a score for how likely the box is to be positive. This is done using a saliency map <cit.> for each box; the prior is in fact the average saliency within the box, weighted by the size of the box. Finally, we stack these values into the n_b-dimensional vector m⃗ as the prior vector.
In addition, boxes that have a similar appearance should be labeled the same. This is handled through a matrix called the similarity matrix, denoted by S. The similarity matrix of the boxes in ℬ is based on the box feature matrix X described above. Let b_i and b_j be any two boxes in ℬ, where i,j∈{1,2,…,n_b}. Then the similarity matrix S∈^n_b× n_b is computed based on the χ^2-distance as
S_ij = exp-γ∑_k=1^d(x_ik - x_jk)^2/x_ik + x_jk,
where γ = (10d)^-1/2. For i and j where boxes b_i and b_j belong to the same image we set S_ij=0. Then the normalized Laplacian matrix <cit.> is computed as
ℒ = I_n_b - D^-1/2SD^-1/2,
where D is the diagonal matrix composed of row sums of S.
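A small numpy sketch of how the similarity matrix S and the normalized Laplacian ℒ above could be computed is given below. The double loop over box pairs is written for clarity rather than speed, and the boolean mask same_image marking box pairs from the same image is an assumed input.

import numpy as np

def box_similarity_and_laplacian(X, same_image):
    """X: (n_b, d) box feature matrix; same_image: (n_b, n_b) boolean mask."""
    n_b, d = X.shape
    gamma = (10.0 * d) ** (-0.5)
    S = np.zeros((n_b, n_b))
    for i in range(n_b):
        for j in range(i + 1, n_b):
            num = (X[i] - X[j]) ** 2
            den = X[i] + X[j] + 1e-12              # guard against zero denominators
            S[i, j] = S[j, i] = np.exp(-gamma * np.sum(num / den))
    S[same_image] = 0.0                             # no similarity within one image
    deg = S.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(n_b) - D_inv_sqrt @ S @ D_inv_sqrt   # normalized Laplacian
    return S, L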
§.§ Model Formulation
Associated with each box b_j,k∈ℬ_j we define a binary variable z_j,k where z_j,k=1 when b_j,k is a positive box (contains an instance of the common object) and 0 otherwise. Then we define the integer vector variable
z⃗ = (z_1,1,…,z_1,m, …, z_n,1,…, z_n,m)^T∈{0,1}^n_b.
Making the assumption that in each image there exist at most 1 positive box, our set of constraints are define by
∑_k = 1^m z_j,k = 1, ∀ j ∈{1,…, n}.
As we introduced a prior for each box and defined the n_b dimensional vector of average saliency within the boxes, we obtain a linear term that penalizes less salient boxes as part of the objective function:
f_p(z⃗) := -z⃗^Tlog(m⃗).
Similarly, our choice of normalized Laplacian matrix ℒ defined in (<ref>) results in a quadratic term that handles the selection of similar boxes:
f_L(z⃗) := z⃗^Tℒz⃗.
This is motivated by the work of Shi and Malik <cit.> in which they have taken advantage of eigenvalues of the Laplacian for clustering z⃗ by the similarity matrix. In fact, they have shown that with the eigenvector corresponding to the second smallest eigenvalue of a normalized Laplacian matrix we can cluster z⃗ along the graph defined by the similarity matrix, leading to normalized cuts when used for image segmentation. Also, Belkin and Niyogi <cit.> showed that this problem is equivalent to minimizing (<ref>) under linear constraints. In fact, the similarity term works as a generative term which selects boxes that cluster well together <cit.>.
Although discriminative learning techniques such as support vector machines and ridge regression have been widely used on supervised problems in which the labels are known, they can also be used in this unsupervised case where the labels of the boxes are unknown <cit.>. Motivated by <cit.>, we consider the ridge regression objective function for boxes:
min_w∈^d, c∈ 1/n_b∑_j=1^n∑_k=1^m (z_j,k-w^Tx_j,k - c)^2 + κ/dw_2^2,
where w is the d-dimensional weight vector of the classifier, and c is the bias. This cost function is used among discriminative cost functions because the ridge regression problem has an explicit (closed-form) solution for the weights w and bias c, which implies a quadratic function in the box labels <cit.>:
f_D(z⃗):=z⃗^T𝒜z⃗,
where
𝒜= 1/n_bΠ_n_b(I_n_b-X(X^TΠ_n_bX+n_bκ I_d)^-1X^T)Π_n_b,
is the discriminative clustering term and Π_n_b = I_n_b - 1/n_b1⃗_n_b1⃗_n_b^T in (<ref>) is the centering projection matrix. Note that this quadratic term allows us to utilize a discriminative objective function to penalize the selection of boxes whose features are not easily linearly separable from other boxes.
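Assuming the parenthesization of 𝒜 given above, the discriminative term could be assembled as in the following numpy sketch; the variable names are ours and the routine is illustrative rather than an exact reproduction of the authors' implementation.

import numpy as np

def discriminative_term(X, kappa):
    """X: (n_b, d) box feature matrix; kappa: ridge regularization weight."""
    n_b, d = X.shape
    Pi = np.eye(n_b) - np.ones((n_b, n_b)) / n_b          # centering projection
    M = X.T @ Pi @ X + n_b * kappa * np.eye(d)            # d x d regularized system
    inner = np.eye(n_b) - X @ np.linalg.solve(M, X.T)     # I - X M^{-1} X^T
    return (1.0 / n_b) * Pi @ inner @ Pi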
Summing up our results in (<ref>), (<ref>), (<ref>), and (<ref>), the optimization problem to select the best box in each image is given by
min_z⃗ z⃗^T(ℒ+μ𝒜)z⃗ - λ z⃗^Tlog(m⃗)
s.t ∑_k = 1^m z_j,k = 1, j=1,…, n
z⃗ = (z_1,1,…,z_1,m, …, z_n,1,…, z_n,m)^T∈{0,1}^n_b,
where the parameter μ regularizes the trade-off between the quadratic terms (<ref>) and (<ref>), and the parameter λ handles the trade-off between the linear term (<ref>) and the quadratic terms (<ref>) and (<ref>). Recall that the linear constraints ensure that one box from each image is selected in the optimal solution. Note that Hastie, Tibshirani, and Friedman in <cit.> showed that 𝒜 is a positive semi-definite matrix. Also, since the matrix ℒ is positive semi-definite as well, the objective function of (<ref>) is convex.
§ MODEL SETUP FOR VIDEOS
Co-localization in a video is very similar to the image case, as a video is a sequence of images called frames. Since an object is unlikely to change drastically in size, shape, or color between two consecutive frames, co-localization in a video can in some respects be a simpler task. In this section we describe the localization of a common object in a set of videos. In fact, if 𝒱 = {V_1, V_2, …, V_n} is a set of n given videos, we explore an approach to localize a common object in each frame of each video. More precisely, we consider ℐ_i = {I_i1, I_i2, …, I_il_i} to be the temporally ordered set of frames of video V_i. Here I_ij is the j-th frame of the i-th video and l_i is the total number of frames, or the length, of V_i, for i=1,…,n and j=1,…, l_i. Similar to what we did in the image case, we set ℬ_i,j to be the set of m candidate boxes generated using objectness <cit.> for the j-th frame of the i-th video. Then, considering l_i frames in video i and m boxes in each frame, we set n_b^v = ∑_i=1^n l_im to be the total number of boxes in 𝒱, the set of all videos.
Note that if we set ℐ = {ℐ_1, ℐ_2,…, ℐ_n} to be the ordered set of all frames in 𝒱, model (<ref>) returns a single box in each frame (image) as an optimal solution. The objective function of this model captures the box prior, similarity, and discriminability across different videos; however, we can define a more effective similarity measure between boxes in the sequence of frames of a video.
§.§ Temporal Consistency In Frames of a Video
As discussed earlier in this section, objects in consecutive frames of video data are less likely to change drastically in appearance, position, and size. This motivates the use of a separate prior for the frames in the video case. Temporal consistency <cit.> is a powerful prior that is often leveraged in video tasks such as tracking <cit.>. In this approach, boxes in consecutive frames with a great difference in size and position should be unlikely to be selected together. To this end, a simple temporal similarity measure is defined between two boxes b_i and b_j from consecutive frames with:
s_temporal(b_i, b_j) := exp(-b_i^center - b_j^center_2 - (b_i^area - b_j^area)/max(b_i^area , b_j^area)_2).
A few comments are in order about the prior defined in (<ref>). First, b_i^area is the pixel area of box b_i and b_i^center is the vector of the center coordinates of box b_i, normalized by the width and height of the frame. Second, the metric defined in (<ref>) is a similarity metric that is defined between all pairs of boxes in adjacent frames. From this metric we can define a weighted graph 𝒢_i for video 𝒱_i, for i = 1,2, …, n, with nodes being the boxes in each frame, edges connecting boxes in consecutive frames, and edge weights given by the temporal similarity in (<ref>). Figure <ref> is a graphical representation of the graph 𝒢_i. When the similarity measure falls below some threshold, we disconnect the nodes and remove the edge. Finally, as long as we can create a weighted graph over the boxes, any similarity measure other than the temporal consistency in (<ref>) can be used to weight the edges between two boxes, which makes the temporal framework quite flexible.
Let us define
S_t(i,j) = s_temporal(b_i, b_j) if boxes b_i and b_j lie in adjacent frames, and S_t(i,j) = 0 otherwise,
to be the similarity matrix defined by the temporal similarity measure, where b_i and b_j are any two boxes in the set of all boxes in 𝒱. Similar to our approach to obtain (<ref>), with S_t we can compute the normalized Laplacian
U = I_n_b^v - D^-1/2S_tD^-1/2,
where D is the diagonal matrix composed of the row sums of S_t. This matrix encourages us to select boxes that are similar based on the temporal similarity metric (<ref>).
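A sketch of the temporal similarity and the resulting temporal Laplacian U is given below; the box representation (a normalized center and a pixel area) and the adjacency test are assumptions made only for this illustration.

import numpy as np

def temporal_similarity(b_i, b_j):
    """Boxes are dicts with a normalized 'center' (2-vector) and a scalar 'area'."""
    center_dist = np.linalg.norm(np.asarray(b_i["center"]) - np.asarray(b_j["center"]))
    area_term = abs(b_i["area"] - b_j["area"]) / max(b_i["area"], b_j["area"])
    return float(np.exp(-center_dist - area_term))

def temporal_laplacian(boxes, in_adjacent_frames, threshold=1e-3):
    n = len(boxes)
    S_t = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and in_adjacent_frames(i, j):
                s = temporal_similarity(boxes[i], boxes[j])
                S_t[i, j] = s if s >= threshold else 0.0   # drop weak edges
    deg = S_t.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    return np.eye(n) - D_inv_sqrt @ S_t @ D_inv_sqrt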
§.§ Video Model Formulation
As we discussed above, temporal similarity suggests a weighted graph 𝒢_i for video 𝒱_i for i=1,2,…,n. In fact, a valid path in 𝒢_i from the the first to the last frame in 𝒱_i corresponds to feasible boxes chosen in each frame of 𝒱_i. This motivates us to define a binary variable to be on when there is an edge between any two nodes in 𝒢_i and off otherwise. In better words, we define the binary variable y_i,j,k for video i and boxes b_j and b_k in 𝒱_i as
y_i,j,k = {[ 1 if boxes b_j and b_k contain the common object; 0 otherwise. ].
In fact, the variable y_i,j,k corresponds to the existence of an edge between boxes b_j and b_k in 𝒱_i. Also, we define the binary variable z_i,j,k to be 1 if box b_k in frame j of video i contains the common object, and 0 otherwise. A constraint that we need to consider here is that an edge may exist between boxes b_j and b_k only if they are boxes in two consecutive frames. Then, for a typical box b_k in frame j of video 𝒱_i, we define the index sets p(k_j) and c(k_j) to be the sets of indices of the parent and child boxes in frames j+1 and j-1, respectively, that are connected to b_k in frame j in the graph 𝒢_i. Therefore, a required set of constraints for localization in the video case is defined by:
z_i,j,k = ∑_l∈ p(k_j) y_i,l,k_j = ∑_l∈ c(k_j)y_i,k_j,l, i = 1,…, n, j=1,…,l_i, k=1,…,m.
The other set of constraints, which is quite similar to the image co-localization case, restricts each frame of each video to have only one box that contains the common object. These constraints are defined by:
∑_k = 1^m z_i,j,k = 1, i=1,2,…,n, j = 1,2,…, l_i.
Finally, we define the vectors of variables
z⃗ = (z_1,1,1,z_1,1,2, …, z_i,j,k, …, z_n,l_n,m)^T∈{0,1}^n_b^v
where n_b^v = m∑_i=1^nl_i. Then if we combine the temporal terms defined by (<ref>) with the terms in the objective function of the original image model (<ref>), then with constraint defines in (<ref>) and (<ref>), we obtain the following optimization formulation to select the box containing the common object in each frame of video:
min_z⃗, y z⃗^T(L+μ A + μ_t U)z⃗ - λ z⃗^Tlog(m⃗)
s.t. ∑_k = 1^m z_i,j,k = 1, i=1,2,…,n, j = 1,2,…, l_i,
z_i,j,k = ∑_l∈ p(k_j) y_i,l,k_j = ∑_l∈ c(k_j)y_i,k_j,l
i = 1,…, n, j=1,…,l_i, k_j=1,…,m,
y_i,s,t∈{0,1}, i = 1,…,n, s,t = 1,…,m
z⃗=(z_1,1,1,z_1,1,2, …, z_i,j,k, …, z_n,l_n,m)^T ∈{0,1}^n_b^v,
where μ_t is the trade-off weight for the temporal Laplacian matrix. Note that with the new objective function in problem (<ref>), the extra constraint (<ref>) in the video case is necessary; without it, the temporal Laplacian matrix would lead the solution to an invalid path. This formulation allows us to incorporate temporal consistency into the image model.
§ OPTIMIZATION
The formulation (<ref>) obtained to find the best box in each image of the set of given images is a standard binary constrained quadratic problem. The only issue that makes this problem non-convex is the set of binary constraints. Relaxing these constraints to continuous linear constraints turns the problem into a convex optimization problem that can be solved efficiently using standard methods. In fact, first order methods such as the Frank-Wolfe method that we discussed in previous chapters can handle the relaxed problem efficiently, as they linearize the quadratic objective function and use a linear optimization oracle in each iteration.
Denoting the feasible region of the problem (<ref>) by 𝒫, we can follow a similar approach for this problem as we did for (<ref>). We can relax the discrete non-convex set 𝒫 into its convex hull conv(𝒫), which in this specific case is the integer hull. Although standard algorithms such as interior point methods can be applied to solve this problem, as the number of videos grows to hundreds and the dimension of the problem increases exponentially, such methods, whose complexity is 𝒪(N^3) in the number of boxes, perform very weakly. Similarly, for the relaxation of the video problem we will show in our implementations section that the suggested first order methods perform efficiently. We will also propose a first order method later in this chapter and will show that it performs better than the other first order methods that have been applied to this problem.
Note that the constraints defining the set 𝒫 are separable across videos. In fact, for each video, these constraints are equivalent to the constraints of a shortest-path problem. This implies that the linear optimization steps appearing in each iteration of the first order methods are actually shortest-path problems that can be solved efficiently using dynamic programming.
Recall that the Frank-Wolfe algorithm is a first order method that in each of its iterations updates the current point along a direction obtained by calling a linear optimization oracle. The objective function of this linear optimization is in fact a linear approximation of the objective function of (<ref>) and (<ref>). The Frank-Wolfe algorithm specifically results in simple linearizations with integer solutions for the image and video co-localization optimization problems. For the image model, the linearized cost function is separable for each image, and we can efficiently find the best integer solution with some threshold for this problem. For the video model, the cost function and the constraints are separable for each video, and optimizing the linearized function over the feasible region results in a shortest-path problem for each video. A sketch of this per-video oracle is given below.
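The per-video oracle can be sketched as a standard dynamic program over the frame graph; the data layout (frames as lists of box ids, an edge predicate, and a per-box linear cost taken from the current gradient) is an assumption made for illustration.

def min_cost_box_path(frames, edge, cost):
    """Return the list of selected boxes (one per frame) with minimum total cost.
    frames: list of lists of box ids; edge(u, v): True if box u in frame j may be
    followed by box v in frame j+1; cost: dict mapping box id to its linear cost."""
    best = {b: cost[b] for b in frames[0]}      # best cost of a path ending at box b
    parents = [dict()]                           # parents[j][b] = predecessor of b
    for j in range(1, len(frames)):
        new_best, par = {}, {}
        for v in frames[j]:
            cands = [(best[u] + cost[v], u) for u in frames[j - 1]
                     if u in best and edge(u, v)]
            if cands:
                new_best[v], par[v] = min(cands)
        parents.append(par)
        best = new_best
    # backtrack from the cheapest box in the last frame
    v = min(best, key=best.get)
    path = [v]
    for j in range(len(frames) - 1, 0, -1):
        v = parents[j][v]
        path.append(v)
    return list(reversed(path))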
In the following section we propose an algorithm that can be applied efficiently to the image and video co-localization problems, and we compare its performance with that of the algorithms previously applied to these problems.
§ PROPOSED ALGORITHMS
The Conditional Gradient Sliding (CGS) algorithm <cit.> is a first-order, projection-free method for solving convex optimization problems in which the feasible region is a convex and compact set. The major advantage of the CGS algorithm is that it skips gradient evaluations from time to time and reuses the same information within some inner iterations. This property becomes helpful when the dimension of the problem, i.e., the size of the variable, is relatively large and computations become expensive.
As shown in previous chapters, the CGS algorithm and its proposed variant, Conditional Gradient Sliding with Linesearch (CGS-ls), perform very well in many practical instances. Although the CGS and CGS-ls algorithms outperform the Frank-Wolfe (FW) algorithm in many cases, the variants of FW, such as Away-steps FW or Pairwise FW <cit.>, converge faster to the optimal value than CGS for the image and video co-localization problem, as we will show in the numerical experiments later in this chapter.
Motivated by the CGS algorithm as well as by the Away-steps and Pairwise FW methods, we propose two algorithms, Away-steps Conditional Gradient Sliding (ACGS) and Pairwise Conditional Gradient Sliding (PCGS), that perform very well on the image and video co-localization problems. ACGS and PCGS retain the iterations of the CGS method, but the direction used to update the iterate is motivated by the away and pairwise steps of the Away-steps and Pairwise FW algorithms. We will also show that ACGS and PCGS outperform all FW variants applied to the image and video co-localization problem.
§.§ Away-Steps and Pairwise Conditional Gradient Sliding
The basic scheme of the ACGS and PCGS methods is obtained by allowing a new search direction within the CGS method whenever that direction leads to a smaller Wolfe gap. As in the CGS algorithm, the classical FW method (the ℱ𝒲 procedure) is incorporated to solve, approximately, the projection subproblems arising in the accelerated gradient (AG) scheme. The ACGS and PCGS algorithms are described in Algorithms <ref> and <ref>.
Note that the proposed algorithms are intended for the image and video co-localization problems (<ref>) and (<ref>). The objective functions of both problems, as discussed before, are convex, and the feasible region is a finite set of binary vectors, called atoms, in ℝ^d for some d. We denote this set by 𝒜 and its convex hull conv(𝒜) by ℳ. As 𝒜 is finite, ℳ is a polytope.
The first difference between ACGS (PCGS) and the CGS method is that ACGS (PCGS) maintains a set 𝒮^(k) of active atoms. This set keeps track of the atoms (integer points) of 𝒜 used to form the away direction d_k^away at each iteration, so that the current iterate y_k is the convex combination of the corners in 𝒮^(k) with weights α^(k). This direction, given in (<ref>), is defined by finding the atom v_k in 𝒮^(k) that maximizes the potential of descent ⟨-f'(y_k), y_k - v⟩. Note that obtaining v_k in (<ref>) is fundamentally easier than a full oracle call, since the linear optimization is only over 𝒮^(k), the typically small finite active set.
The second difference is in the way the step-size used to update the iterate is chosen. As shown in (<ref>), we incorporate a line search to obtain the step-size that yields the maximum reduction of the objective along a prespecified direction from the current point. With γ_max defined in (<ref>) and (<ref>) as the maximum step-size for the line-search step, the algorithm guarantees that the new iterate y_k+1 = y_k + γ_k d_k^away, with γ_k ≤ γ_max, stays feasible at every iteration. Note that the parameter γ_k in the CGS algorithm must be chosen appropriately to maintain feasibility at each iteration. Such choices are given in <cit.> as γ_k = 3/(k+2) and γ_k = 2/(k+1); in fact, these values can be used as the upper bound for γ_k in the CGS step (<ref>), instead of 1, in the line-search step (<ref>). Moreover, it is easy to check that for the special case of the image and video co-localization problems, where the objective is a convex quadratic function, γ_k in step (<ref>) has the closed form
γ_k = -d^T ∇ f(x) / (d^T Q d),
where Q ≽ 0 is the quadratic term of the objective. This value is clipped to 0 or γ_max whenever it falls outside the range [0, γ_max] required in (<ref>).
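As an illustration, a minimal sketch of this exact line-search step follows; the function and variable names are ours, and the zero-curvature fallback is our own safeguard rather than something specified in the text.

import numpy as np

def quadratic_line_search(grad, d, Q, gamma_max):
    """Exact line search for f(x + gamma*d) with f convex quadratic (Hessian Q >= 0).

    grad : gradient of f at the current point
    d    : search direction (FW, away, or pairwise direction)
    Returns the minimizing step clipped to the feasible range [0, gamma_max].
    """
    curvature = d @ Q @ d
    if curvature <= 0:                    # flat direction: go as far as allowed if it descends
        return gamma_max if grad @ d < 0 else 0.0
    gamma = -(grad @ d) / curvature       # unconstrained minimizer along d
    return float(np.clip(gamma, 0.0, gamma_max))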
Finally, we incorporate the Wolfe gap as a stopping criterion in the ACGS and PCGS algorithms. At steps (<ref>) and (<ref>), the algorithms check whether the given threshold has been reached, so that they may stop before the preset maximum number of iterations N. As in classical FW, the Wolfe gap is an upper bound on the unknown suboptimality: by convexity of the objective f we have
f(x_k) - f(x^⋆) ≤ ⟨-f'(x_k), x^⋆ - x_k⟩ ≤ max_y∈ℳ ⟨-f'(x_k), y - x_k⟩ ≤ ϵ.
Note that, for the image and video co-localization problems with binary decision variables, after a CGS step toward the atom x_k we have
𝒮^(k+1) = {x_k} if γ_k = 1, and 𝒮^(k+1) = 𝒮^(k) ∪ {x_k} otherwise.
The weight of x_k is updated as α_x_k^(k+1) := (1-γ_k) α_x_k^(k) + γ_k, while for every other atom v ∈ 𝒮^(k) ∖ {x_k} we have α_v^(k+1) := (1-γ_k) α_v^(k).
On the other hand, for an away step we have
𝒮^(k+1) = 𝒮^(k) ∖ {v_k} if γ_k = γ_max, and 𝒮^(k+1) = 𝒮^(k) otherwise.
The case γ_k = γ_max is called a drop step. The weight of the away atom is updated as α_v_k^(k+1) := (1+γ_k) α_v_k^(k) - γ_k, while for every other atom v ∈ 𝒮^(k) ∖ {v_k} we have α_v^(k+1) := (1+γ_k) α_v^(k).
The ACGS and PCGS algorithms differ only slightly, in the direction used to update the iterate at each step. More precisely, steps (<ref>) to (<ref>) in Algorithm <ref> are replaced by steps (<ref>) and (<ref>) in Algorithm <ref>. As in Pairwise FW, the idea is to move weight only from the away atom v_k to the CGS atom x_k and keep all the other α weights unchanged. In other words,
α_v_k^(k+1) := α_v_k^(k) - γ_k and α_x_k^(k+1) := α_x_k^(k) + γ_k,
for some γ_k ≤ γ_max := α_v_k^(k).
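A minimal sketch of this active-set bookkeeping is given below; the dictionary representation and the function name are our own illustrative choices, not taken from the reference implementation.

def update_weights(alpha, step_type, gamma, fw_atom, away_atom=None):
    """Update the active-set weights alpha (dict: atom key -> weight).

    step_type: 'fw' rescales all weights by (1-gamma) and adds gamma to the FW atom;
               'away' rescales by (1+gamma) and removes gamma from the away atom;
               'pairwise' moves gamma from the away atom to the FW atom.
    """
    if step_type == 'fw':
        for k in alpha:
            alpha[k] *= (1.0 - gamma)
        alpha[fw_atom] = alpha.get(fw_atom, 0.0) + gamma
    elif step_type == 'away':
        for k in alpha:
            alpha[k] *= (1.0 + gamma)
        alpha[away_atom] -= gamma
    elif step_type == 'pairwise':
        alpha[away_atom] -= gamma
        alpha[fw_atom] = alpha.get(fw_atom, 0.0) + gamma
    # drop atoms whose weight has (numerically) vanished, e.g. after a drop step
    for k in [k for k, w in alpha.items() if w <= 1e-12]:
        del alpha[k]
    return alpha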
An important property of formulations (<ref>) and (<ref>) is that their constraints are separable across images and videos. This makes the computation more efficient when parallelized and, as with any first-order method, the approach is very memory efficient in practice. In addition, since a solution of the convex relaxation is not necessarily an integer solution that is optimal, or even feasible, for the original problem, we need to recover a solution as close as possible to the relaxed optimum. For image and video co-localization, the most natural way of finding such a solution is to solve
min_p∈𝒫 ‖p - y‖_2^2,
where𝒫is the feasible region of the original problem andyis the solution to the relaxed problem. It is easy to check that the projection problem (<ref>) is equivalent to
max_p∈𝒫 ⟨p, y⟩,
which for the video model is just a shortest path problem that can be solved efficiently using dynamic programming.
§ EXPERIMENTAL RESULTS
In this section we apply the proposed Algorithm <ref> to the problems introduced in (<ref>) and (<ref>) for the image and video co-localization task. Recall that these are quadratic problems over the convex hull of paths in a network, so the linear minimization oracle used by first-order methods amounts to finding a shortest path in the network. We compare the performance of the proposed algorithm with the work of <cit.> and <cit.> on the FW algorithm and its variants for the same problem. For this comparison we reuse the code released for <cit.> and the included aeroplane dataset, which consists of 660 variables.
We begin by reviewing the performance of Away-steps Frank-Wolfe (AFW) and its comparison with solvers such as Gurobi and Mosek. These results are taken from <cit.>, and we recall them to show that AFW outperforms those methods on our problem of interest. In <cit.>, Joulin, Tang, and Fei-Fei then showed that their Pairwise Frank-Wolfe (PairFW) algorithm outperforms all other FW variants on this problem. We end the section by showing that our proposed ACGS algorithm performs better than any first-order method that has been applied to the video co-localization problem.
§.§ FW v.s. Mosek and Gurobi
Algorithm <ref> is the variant of the FW algorithm proposed in <cit.>, where the authors evaluated it on two datasets, the PASCAL VOC 2007 dataset <cit.> and the YouTube-Objects dataset <cit.>. This algorithm is in fact the AFW algorithm introduced in <cit.> with slight changes and some extra rounding steps. The set 𝒟 in this algorithm is conv(𝒫), the convex hull of the feasible region of problems (<ref>) or (<ref>). Their implementation of Algorithm <ref> was coded in MATLAB, and they compared it with two standard Quadratic Programming (QP) solvers, Mosek and Gurobi, on a single-core 2.66 GHz Intel CPU with 6 GB of RAM. They set μ=0.4 for the image model, μ=0.6 for the video model, and μ_t=1.8 and λ=0.1 for both models. They extracted 20 objectness boxes from each image and sampled each video every 10 frames, since there is little change between frames over short time intervals.
The stopping criterion of Algorithm <ref> is based on the relative duality gap. This criterion, computed by the function duality-gap(z) in the algorithm, is defined as d = (f-g)/g, where f is the objective value and g is its dual value. In their implementation, the authors consider two values, 10^-2 and 10^-3, for the stopping threshold ϵ.
Figure <ref> presents comparisons, on a logarithmic scale, of Algorithm <ref>, as a variant of the FW algorithm, with the QP solvers Mosek and Gurobi. The comparison is based on the CPU time of the algorithms as a function of the number of images and videos or, in other words, of the dimension of the decision variable. This is the time each algorithm takes to reach a duality gap below the threshold ϵ. As these plots show, the FW variant with away steps outperforms the standard QP solvers Mosek and Gurobi.
The reason we reproduce these comparisons directly from <cit.> is that, in our implementation in the next section, we only compare our proposed algorithms against other first-order methods. These first-order methods include the AFW algorithm which, as recalled in this section, outperforms the standard QP solvers.
The PASCAL Visual Object Classes 2007 dataset <cit.> provides standardized image data for 20 object classes, along with image annotations and a bounding box and object class label for each object. Associated challenges and competitions have been used to recognize objects from a number of visual object classes in realistic scenes. The YouTube-Objects dataset <cit.> consists of YouTube videos collected for 10 classes from PASCAL <cit.>: "aeroplane", "bird", "boat", "car", "cat", "cow", "dog", "horse", "motorbike", and "train". Although the authors of <cit.> studied multiple object classes of this dataset, our implementation focuses on the "aeroplane" class.
§.§ Implementations
Knowing from <cit.> that the AFW Algorithm <ref> outperforms the standard QP solvers Mosek and Gurobi, in this section we compare our proposed CGS variants, the ACGS Algorithm <ref> and the PCGS Algorithm <ref>, with other first-order methods, including AFW. More precisely, we compare the performance of our algorithms with all FW variants, namely the original FW, the FW algorithm with away steps (AFW), and the Pairwise FW algorithm discussed in <cit.>, as well as with the original CGS algorithm <cit.>. The comparisons cover the duality gap, the CPU time, and the objective function value versus iterations.
The experiments use the YouTube-Objects dataset <cit.> described in the previous section, specifically its "aeroplane" class. We obtained the data for this class, as well as the code for the AFW and Pairwise FW algorithms, from the repositories of <cit.>. We only consider the video co-localization task with the problem formulation defined in (<ref>). All algorithms are coded in MATLAB and run on a computer with an Intel Core i5-6500 3.2 GHz CPU and 16 GB of RAM.
In our experiments, every algorithm stops either after the maximum number of iterations or once the Wolfe duality gap threshold is reached. We set the threshold to ϵ=10^-5 and the maximum number of iterations to 2000. All parameters appearing in (<ref>) are set as in <cit.> for consistency of the comparison.
Note that neither the original FW nor the original CGS algorithm reaches the desired duality gap within the preset maximum of 2000 iterations. The AFW algorithm takes 628 iterations, Pairwise FW takes 436 iterations, ACGS takes 84 iterations, and PCGS takes 82 iterations to reach the duality gap threshold.
As shown in Figure <ref>, both proposed CGS variants, ACGS and PCGS, outperform the FW algorithm and its variants as well as the original CGS algorithm. The performance in terms of CPU time versus iteration count is shown in Figure <ref>. The CPU time per iteration of AFW, ACGS, and PCGS is quite similar, although ACGS and PCGS reach the target gap much earlier than AFW.
In addition, although the FW algorithm requires only one call to the linear optimization oracle per iteration, its CPU time per iteration is not significantly better than that of the other algorithms. Also note that, out of its 84 iterations, the ACGS algorithm chooses the away direction in 34 of them, which improves significantly on the performance of CGS (more than 2000 iterations) for this problem.
Finally, the authors of <cit.> proved, for the first time, the global linear convergence of the FW variants AFW and Pairwise FW under strong convexity of the objective. A natural direction for future work related to this chapter is to establish the convergence of the proposed Algorithms <ref> and <ref>.
CGS:Lan
Lan, Guanghui, and Yi Zhou. "Conditional gradient sliding for convex optimization." SIAM Journal on Optimization 26.2 (2016): 1379-1409
Nesterov
Nesterov, Y.: Introductory lectures on convex optimization: A basic course, vol. 87. Springer Science & Business Media (2013)
joulin2014efficient
Joulin, A., Tang, K., Fei-Fei, L.: Efficient image and video co-localization with frank-wolfe algorithm. In: European Conference on Computer Vision, pp. 253–268. Springer (2014)
tang2014co
Tang, K., Joulin, A., Li, L.J., Fei-Fei, L.: Co-localization in real-world images. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1464–1471 (2014)
alexe2012measuring
Alexe, B., Deselaers, T., Ferrari, V.: Measuring the objectness of image windows. IEEE Transactions on pattern analysis and machine intelligence 34(11), 2189–2202 (2012)
boykov2001fast
Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Transactions on pattern analysis and machine intelligence 23(11), 1222–1239 (2001)
delong2012minimizing
Delong, A., Gorelick, L., Veksler, O., Boykov, Y.: Minimizing energies with hierarchical costs. International journal of computer vision 100(1), 38–58 (2012)
delong2012fast
Delong, A., Osokin, A., Isack, H.N., Boykov, Y.: Fast approximate energy minimization with label costs. International journal of computer vision 96(1), 1–27 (2012)
lowe2004distinctive
Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International journal of computer vision 60(2), 91–110 (2004)
perazzi2012saliency
Perazzi, F., Krauhenbuhl., Pritch, Y., Hornung, A.: Saliency filters: Contrast based filtering for salient region detection. In: 2012 IEEE conference on computer vision and pattern recognition, pp. 733–740. IEEE (2012)
shi2000normalized
Shi, J., Malik, J.: Normalized cuts and image segmentation. IEEE Transactions on pattern analysis and machine intelligence 22(8), 888–905 (2000)
belkin2003laplacian
Belkin, M., Niyogi, P.: Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation 15(6), 1373–1396 (2003)
bach2007diffrac
Bach, F., Harchaoui, Z.: Diffrac: a discriminative and flexible framework for clustering. Advances in Neural Information Processing Systems 20 (2007)
xu2004maximum
Xu, L., Neufeld, J., Larson, B., Schuurmans, D.: Maximum margin clustering. Advances in neural information processing systems 17 (2004)
joulin2010discriminative
Joulin, A., Bach, F., Ponce, J.: Discriminative clustering for image co-segmentation. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1943–1950. IEEE (2010)
hastie2009elements
Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition. Springer Series in Statistics. Springer (2009)
babenko2010robust
Babenko, B., Yang, M.H., Belongie, S.: Robust object tracking with online multiple instance learning. IEEE transactions on pattern analysis and machine intelligence 33(8), 1619–1632 (2010)
berclaz2011multiple
Berclaz, J., Fleuret, F., Turetken, E., Fua, P.: Multiple object tracking using k-shortest paths optimization. IEEE transactions on pattern analysis and machine intelligence 33(9), 1806–1819 (2011)
yilmaz2006object
Yilmaz, A., Javed, O., Shah, M.: Object tracking: A survey. Acm computing surveys (CSUR) 38(4), 13–es (2006)
tang2012shifting
Tang, K., Ramanathan, V., Fei-Fei, L., Koller, D.: Shifting weights: Adapting object detectors from image to video. Advances in Neural Information Processing Systems 25 (2012)
perez2002color
Perez, P., Hue, C., Vermaak, J., Gangnet, M.: Color-based probabilistic tracking. In: European Conference on Computer Vision, pp. 661–675. Springer (2002)
pang2013finding
Pang, Y., Ling, H.: Finding the best from the second bests-inhibiting subjective bias in evaluation of visual tracking algorithms. In: Proceedings of the IEEE International Conference on omputer Vision, pp. 2784–2791 (2013)
harestructured
Hare, S., Saffari, A., Torr, P., Struck, S.: Structured output tracking with kernels. In: IEEE International Conference on Computer Vision. IEEE, pp. 263–27
lacoste2015global
Lacoste-Julien, S., Jaggi, M.: On the global linear convergence of frank-wolfe optimization variants. Advances in neural information processing systems 28 (2015)
everingham2010pascal
Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The pascal visual
object classes (voc) challenge. International journal of computer vision 88(2), 303–338 (2010)
prest2012learning
Prest, A., Leistner, C., Civera, J., Schmid, C., Ferrari, V.: Learning object class detectors from weakly annotated video. In: 2012 IEEE Conference on computer vision and pattern recognition,
pp. 3282–3289. IEEE (2012) |
http://arxiv.org/abs/2307.05576v1 | 20230710121906 | Bulk viscous universe with cosmological constant | [
"Athira Sasidharan",
"Titus K Mathew"
] | gr-qc | [
"gr-qc"
] |
Bulk viscous universe with cosmological constant
Athira Sasidharan^* and Titus K Mathew^+
e-mail:[email protected], [email protected]
^*Department of
Physics, NSS Hindu College, Changanacherry, Kerala, India
^+Department of
Physics, Cochin University of Science and Technology, India,.
In this paper we consider dissipative effects in the ΛCDM model, i.e., a universe with a cosmological constant and viscous matter. We assume the most general form for the bulk viscous coefficient, ζ=ζ_0+ζ_1ȧ/a+ζ_2ä/ȧ, and obtain various constraints on the ζ's. We also study the background evolution of the model with ζ=ζ_0 and ζ=ζ_1ȧ/a, extract the value of ζ_1 using the Pantheon data, and obtain the thermodynamic evolution and the age of the universe.
§ INTRODUCTION
Since the discovery of the accelerating universe <cit.>, active research has been devoted to finding the cause of the acceleration and a model that can account for it. To date, many models fit this acceleration. Of these, the simplest and most successful is the interpretation of dark energy as the cosmological constant. However, the discrepancy between the observed and calculated values of the dark energy density, known as the cosmological constant problem <cit.>, and the unexplained coincidence between the two dark sectors, dark energy and dark matter, known as the cosmic coincidence problem <cit.>, leave room for other models to explain the current acceleration. Some of these models include quintessence
<cit.>, k-essence <cit.> and perfect fluid
models (like Chaplygin gas model) <cit.>, f(R) gravity <cit.>, f(T)gravity <cit.>, Gauss-Bonnet theory <cit.>, Lovelock
gravity <cit.>, Horava-Lifshitz gravity <cit.>,
scalar-tensor theories <cit.>, braneworld models
<cit.> etc.
A less complicated class of unified dark energy models is that of bulk viscous models. In <cit.>, a bulk viscous matter dominated universe was considered, and it was found that this viscosity alone can produce the acceleration of the expansion of the universe. A phase space analysis of this model indicates that only a constant bulk viscous coefficient predicts all the conventional phases of the universe, i.e., a prior radiation dominated phase, followed by a decelerated matter dominated phase, finally evolving into a de Sitter type universe <cit.>. A Bayesian analysis of this model shows that it is not markedly superior to the ΛCDM model, having only a slight advantage over it <cit.>. However, Maartens <cit.> pointed out that such viscous models violate the near equilibrium condition (NEC),
Π/P≪ 1
There are works <cit.> showing that Λ is an inevitable content of the universe. The matter content of the universe is dissipative, so it is worthwhile to consider a universe filled with viscous matter in the presence of a cosmological constant <cit.>. Recent papers <cit.> also showed that introducing Λ together with viscosity can satisfy the NEC. We neglect other dissipative phenomena such as shear viscosity, as it is inconsistent with the isotropic nature of the universe; the only viscosity component to be considered is therefore the bulk viscosity.
In this paper we first analyse the basic formalism of a bulk viscous matter dominated universe with a cosmological constant. We consider the general form of the bulk viscous coefficient and, using the Eckart formalism, obtain expressions for the Hubble parameter and the scale factor. We also analyse the equation of state parameter and the deceleration parameter and, from the behaviour of these parameters, obtain constraints on the viscous parameters. In section <ref>, we carry out the background study of the model for constant viscosity, obtain constraints on the parameter, and analyse the age, the thermodynamic behaviour, and the asymptotic behaviour of the model. In section <ref>, we consider the viscous coefficient as a function of the Hubble parameter, i.e., ζ=ζ_1 H, extract the value of ζ_1, and study the background evolution, the cosmological parameters, and the age of the universe. In section <ref>, the results and conclusions are discussed.
§ VISCOUS MATTER WITH COSMOLOGICAL CONSTANT
We consider a spatially flat universe described by the FLRW metric. We assume that the universe contains viscous matter (both dark and baryonic) and a cosmological constant as dark energy. We neglect the radiation component, since its contribution is very small and we are dealing with the late time acceleration. The Eckart formalism <cit.> is used for the bulk viscous pressure, which is given by
P^*=P-3ζ H
where P is the normal pressure, which we take to be zero for the whole matter component of the universe (both dark and baryonic), and ζ is the coefficient of bulk viscosity. The effective pressure is therefore due solely to the bulk viscosity. The coefficient ζ is a transport coefficient, hence it depends on the dynamics of the cosmic fluid. Since the exact form of ζ is unknown, we consider the most general form for the bulk viscous coefficient ζ <cit.>, a linear combination of three terms,
ζ=ζ_0+ζ_1ȧ/a+ζ_2ä/ȧ
The first term is a constant ζ_0, the second term is proportional to the Hubble parameter, characterizing the dependence of the bulk viscosity on the velocity, and the third is proportional to ä/ȧ, characterizing the effect of the acceleration on the bulk viscosity. In terms of the Hubble parameter
H=ȧ/a, this can be written as,
ζ=ζ_0+ζ_1H+ζ_2(Ḣ/H+H)
The Friedmann equations governing the bulk viscous universe with cosmological constant are given as,
H^2=ρ_m+ρ_Λ/3
2ä/a+(ȧ/a)^2=ρ_Λ-P^*
where we have taken 8π G = 1, ρ_m and ρ_Λ=Λ/8π G are the densities of matter and of the cosmological constant Λ, respectively, and an overdot denotes the derivative with respect to cosmic time t. We consider separate conservation equations for matter and dark energy, given below:
ρ̇_m+3H(ρ_m+P^*)=0.
ρ̇_Λ=0
where we have assumed a constant equation of state for Λ, ω_Λ=-1. Using the Friedmann equations (<ref>) and (<ref>) together with equations (<ref>) and (<ref>), we obtain the differential equation for the Hubble parameter,
Ḣ=1/2-ζ̃_2(ζ̃_0 HH_0+(ζ̃_1+ζ̃_2-3)H^2+3H_0^2 Ω_Λ 0)
where we have defined the dimensionless bulk viscous parameters ζ̃_0, ζ̃_1, ζ̃_2 as,
ζ̃_0=3ζ_0/H_0, ζ̃_1=3ζ_1, ζ̃_2=3ζ_2
H_0 is the
present value of the Hubble parameter and Ω_Λ 0 is the present density parameter of dark energy. Integrating equation (<ref>) we can get the expression for the Hubble parameter as,
H=H_0[(y+ζ̃_0)(y-2(ζ̃_1+ζ̃_2-3)-ζ̃_0)e^H_0(t-t_0)y/2-ζ̃_2-(y-ζ̃_0)(y+2(ζ̃_1+ζ̃_2-3)+ζ̃_0)/2(ζ̃_1+ζ̃_2-3)(e^H_0(t-t_0)y/2-ζ̃_2(-y+2(ζ̃_1+ζ̃_2-3)+ζ̃_0)-(y+2(ζ̃_1+ζ̃_2-3)+ζ̃_0))]
where y=√(ζ̃_0^2-12Ω_Λ 0(ζ̃_1+ζ̃_2-3)) and t_0 is the present cosmic time. As t-t_0→∞, H→ H_0[y+ζ̃_0/2(ζ̃_1+ζ̃_2-3)], a constant provided ζ̃_2<2. When t-t_0 is small, H evolves as H_0[2(2-ζ̃_2)+H_0(t-t_0)(ζ̃_0+6Ω_Λ 0+y)/2(2-ζ̃_2)+H_0(t-t_0)(y-2(ζ̃_1+ζ̃_2-3)-ζ̃_0)].
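As a quick consistency check, the evolution equation for H above can also be integrated numerically; the sketch below uses illustrative parameter values of our own choosing (they satisfy the constraints derived later in the text, but they are not fitted values from this paper).

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not fitted) parameter values
H0, Omega_L0 = 70.0, 0.7          # H0 in km/s/Mpc
z0t, z1t, z2t = 0.5, 0.2, 0.1     # dimensionless zeta-tilde parameters

def dH_dt(t, H):
    # dH/dt = [zt0*H*H0 + (zt1 + zt2 - 3)*H^2 + 3*H0^2*Omega_L0] / (2 - zt2)
    return (z0t * H * H0 + (z1t + z2t - 3.0) * H**2 + 3.0 * H0**2 * Omega_L0) / (2.0 - z2t)

# integrate forward from the present epoch (time measured in units of Mpc s/km)
sol = solve_ivp(dH_dt, (0.0, 3.0 / H0), [H0], dense_output=True)
print(sol.y[0][-1] / H0)   # approaches a constant (de Sitter) value at late times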
Using the definition of the Hubble parameter, we could obtain the expression for the scale factor from equation (<ref>) as,
a=e^H_0(t-t_0)(y-ζ̃_0)/2(ζ̃_1+ζ̃_2-3)[y+2(ζ̃_1+ζ̃_2-3)+ζ̃_0+e^H_0(t-t_0)y/2-ζ̃_2(y-2(ζ̃_1+ζ̃_2-3)-ζ̃_0)/2y]^ζ̃_2-2/ζ̃_1+ζ̃_2-3
When Ω_Λ 0=0, the scale factor reduces to
a(t)=[(ζ̃_0+ζ̃_12-3/ζ̃_0)+(3-ζ̃_12/ζ̃_0) e^ζ̃_0/2-ζ̃_2H_0(t-t_0)]^2-ζ̃_2/3-ζ̃_12, with ζ̃_12≡ζ̃_1+ζ̃_2,
which is the expression obtained in <cit.>. When t-t_0 is small, the scale factor evolves as
a∼[1+H_0(t-t_0)(y-ζ̃_0)/2(ζ̃_1+ζ̃_2-3)][1+H_0(t-t_0)/2-ζ̃_2(y-2(ζ̃_1+ζ̃_2-3)-ζ̃_0)]^ζ̃_2-2/ζ̃_1+ζ̃_2-3
When t-t_0 is very large, the expression for the scale factor shows that it increases exponentially.
§.§ Equation of state and Deceleration parameter
The equation of state parameter ω and the deceleration parameter q can be obtained using the following relation,
ω=-1-2/3Ḣ/H^2
q=-1-Ḣ/H^2
Using the expression (<ref>) and (<ref>), we get the expressions for ω and q as,
ω=-1+2y^2 (ζ̃_0+ζ̃_1+ζ̃_2-3+3 Ω _Λ 0) /3 (ζ̃_2-2) (Sinh[H_0 (t-t_0) y/2 (2-ζ̃_2)] (ζ̃_0+6 Ω _Λ 0)+Cosh[H_0 (t-t_0) y/2 (2-ζ̃_2)] y)^2
q=-1+y^2(ζ̃_0+ζ̃_1+ζ̃_2-3+3 Ω _Λ 0)/(ζ̃_2-2) (Sinh[H_0 (t-t_0) y/2 (2-ζ̃_2)] (ζ̃_0+6 Ω _Λ 0)+Cosh[H_0 (t-t_0) y/2 (2-ζ̃_2)] y)^2
The present value of ω and q can be obtained by putting t=t_0 and are,
ω_0=2ζ̃_0+2ζ̃_1-ζ̃_2+6Ω_Λ 0/3(ζ̃_2-2)
q_0=ζ̃_0+ζ̃_1-1+3Ω_Λ 0/ζ̃_2-2
The present universe is accelerating only if 3ω_0+1<0 and q_0<0, and for the universe to be in the quintessence region and to avoid a big rip it should satisfy q_0>-1. Using these conditions, together with the behaviour of the Hubble parameter and the scale factor, a universe that begins with the big bang, enters a decelerated epoch, and then makes a transition to the accelerated epoch in the past must satisfy a set of conditions on the ζ̃'s. These conditions are:
* ζ̃_0>0, ζ̃_2<2, ζ̃_0+ζ̃_1>1-3Ω_Λ 0, ζ̃_1+ζ̃_2<3, ζ̃_0+ζ̃_1+ζ̃_2<3-3Ω_Λ 0
* ζ̃_0<0, ζ̃_2>2, ζ̃_0+ζ̃_1<1-3Ω_Λ 0, ζ̃_1+ζ̃_2>3, ζ̃_0+ζ̃_1+ζ̃_2>3-3Ω_Λ 0
If we neglect the cosmological constant i.e., Ω_Λ 0=0, then these would reduce to the conditions obtained in the reference <cit.>.
§ WITH CONSTANT BULK VISCOSITY
Let us consider the case where the bulk viscous coefficient is a constant, i.e., ζ=ζ_0. The expression for the Hubble parameter becomes
H=H_0y-ζ̃_0-6 Ω _Λ0+e^1/2 H_0 (t-t_0) y(y+ζ̃ _0+6 Ω _Λ0)/y+ζ̃ _0-6+e^1/2 H_0 (t-t_0) y(y-ζ̃ _0+6)
where y=√(ζ̃_0^2+36Ω_Λ 0)
Similarly, the expression for the scale factor for constant ζ is obtained as
a=e^1/6 H_0(t-t_0) (ζ̃ _0-y)((y+ζ̃ _0-6)+e^H_0(t-t_0)y/2( y-ζ̃ _0+6)/2 y)^2/3
The corresponding equation of state and deceleration parameters for constant viscosity become
ω=(-1-(ζ̃ _0-3+3 Ω _Λ 0) y^2/3(y Cosh[1/4 H_0 (t-t_0) y]+(ζ̃ _0+6 Ω _Λ 0) Sinh[1/4 H_0 (t-t_0) y])^2)
q=(-1-(ζ̃ _0-3+3 Ω _Λ 0) y^2/2(y Cosh[1/4 H_0 (t-t_0)y]+(ζ̃ _0+6 Ω _Λ 0) Sinh[1/4H_0 (t-t_0)y])^2)
As mentioned before, for an accelerating universe the present value of the equation of state satisfies ω_0<-1/3 and the present value of the deceleration parameter satisfies q_0<0. To avoid a big rip, the equation of state parameter must obey ω_0>-1, above the phantom limit. These conditions help us constrain the value of ζ̃_0 as
1-3Ω_Λ 0<ζ̃_0<3(1-Ω_Λ 0).
Observations constrain Ω_Λ to the range 0.65-0.75 <cit.>, which restricts ζ̃_0 to -1.25<ζ̃_0<1.05.
§.§ Age of the universe
The age of the universe in this case can be obtained by setting a=1 in equation (<ref>) and is found to be
Age≡(2/H_0 y)Log[1-2 y/6+y-ζ̃_0].
The age of the universe for different values of (ζ̃_0,Ω_Λ), subject to the constraint (<ref>), is plotted in figure (<ref>).
The age plot shows reasonably good agreement for (ζ_0,Ω_Λ)=(-0.5,0.7), slightly weaker agreement for (ζ_0,Ω_Λ)=(0.1,0.68), and poor agreement for the third choice. However, the pair giving the best agreement corresponds to a negative viscosity. Whether this is physically feasible should become evident from the further considerations of the entropy evolution and the dynamical system behaviour.
§.§ Thermodynamics
We now check the validity of the generalized second law (GSL) and of the entropy maximization condition in this case. Taking the apparent horizon as the boundary of the universe, and obtaining the horizon entropy from the Bekenstein relation and the matter entropy from the Gibbs equation, we calculate the first and second derivatives of the total entropy with respect to time. The relations obtained are as follows:
Ṡ=64 π ^2e^t' ỹb^2 ỹ^4 (ỹ-6+ζ̃_0 +e^1/2 t' ỹ (ỹ+6-ζ̃_0))/H_0 (ỹ-ζ̃_0 -6 Ω_Λ +e^1/2 t' ỹ (6 Ω_Λ +ỹ+ζ̃_0))^5,
S̈=-384 π ^2 b^2 ỹ^5 e^3/2t' ỹ(b ỹ+2 (1+Ω_Λ) ỹCosh[1/2t' ỹ]+2 d Sinh[1/2t'ỹ])/((-1+e^1/2t'ỹ)ζ̃_0 -6 Ω_Λ+ỹ+e^1/2t'ỹ (6 Ω_Λ +ỹ))^6,
where b=ζ̃_0+3Ω_Λ 0-3, d=ζ̃_0+12 Ω_Λ-ζ̃_0Ω_Λ and t'=H_0(t-t_0).
The evolution of Ṡ and S̈ with respect to the scale factor, for different values of Ω_Λ and ζ̃_0 subject to the constraint (<ref>), is shown in figures (<ref>) and (<ref>), respectively.
From the figures, it is clear that the GSL and the entropy maximization condition hold for the model.
§.§ Phase space analysis
We also study the asymptotic behaviour of the model. We choose u and v as the phase space variables, defined as
u =Ω_m=ρ_m/3H^2,
v =1/H_0/H+1,
which vary in the ranges 0≤ u≤1 and 0≤ v≤1.
Using the conservation equation and the differential equation for the Hubble parameter, we obtain the autonomous equations for u and v:
u' =(1-v)/v^2(v(1-u)ζ̃_0 -3Ω_Λ u (1-v)),
v' =(1-v)/2 v(3Ω_Λ(1-v)^2+ζ̃_0 v(1-v)-3v^2).
There are three critical points for the above autonomous equation and the corresponding eigen values are listed in the Table <ref>.
In order to represent a universe with an unstable matter dominated phase and a stable, physically feasible accelerated phase, we see that ζ̃_0 must be positive, subject to the constraint (<ref>).
In determining the age corresponding to this model, we noted that good fits arise both with a negative value of ζ_0 and with a positive value (the black line in the age plot). The asymptotic analysis presented here, however, supports only a positive value of ζ_0. In the earlier analysis without a cosmological constant we likewise concluded that the case ζ=ζ_0 is preferred over the other cases. Thus, even though the age prediction changes slightly, the present model also predicts a conventional evolution of the universe with constant viscosity, as in the model without a cosmological constant.
§ WITH ζ=ζ_1 H
Let us consider another special case, ζ=ζ_1 H, where ζ depends only on the velocity component of the expansion of the universe. The expressions for the Hubble parameter and the scale factor are as follows,
H=-√(3) H_0Ω_Λ 0(6-2 ζ̃ _1-2√(3(3-ζ̃ _1)Ω_Λ 0)+2e^ H_0(t-t_0) √(3(3-ζ̃ _1)Ω_Λ 0)(3-ζ̃ _1+√(3(3-ζ̃ _1)Ω_Λ 0)))/√((3-ζ̃ _1)Ω_Λ 0)(6-2 ζ̃ _1-2√(3(3-ζ̃ _1)Ω_Λ 0)-2 e^ H_0(t-t_0) √(3(3-ζ̃_1)Ω_Λ 0) (3-ζ̃ _1+ √(3(3-ζ̃ _1)Ω_Λ 0)))
a=12^1/ζ̃_1-3e^-√(3) H_0 (t-t_0)Ω_Λ 0/√((3-ζ̃_1)Ω_Λ 0)(ζ̃_1-3+√(3(3-ζ̃_1)Ω_Λ 0)+e^ H_0(t-t_0)√(3(3-ζ̃_1)Ω_Λ 0)(3-ζ̃_1+√(3(3-ζ̃_1)Ω_Λ 0))/√((3-ζ̃_1)Ω_Λ 0))^2/3-ζ̃_1
From the expressions for the Hubble parameter and the scale factor, we see that, in order to reproduce the conventional behaviour of the universe, ζ̃_1 should be less than 3. In this case one can obtain the Hubble parameter in terms of the scale factor a, which is found to be
H=H_0√([a^ζ̃_1-3(ζ̃_1-3+3Ω_Λ 0)-3Ω_Λ 0/ζ̃_1-3])
Since a direct relation between the Hubble parameter H and the scale factor a is available, it is possible to extract the value of ζ_1 from observations.
§.§ Extraction of ζ̃_1
To extract the value of ζ̃_1, we use the latest Pantheon Type Ia supernova data, consisting of 1048 data points. The method used is χ^2 minimization, with the χ^2 defined as
χ^2≡∑^n_k=1[μ_t-μ_k]^2/σ_k^2,
where μ_k is the observational distance modulus for the k-th
Supernova (obtained from the data) with red shift z_k, σ_k^2 is the variance of the measurement,
n is the total number of data and μ_t is the theoretical distance modulus for the
k-th Supernova with the same redshift z_k, which is given as
μ_t=m-M=5log_10[d_L/Mpc]+25
where, m and M are the apparent and absolute magnitudes of the
SNe respectively. d_L is the luminosity distance and is defined as
d_L=c(1+z)∫_0^zdz'/H,
where c is the speed of light. Using the expression for H from equation (<ref>), we construct the χ^2 function.
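A schematic of this construction is sketched below; the Pantheon arrays (z_obs, mu_obs, sigma_obs) are assumed to be loaded separately, the starting parameter values are placeholders, and the function names are our own.

import numpy as np
from scipy.integrate import quad

c_km_s = 299792.458  # speed of light in km/s

def H_model(z, H0, Omega_L0, zeta1t):
    # H(a) from the zeta = zeta_1 H case, with a = 1/(1+z)
    a = 1.0 / (1.0 + z)
    num = a**(zeta1t - 3.0) * (zeta1t - 3.0 + 3.0 * Omega_L0) - 3.0 * Omega_L0
    return H0 * np.sqrt(num / (zeta1t - 3.0))

def chi2(params, z_obs, mu_obs, sigma_obs):
    H0, Omega_L0, zeta1t = params
    chi2_val = 0.0
    for z, mu_k, sig in zip(z_obs, mu_obs, sigma_obs):
        # luminosity distance d_L = c (1+z) * integral_0^z dz'/H(z'), in Mpc
        integral, _ = quad(lambda zp: 1.0 / H_model(zp, H0, Omega_L0, zeta1t), 0.0, z)
        d_L = c_km_s * (1.0 + z) * integral
        mu_t = 5.0 * np.log10(d_L) + 25.0   # theoretical distance modulus
        chi2_val += (mu_t - mu_k)**2 / sig**2
    return chi2_val

# e.g. scipy.optimize.minimize(chi2, x0=[70.0, 0.7, 0.3], args=(z_obs, mu_obs, sigma_obs))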
We extract the values of Ω_Λ 0 and H_0 along with ζ̃_1. The values are given in Table <ref>.
§.§ Evolution of equation of state parameter and deceleration parameter
The expression for the equation of state parameter and the deceleration parameter for this model can be obtained by making ζ̃_0=ζ̃_2=0 in the equations (<ref>) and (<ref>) respectively.
ω=-1-(ζ̃_1-3)(ζ̃_1-3+3 Ω_Λ 0)/(√(3(ζ̃_1-3))Cos[1/2 H_0 (t-t_0) √(3Ω_Λ 0(ζ̃_1-3))] +3√(Ω_Λ 0)Sin[1/2H_0 (t-t_0) √(3Ω_Λ 0(ζ̃_1-3))] )^2
q=-1-3 (ζ̃_1-3)(ζ̃_1-3+3 Ω _Λ 0)/2 (√(3(ζ̃_1-3))Cos[1/2H_0(t-t_0)√(3Ω _Λ 0(ζ̃_1-3))] +3 √(Ω_Λ 0)Sin[1/2√(3) H_0 (t-t_0) √(ζ̃_1-3)√(Ω_Λ 0)] )^2
The equation of state parameter ω and the deceleration parameter q, in terms of scale factor are given as,
ω=9 a^3 Ω_Λ 0 -a^ζ̃_1ζ̃_1 (ζ̃_1 -3+3 Ω_Λ 0)/-9 a^3 Ω_Λ 0 +3 a^ζ̃_1 (ζ̃_1 -3+3 Ω_Λ 0),
q=-1-a^ζ̃_1 (-3+ζ̃_1) (ζ̃_1 -3+3 Ω_Λ )/-6 a^3 Ω_Λ +2 a^ζ̃_1 (ζ̃_1-3+3 Ω_Λ ).
The plots of ω and q for the best estimated values of ζ̃_1 and Ω_Λ are shown in figures <ref> and <ref>, respectively.
The equation of state is zero in the recent past, decreases to negative values, and finally saturates at ω=-1, corresponding to a de Sitter epoch in the extreme future. The evolution of the deceleration parameter starts from around q ∼ 0.5 in the past, corresponding to a decelerated epoch, and decreases as the universe expands, saturating at q=-1 in the future de Sitter phase.
The present values of ω and q are obtained by putting a=1 in the expressions (<ref>) and (<ref>), respectively:
ω_0=-ζ̃_1 /3-Ω_Λ,
q_0=1/2 (1-ζ̃_1 -3Ω_Λ ).
Using the best estimated values of ζ̃_1 and Ω_Λ, we get ω_0=-0.867033 and q_0=-0.80055, close to the concordance values obtained from the WMAP observations.
§.§ Age of the universe
The age of the universe in this model can be obtained by equating the scale factor (equation (<ref>)) to one and is found to be
Age≡Log[3-ζ̃_1-√(3)√((3-ζ̃_1) Ω _Λ 0)/3-ζ̃_1+√(3)√((3-ζ̃_1) Ω _Λ 0)]/√(3) p √(-(-3+ζ̃_1) Ω _Λ 0).
Using the best estimated values for ζ̃_1 and Ω_Λ, the age is found to be 18.44 Gyr, which matches the concordance value of the age of the universe obtained from observations of the oldest globular clusters. In this respect the model is promising in predicting the age.
§ CONCLUSION
We have analysed a universe with a cosmological constant and bulk viscous matter. Considering the general form ζ=ζ_0+ζ_1ȧ/a+ζ_2ä/ȧ, we obtained constraints on the viscous parameters from the evolution of the Hubble parameter, the scale factor, and the cosmological parameters.
Two special cases of the viscous coefficient ζ were considered: a constant, ζ=ζ_0, and ζ=ζ_1 H, which depends on the velocity of the expanding universe. For ζ=ζ_0, the constraint is -1.25<ζ̃_0<1.05, and under this constraint the age of the universe is in accordance with the galactic observations. The GSL and the entropy maximization condition are also found to be valid for the model.
For ζ=ζ_1 H, the value of ζ_1 is extracted using the Pantheon data and is found to be 0.351. The present values of the deceleration parameter and of the equation of state are q_0=-0.80055 and ω_0=-0.867033, respectively, close to the concordance values obtained by WMAP. The age is found to be 18.44 Gyr, in agreement with the observations.
The addition of a cosmological constant to the bulk viscous matter dominated universe improves the predicted age of the universe as well as the other cosmological parameters.
Riess1
A. G. Riess et al., Observational Evidence from Supernovae for
an Accelerating Universe and a Cosmological Constant, Astron.
J., 116, 1009 (1998).
Perl1
S. Perlmutter et al., Measurements of Ω and Λ
from 42 High-Redshift Supernovae, Astrophys. J., 517, 565 (1999).
Bennet1
C. L. Bennett et al., First-Year Wilkinson Microwave
Anisotropy Probe (WMAP) Observations: Preliminary Maps and Basic
Results, Astrophys. J. Suppl. Ser., 148, 1 (2003).
Tegmark1
Tegmark et al., Cosmological parameters from SDSS and WMAP, Phys. Rev. D, 69, 103501 (2004).
Seljak
Seljak et al., Cosmological parameter analysis including SDSS
Ly forest and galaxy bias: Constraints on the
primordial spectrum of fluctuations, neutrino mass, and dark
energy, Phys. Rev. D, 71, 103515 (2005).
Komatsu1
E. Komatsu et al., Seven-year Wilkinson Microwave Anisotropy
Probe (WMAP) Observations: Cosmological Interpretation,
Astrophys. J. Suppl. Ser., 192, 18 (2011).
Weinberg
S Weinberg., The cosmological constant ,
Rev. Mod. Phys., 61, 1 (1989).
Carroll
S M Carroll, ,
Living Rev. Rel. , 4, 1 (2001).
Zlatev
Zlatev, L. Wang and P. J. Steinhardt, Quintessence, Cosmic Coincidence and the Cosmological constant ,
Phys. Rev. Lett, 82, 896 (1999).
fujii
Yasunori Fujii, Origin of the gravitational constant and
particle masses in a scale-invariant scalar-tensor theory,
Phys. Rev. D, 26, 2580 (1982).
carroll
Sean M. Carroll, Quintessence and the Rest of the World:
Suppressing Long-Range Interactions, Phys. Rev. Lett., 81, 3067 (1998).
chiba1
Takeshi Chiba, Takahiro Okabe and Masahide Yamaguchi,
Kinetically driven quintessence, Phys. Rev. D, 62, 023511
(2000).
kamen1
Alexander Kamenshchik, Ugo Moschella and Vincent Pasquier, An
alternative to quintessence, Phys. Lett. B, 511 265 (2001).
capo1
Salvatore Capozziello, Curvature quintessence, Int. J.
Mod Phys D, 11 483 (2002).
ferraro1
R. Ferraro and F. Fiorini, Modified teleparallel gravity:
Inflation without an inflaton, Phys. Rev. D, 75 084031 (2007).
nojiri
Shin'ichi Nojiri, Sergei D. Odintsov, and Misao Sasaki,
Gauss-Bonnet dark energy, Phys. Rev. D, 71 123509 (2005).
pad2
T. Padmanabhan and D. Kothawala, Lanczos-Lovelock models of
gravity, Phys. Rep., 531 115 (2013).
horava1
Petr Hořřava, Quantum gravity
at a Lifshitz point, Phys. Rev. D, 79 084008 (2009).
amendola1
Luca Amendola, Scaling solutions in general nonminimal
coupling theories, Phys. Rev. D, 60 043501 (1999).
dvali1
Gia Dvali, Gregory Gabadadze and Massimo Porrati, 4D gravity
on a brane in 5D Minkowski space, Phys. Lett. B, 485 208
(2000).
fabris1
J. C. Fabris, and S. V. B. Gonçalves, and R.de Sá Ribeiro,
Bulk viscosity driving the acceleration of the Universe,
Gen. Relat. Gravit., 38 495 (2006).
li1
Baojiu Li and John D. Barrow, Does bulk viscosity create a
viable unified dark matter model?, Phys. Rev. D, 79 103521
(2009).
Hiplito1
W. S. Hipólito-Ricaldi and H. E. S. Velten, and W. Zimdahl,
Viscous dark fluid universe, Phys. Rev. D, 82 063507
(2010).
av1
Arturo Avelino and Ulises Nucamendi, Can a matter-dominated
model with constant bulk viscosity drive the accelerated expansion
of the universe?, JCAP, 04 006 (2009).
av2
Arturo Avelino and Ulises Nucamendi, Exploring a
matter-dominated model with bulk viscosity to drive the accelerated
expansion of the Universe, JCAP, 08 009 (2010).
Athira1
Athira Sasidharan and Titus K. Mathew, Bulk viscous matter and
recent acceleration of the universe, Eur. Phys. J. C, 75, 348 (2015).
Jerin1
N D Jerin Mohan, Athira Sasidharan and Titus K. Mathew,Bulk viscous matter and recent acceleration of the universe based on causal viscous theory, Eur. Phys. J. C, 77, 849 (2017).
Athira2
Athira Sasidharan and Titus K. Mathew, Phase space analysis of bulk viscous matter
dominated universe, JHEP, 06, 138 (2016).
Athira3
Athira Sasidharan, N. D. Jerin Mohan, Moncy V. John and Titus K. Mathew, Bayesian analysis of bulk viscous matter dominated universe, Eur. Phys. J. C, 78, 628 (2018).
Maartens
R Maartens, Dissipative Cosmology, Classical and Quantum Gravity, 12, 1455 (1995).
Gron
N. Mostafapoor and O Gron, Viscous ΛCDM universe models, Astrophys. Space Sci, 333, 357-368 (2011).
Cruz
N Cruz, E Gonzalez and J Jovel,
Study of a viscous ΛWDM model : Near-Equilibrium Condition, Entropy Production and Cosmological constrains,
Symmetry, 14 1866 (2022).
Cruz1
N Cruz, E Gonzalez and J Jovel,
Singularities and soft- Big Bang in a viscous ΛCDM model,
Phys. Rev. D, 105 024047 (2022).
Eckart1
Carl Eckart, The Thermodynamics of Irreversible Processes.
III. Relativistic Theory of the Simple Fluid, Phys. Rev.
58 (1940) 919.
weinberg2 S. Weinberg, Gravitation and
cosmology: principles and applications of the general theory of
relativity, John Wiley & sons Inc., New york U.S.A. (1972).
ren1
Jie Ren and Xin-He Meng, Cosmological model with viscosity
media (dark fluid) described by an effective equation of state,
Phys. Lett. B, 633 1 (2006).
Singh J.P. Singh, Pratibha Singh, Raj Bali, Bulk viscosity and decaying vacuum density in Friedmann universe, Int J Theor Phys, 51 3828 (2012).
Avelino A. Avelino et.al, Bulk Viscous Matter-dominated Universes: Asymptotic Properties,
JCAP, 1308 12 (2013).
|
http://arxiv.org/abs/2307.05580v1 | 20230710140343 | Homogeneous search for helium in the atmosphere of 11 gas giant exoplanets with SPIRou | [
"R. Allart",
"P. -B. Lemée-Joliecoeur",
"A. Y. Jaziri",
"D. Lafrenière",
"E. Artigau",
"N. Cook",
"A. Darveau-Bernier",
"L. Dang",
"C. Cadieux",
"A. Boucher",
"V. Bourrier",
"E. K. Deibert",
"S. Pelletier",
"M. Radica",
"B. Benneke",
"A. Carmona",
"R. Cloutier",
"N. B. Cowan",
"X. Delfosse",
"J. -F. Donati",
"R. Doyon",
"P. Figueira",
"T. Forveille",
"P. Fouqué",
"E. Gaidos",
"P. -G. Gu",
"G. Hébrard",
"F. Kiefer",
"Á Kóspál",
"R. Jayawardhana",
"E. Martioli",
"L. A. Dos Santos",
"H. Shang J. D. Turner",
"A. Vidotto"
] | astro-ph.EP | [
"astro-ph.EP"
] |
1 Département de Physique, Institut Trottier de Recherche sur les Exoplanètes, Université de Montréal, Montréal, Québec, H3T 1J4, Canada
2 Observatoire astronomique de l'Université de Genève, Université de Genève, chemin Pegasi 51, CH-1290 Versoix, Switzerland
3 Gemini Observatory, NSF's NOIRLab, Casilla 603, La Serena, Chile
4 Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France
5 Dept. of Physics & Astronomy, McMaster University, 1280 Main St West, Hamilton, ON, L8S 4L8, Canada
6 Department of Physics, McGill University, 3600 rue University, Montréal, QC, H3A 2T8, Canada
7 Department of Earth & Planetary Sciences, McGill University, 3450 rue University, Montréal, QC, H3A 0E8, Canada
8 Institut de Recherche en Astrophysique et Planétologie, Université de Toulouse, CNRS, 14 avenue Edouard Belin, F-31400, Toulouse, France
9 Department of Earth Sciences, University of Hawaií at Manoa, Honolulu, HI 96822 USA
10 Institute of Astronomy and Astrophysics, Academia Sinica, Taipei 10617, Taiwan
11 Institut d'Astrophysique de Paris, CNRS, UMR 7095, Sorbonne Université, 98 bis bd Arago, 75014 Paris, France
12 Observatoire de Haute Provence, St Michel l’Observatoire, France
13 LESIA, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, Université Paris Cité, 5 place Jules Janssen, 92195 Meudon, France
14 Konkoly Observatory, Research Centre for Astronomy and Earth Sciences, Eötvös Loránd Research Network (ELKH), Konkoly-Thege Miklós út 15-17, 1121 Budapest, Hungary
15 CSFK, MTA Centre of Excellence, Konkoly-Thege Miklós út 15-17, 1121 Budapest, Hungary
16 ELTE Eötvös Loránd University, Institute of Physics, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary
17 Max Planck Institute for Astronomy, Königstuhl 17, 69117 Heidelberg, Germany
18 Department of Astronomy, Cornell University, Ithaca, NY 14853, U.S.A.
19 Laboratório Nacional de Astrofísica, Rua Estados Unidos 154, Itajubá, MG 37504364, Brazil
20 Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
21 Institute of Astronomy and Astrophysics, Academia Sinica, Taipei 10617, Taiwan
22 Department of Astronomy and Carl Sagan Institute, Cornell University, 122 Sciences Drive, Ithaca, NY 14853, USA
23 Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, The Netherlands
[email protected]
The metastable helium triplet in the near-infrared (10833 Å) is among the most important probes of exoplanet atmospheres. It can trace their extended outer layers and constrain mass-loss. We use the near-infrared high-resolution spectropolarimeter SPIRou on the CFHT to search for the spectrally resolved helium triplet in the atmospheres of eleven exoplanets, ranging from warm mini-Neptunes to hot Jupiters and orbiting G, K and M dwarfs. Observations were obtained as part of the SPIRou Legacy Survey and complementary open-time programs. We apply a homogeneous data reduction to all datasets and set constraints on the presence of metastable helium, despite the presence of systematics in the data. We confirm published detections for HAT-P-11 b, HD 189733 b, and WASP-69 b and set upper limits for the other planets. We apply the open source code to set upper limits on the mass-loss rate for the non-detections and to constrain the thermosphere temperature, mass-loss rate, line-of-sight velocity, and the altitude of the thermosphere for the detections. We confirm that the presence of metastable helium correlates with the stellar mass and the XUV flux received by the planets. We investigated the correlation between the mass-loss rate and the presence of metastable helium, but it remains difficult to draw definitive conclusions. Finally, some of our results are in contradiction with previous results in the literature, therefore we stress the importance of repeatable, homogeneous, and larger-scale analyses of the helium triplet to obtain robust statistics, study temporal variability, and better understand how the helium triplet can be used to explore the evolution of exoplanets.
SPIRou helium survey
R. Allart, P.-B. Lemée-Joliecoeur, Y. Jaziri et al.
Homogeneous search for helium in the atmosphere of 11 gas giant exoplanets with SPIRou
R. Allart1,*,Trottier Postdoctoral Fellow ,
P.-B. Lemée-Joliecoeur1,
A. Y. Jaziri2,
D. Lafrenière1,
E. Artigau1,
N. Cook1,
A. Darveau-Bernier1,
L. Dang1,Banting Postdoctoral Fellow
C. Cadieux1,
A. Boucher1,
V. Bourrier2,
E. K. Deibert3,
S. Pelletier1,
M. Radica1,
B. Benneke1,
A. Carmona4,
R. Cloutier5,
N. B. Cowan6,7,
X. Delfosse4,
J.-F. Donati8,
R. Doyon1,
P. Figueira2,
T. Forveille4,
P. Fouqué8,
E. Gaidos9,
P.-G. Gu10,
G. Hébrard11,12,
F. Kiefer13,
Á Kóspál14,15,16,17,
R. Jayawardhana18,
E. Martioli19,11,
L. A. Dos Santos20,
H. Shang21,
J. D. Turner22NHFP Sagan Fellow, and
A. Vidotto23
Received January 1, 2015; accepted January 1, 2015
§ INTRODUCTION
Through their lifetime, exoplanets might undergo several physical processes that can alter their compositions, masses and sizes. Being the outer envelope of exoplanets, atmospheres are excellent windows onto the exoplanets and particularly subject to their evolution (be it impinging radiation, mass loss, etc). The atmospheres of close-in gas giant planets hydrodynamically expand under the absorption of stellar irradiation <cit.> and, in extreme conditions, they can evaporate and be stripped away from the planet core <cit.>. Such atmospheric changes could happen in the early stage of the system (100 Myr-1 Gyr, <cit.>), in particular, once the gaseous protoplanetary disk has been dissipated and the planet is directly irradiated by the central star for planets either formed in-situ <cit.> or during their disk-driven inward migrations <cit.>. Under such intense irradiation, Neptune-sized planets could be unable to retain their gaseous envelopes and could even become bare cores due to their lower initial mass. This is consistent with the observed lack of close-in hot Neptunes (Fig. <ref>), commonly called the Neptunian or the evaporation desert <cit.>. Another explanation for the lack of hot Neptunes can be found in high-eccentricity migration scenario <cit.>. This scenario can bring some Neptunes to large orbital distances, delay their migration, and therefore can protect them from evaporation. Alternatively, high-eccentricity migration can also bring some Neptunes closer to their host stars, and possibly disrupting them through stellar tides. Finally, the planets initially in the Neptune desert can migrate away through tidal and magnetic interactions with their host stars <cit.>. Hence, one way to test these theories requires measuring the planet's stability against photoevaporation through mass-loss rate measurements for a large exoplanet population to derive statistical conclusions.
The ongoing evaporation of exoplanetary atmospheres was first observed through the hydrogen Lyman-α line at UV wavelengths for several hot Jupiters <cit.> and warm Neptunes <cit.>. However, the interstellar medium (ISM), the geocoronal emission, and the lack of stellar continuum limits the use of the Ly-α line, only observable from space, to measure exoplanet mass-loss rate.
The near-infrared helium triplet, predicted earlier on by <cit.> (see also <cit.>), has recently been discovered <cit.> and is since used to study the upper layers of exoplanet atmosphere from the thermosphere to the exosphere. The exosphere is the outermost atmospheric layer of an exoplanet and is no more gravitationally bound to it. Ground-based near-infrared high-resolution spectrographs (e.g., CARMENES, GIANO, NIRPSEC, or SPIRou) has led to several unambiguous spectrally and temporally resolved detections <cit.>, highlighting the use of the helium triplet as a robust atmospheric tracer. In addition, low resolution observations have confirmed and detected helium signatures <cit.>. Most of these detections were obtained for planets orbiting K-dwarfs, which favor the presence of helium particles in their metastable state in exoplanet atmospheres due to their higher extreme-ultraviolet and lower mid-ultraviolet flux <cit.>. This is also in agreement with several non-detections for planets orbiting around stars of other spectral types <cit.>.
In addition to being a powerful atmospheric tracer, the helium triplet is weakly (or not) affected by the ISM absorption <cit.>, the Rossiter-McLaughlin effect, the center-to-limb variation or by stellar activity <cit.>. Therefore, measuring these transitions has yield estimates of the mass-loss rate of tens of exoplanets orbiting K dwarfs. However, the disparateness in the instruments, data reduction pipelines, transmission spectrum extractions, data reproducibility, and modeling framework have prevented homogeneous analyses thus far. Although, homogenous analyses have been led on a handful of exoplanets <cit.>, we provide the largest homogeneous analysis of the helium triplet in the atmosphere of eleven exoplanets observed with SPIRou.
We describe the instrument and the observations in section <ref>, then detail the methods used in section <ref>. Section <ref> presents the helium analysis for each planet, while section <ref> discuss the general trends that can be drawn from this sample. We conclude in section <ref>.
§ OBSERVATIONS
SPIRou (SPectromètre InfraROUge, ) is a fiber-fed near-infrared (0.98-2.51 μm) echelle spectro-polarimeter installed on the 3.6 m Canada France Hawaii Telescope in Maunakea (CFHT). It has a high spectral resolution of 70 000 with 1.9 pixels per resolution element and a pixel sampling of 2.3 km·s^-1. SPIRou is fed by three fibers: fibers A and B for science with orthogonal polarization and fiber C for reference. SPIRou was already used for atmospheric studies <cit.>, which reported the presence of non white noise instrumental systematics that might be associated to modal noise for example <cit.>. Data reduction and analysis are described in section <ref>.
Transit datasets of 13 exoplanets have been collected with SPIRou as part of the SPIRou Legacy Survey (SLS, PI: Donati) and various programs obtained through Canadian open time or collaborations (namely AU Mic b, GJ1214 b, GJ3470 b, HAT-P-11 b, HD 189733 b, K2-25 b, TOI-1728 b, WASP-11 b, WASP-39 b, WASP-52 b, WASP-69 b, WASP-80 b, and WASP-127 b). The polarimetry mode was used for the datasets collected on 2019-06-17 for AU Mic b, 2019-02-18 for GJ 3470 b, and 2020-07-03, 2020-07-05, 2020-07-25, and 2021-08-24 for HD 189733 b. However, we used the extracted data in the AB mode such that we do not differentiate between polarizations (see section <ref> and <cit.>). We discard K2-25 b (19BP40, PI: Donati) due to the very low signal-to-noise ratio (S/N) obtained for each exposure and TOI-1728 b (21BC14, PI: Allart) due to a mismatch of the transit window. Figure <ref> highlights the targets observed with SPIRou in a planetary mass-irradiation diagram. The sample is mainly composed of hot to warm Jupiters (7) with some warm Neptunes (2) and mini-Neptunes (2) spanning a broad range of stellar ages and orbiting mainly K and M-type stars. We summarize the observational conditions (i.e. S/N, seeing, and airmass) during planetary transits in table <ref>. We note that due to CFHT scheduling constraints for SPIRou and different program strategies, it was not possible to gather more than one transit for some targets, limiting the reproducibility of the results.
§ METHODS
§.§ Data reduction, spectral extraction and telluric correction
All data were reduced using A PipelinE to Reduce Observations (; version 0.7.179; ), the standard SPIRou data reduction software. performs all calibrations and pre-processing to remove detector effects including dark, bad pixel, background, and performs detector non-linearity corrections <cit.>, localization of the orders, geometric changes in the image plane, correction of the flat and blaze, hot pixel and cosmic ray correction, wavelength calibration (using both a hollow-cathode UNe lamp and the Fabry-Pérot étalon; ), and removal of diffuse light from the reference fiber leaking into the science channels (when a Fabry-Pérot is used simultaneously in the reference fiber). This is done using a combination of daily calibrations and reference calibrations. The result is an optimally extracted spectrum of dimensions 4088 pixels (4096 minus 8 reference pixels) with 49 orders, referred to as extracted 2D spectra; . While the are produced for the two science fibers (A and B) and the combined flux in the science fibers (AB), we only used the AB extraction as this is the relevant data product for non-polarimetric observations.
also provides telluric-corrected versions of the spectra (Artigau et al. in prep) from both the absorption and emission of the Earth's atmosphere. The telluric absorption line correction done in is a two-step process and is briefly outlined here. First, the extracted spectra of both science targets and a large set of rapidly rotating hot stars are fitted with an Earth's transmittance model from TAPAS <cit.> that leaves percent-level residuals. Then, from the ensemble of hot star observations, derives a correction model for the residuals with three components for each pixel (optical depths for water, non-water absorbing components, and a constant). This residual model is adjusted to each science observation according to the optical depth of each component from the TAPAS fit. The resulting correction leaves residuals at the level of the PCA-based method of <cit.>, but has the advantage of simplicity and that any spurious point in the data will result in a local error rather than affecting the transmission globally as for a PCA analysis. Finally, a reconstruction of the telluric spectrum is derived using the fitted TAPAS template and the residuals model, for each observed spectrum. The pipeline performs the telluric absorption correction for lines with a transmission down to ∼10% (i.e., with relative depths of 90% with respect to the continuum), with deeper lines being masked out[In the spectral order of interest, there are no such deep lines.]. SPIRou does not include a simultaneous sky fiber, so the sky emission correction is done through a high-SNR sky library. A large library of sky spectra has been obtained through the life of the instrument and, from it, a PCA decomposition has been performed. The first 9 components of the sky spectra are fitted to the science data. To avoid subtracting the continuum of stars, this is done through a fit of the derivative of the flux rather than the flux. As sky emission lines are very narrow and have no correspondence in the stellar spectrum, this derivative-fitting is not biased systematically. However, the OH emission lines surrounding the metastable helium triplet are not well modeled and, therefore, not well corrected (residuals up to a few %) in comparison to the other sky emission lines in the SPIRou spectral range. As described in <cit.> and <cit.>, the OH lines around the He triplet are composed of two doublets: 10 834.3338 Å ([5-2]Q1e) with 10 832.412 Å ([5-2]Q2e) and 10834.241 Å ([5-2]Q1f) with 10 832.103 Å ([5-2]Q2f). The Q1-branch transition lines are the strongest and cannot be distinguished from each other. We, therefore, model these lines as three Gaussians with fixed positions, an amplitude ratio between the Q1 and Q2 lines of 0.0482, a free amplitude and FWHM for the strongest unresolved Q1 lines, and fixed FWHM for the Q2 lines at the element resolution. This model is fitted to the reconstructed sky emission spectra of and then a linear minimization of this best fit with a stellar template is applied for each stellar spectrum. The telluric correction can be seen in Fig. <ref>.
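As an illustration of the three-Gaussian OH model described above, a minimal sketch follows; the function and variable names are ours, the blended Q1 center is our assumption (mean of the two Q1 wavelengths), and the quoted 0.0482 ratio is interpreted as the Q2 amplitude relative to Q1.

import numpy as np

# Rest wavelengths (Angstrom) of the OH lines around the He triplet
Q1_BLEND_CENTER = 10834.29           # blended Q1 pair; the exact blend center is our assumption
Q2_LINES = [10832.412, 10832.103]    # Q2 lines, amplitudes tied to the Q1 blend
Q2_OVER_Q1 = 0.0482                  # fixed Q2/Q1 amplitude ratio

def gaussian(wave, center, amplitude, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return amplitude * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

def oh_emission_model(wave, amp_q1, fwhm_q1, fwhm_instr):
    """Three-Gaussian OH sky emission model near the metastable He triplet.

    amp_q1, fwhm_q1 : free amplitude and FWHM of the strongest (unresolved) Q1 blend
    fwhm_instr      : FWHM fixed at one resolution element, used for the two Q2 lines
    """
    model = gaussian(wave, Q1_BLEND_CENTER, amp_q1, fwhm_q1)
    for line in Q2_LINES:
        model += gaussian(wave, line, Q2_OVER_Q1 * amp_q1, fwhm_instr)
    return model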
§.§ Data analysis
We follow standard data analysis procedures for studies of exoplanet atmospheres at high resolution as described in detail for example in <cit.>, <cit.>, <cit.> or <cit.>.
Once the stellar spectra are extracted and telluric-corrected, we focus our analysis on echelle order 71 (10639-10976 Å), where the helium triplet falls at the top of the blaze function. The spectra are shifted to the stellar rest frame using the systemic velocity measured by the line-by-line (LBL) code, normalized using the median flux in two bands (10 823-10 826 Å and 10 839-10 842 Å), and remaining outliers (e.g., cosmic rays) are sigma-clipped and replaced following <cit.>. Spectra obtained before and after transit, hereafter called out-of-transit spectra, are averaged to create a reference master-out spectrum. Figure <ref> displays the master out for each night of each star before and after applying the telluric correction.
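To make these steps concrete, the following minimal sketch (in Python) illustrates the normalization, outlier replacement, and master-out construction, assuming the order-71 spectra are stored as a 2D array of exposures by pixels already shifted to the stellar rest frame; the array names, the 5-σ clipping threshold, and the helper names are ours and not part of the pipeline.

```python
import numpy as np

def normalize_spectra(wave, flux, bands=((10823.0, 10826.0), (10839.0, 10842.0))):
    """Normalize each exposure by its median flux in the continuum bands."""
    mask = np.zeros_like(wave, dtype=bool)
    for lo, hi in bands:
        mask |= (wave >= lo) & (wave <= hi)
    cont = np.nanmedian(flux[:, mask], axis=1)      # one continuum value per exposure
    return flux / cont[:, None]

def sigma_replace(flux, nsigma=5.0):
    """Replace outlying pixels (e.g. cosmic rays) by the time-series median."""
    med = np.nanmedian(flux, axis=0)
    std = np.nanstd(flux, axis=0)
    bad = np.abs(flux - med) > nsigma * std
    out = flux.copy()
    out[bad] = np.broadcast_to(med, flux.shape)[bad]
    return out

def master_out(flux, in_transit):
    """Average the out-of-transit exposures into a reference spectrum."""
    return np.nanmean(flux[~in_transit], axis=0)

# Example: norm = sigma_replace(normalize_spectra(wave, flux)); ref = master_out(norm, in_transit)
```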
§.§.§ Transmission spectrum
To remove stellar features and obtain a transmission spectroscopy map, we divide each spectrum of the time series by the master out. Figure <ref> displays the transmission spectroscopy map for each night of each planet in the stellar rest frame. The spectra are then Doppler-shifted to the planet rest frame based on the parameters in tables <ref> and <ref>. Figure <ref> shows the transmission spectroscopy map in the planet rest frame for each planet, averaged over the multiple transits. Partial transits contribute only to the phases where data were collected. The 1-dimensional transmission spectrum is computed for each night as the average of the transmission spectroscopy map weighted by a modelled white light curve <cit.>. The <cit.> package was used to model the white light curve with the parameters from tables <ref> and <ref>, where the quadratic limb-darkening coefficients were estimated in the J band with the <cit.> code based on the tables of <cit.>. This weighting is necessary to properly take into account the true contribution of the ingress and egress spectra to the transit-averaged transmission spectrum. Finally, for a given planet, the transmission spectra of the individual nights are averaged, weighted by their uncertainties, to build the average transmission spectrum. Figure <ref> displays the average transmission spectrum around the helium triplet for each planet.
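A possible implementation of this weighting is sketched below, assuming the batman-package conventions for the transit model; the exact weighting scheme (weights proportional to the modelled transit depth of each exposure) is our reading of the procedure, and the function names are illustrative.

```python
import numpy as np
import batman

def white_light_curve(t, t0, per, rp, a, inc, ecc, w, u):
    """Quadratic limb-darkened transit model at the exposure times t (batman conventions)."""
    p = batman.TransitParams()
    p.t0, p.per, p.rp, p.a, p.inc, p.ecc, p.w = t0, per, rp, a, inc, ecc, w
    p.u, p.limb_dark = list(u), "quadratic"
    return batman.TransitModel(p, t).light_curve(p)

def transmission_spectrum(trans_map, lc):
    """Average of the planet-rest-frame map, weighted by the modelled transit depth.

    trans_map : (n_exposures, n_pixels) array of F / F_out
    lc        : modelled white light curve at the exposure times
    """
    depth = 1.0 - lc                    # zero out of transit, maximal at mid-transit
    weights = depth / depth.sum()
    return np.nansum(trans_map * weights[:, None], axis=0)
```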
We neglect the impact of the Rossiter McLaughlin effect <cit.> and the center-to-limb variation <cit.> as it was shown to have little impact on the helium lines in several studies <cit.>, which include part of the targets studied here.
§.§.§ Light curve
We derive the helium light curve to study the temporal variability of the signal by measuring the excess absorption, assuming a symmetric signal, in a passband of 0.75 Å centered at 10833.22 Å for each exposure of the transmission spectroscopy map in the planet rest frame. Figure <ref> displays the measured helium light curve for each planet averaged over the multiple transits.
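In practice this reduces to averaging each row of the planet-rest-frame map over the fixed passband; a minimal sketch is given below, with the passband centre and width taken from the text and the function names being ours.

```python
import numpy as np

HE_CENTER = 10833.22   # Å
HE_WIDTH = 0.75        # Å

def excess_absorption(wave, ratio):
    """Mean excess absorption (in %) of a ratio spectrum F/F_out in the He passband."""
    band = np.abs(wave - HE_CENTER) <= HE_WIDTH / 2.0
    return 100.0 * (1.0 - np.nanmean(ratio[band]))

def helium_light_curve(wave, trans_map):
    """Excess absorption of every exposure of the planet-rest-frame map."""
    return np.array([excess_absorption(wave, row) for row in trans_map])
```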
§.§.§ Detection significance
The excess absorption is estimated on the transmission spectrum of each transit and on the average transmission spectrum by measuring the average signal in a passband of 0.75 Å centered at 10 833.22 Å. To assess the uncertainty on the measured excess absorption, we produced Allan plots (Fig. <ref>) to estimate the contribution of red noise to the data. We applied the technique described in <cit.>, but with a spectrally-correlated rather than a time-correlated noise source. We first estimate the expected Allan curve if our transmission spectrum were affected by white noise only. We compute the standard deviation of the transmission spectrum excluding the helium triplet (10820-10830 and 10836-10845 Å) and scale it with the bin size as 1/√(n) (the white-noise expectation), where n is the number of pixels in the bin. We then bin the transmission spectrum by n, compute the root mean square (rms), and repeat the process for different values of n. We fit the measured rms in log-log space to derive the general trend of the noise properties, and scale our white-noise value in the 0.75 Å bandpass to match the fitted rms. This technique provides a more rigorous estimation of the noise present in the data. We set the detection level as the measured excess absorption divided by the inflated 1-σ uncertainty following the aforementioned noise estimation. In the case of non-detections (below 5-σ), we report three times the 1-σ uncertainty as the 3-σ upper limit. From there, we derive the equivalent opaque radius and the widely used δR_p/H parameter. The latter corresponds to the number of scale heights probed by the equivalent opaque radius. Table <ref> summarizes these parameters for each planet.
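The binning procedure can be sketched as follows, assuming the transmission spectrum and wavelength grid are 1D arrays; the continuum bands are those quoted above, while the bin sizes and function names are illustrative choices.

```python
import numpy as np

def allan_analysis(wave, tspec, bands=((10820., 10830.), (10836., 10845.)),
                   bin_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """rms of the binned transmission spectrum versus bin size, and the white-noise expectation."""
    cont = np.zeros_like(wave, dtype=bool)
    for lo, hi in bands:
        cont |= (wave >= lo) & (wave <= hi)
    resid = tspec[cont] - np.nanmean(tspec[cont])
    sigma1 = np.nanstd(resid)                         # per-pixel scatter
    rms, white = [], []
    for n in bin_sizes:
        m = (len(resid) // n) * n
        binned = resid[:m].reshape(-1, n).mean(axis=1)
        rms.append(np.sqrt(np.nanmean(binned ** 2)))
        white.append(sigma1 / np.sqrt(n))             # white-noise expectation
    # Fit the measured rms in log-log space to capture the red-noise trend.
    slope, intercept = np.polyfit(np.log10(bin_sizes), np.log10(rms), 1)
    return np.array(rms), np.array(white), (slope, intercept)
```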
§.§.§ Bootstrap analysis
The last test that is performed to confirm the planetary origin of the helium absorption is a bootstrap analysis, also called Empirical Monte Carlo (EMC) simulations <cit.>. It consists of generating three transmission spectrum scenarios (out/out, in/in, and in/out) of 10000 iterations each. The goal is to produce fake time series to estimate how likely it is that the measured signal is built up by random noise. For each iteration, the in- and out-of-transit spectra are randomized among the pool of spectra considered in each scenario. Then, the transmission spectrum is built for each of these iterations and the excess absorption is measured as described in section <ref>. The in/in and out/out distributions are expected to be centered at zero absorption, while the in/out scenario should be close to the measured excess absorption. The results are shown in Fig. <ref>.
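A minimal sketch of the resampling is given below; the exact way the pools are split may differ in detail from the original EMC implementation, and the `measure` argument stands for the excess-absorption measurement described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def emc_scenario(spectra, in_transit, scenario, measure, n_iter=10000):
    """Empirical Monte Carlo distribution of the excess absorption for one scenario.

    spectra    : (n_exposures, n_pixels) normalized spectra
    in_transit : boolean mask of the in-transit exposures
    scenario   : 'out/out', 'in/in' or 'in/out'
    measure    : callable mapping a ratio spectrum F_in/F_out to an excess absorption
    """
    out_idx = np.where(~in_transit)[0]
    in_idx = np.where(in_transit)[0]
    values = np.empty(n_iter)
    for i in range(n_iter):
        if scenario == "out/out":          # split the out-of-transit pool into two halves
            s = rng.permutation(out_idx)
            num, den = s[: len(s) // 2], s[len(s) // 2:]
        elif scenario == "in/in":          # split the in-transit pool into two halves
            s = rng.permutation(in_idx)
            num, den = s[: len(s) // 2], s[len(s) // 2:]
        else:                              # 'in/out': random subsets of each pool
            num = rng.choice(in_idx, size=max(2, len(in_idx) // 2), replace=False)
            den = rng.choice(out_idx, size=max(2, len(out_idx) // 2), replace=False)
        values[i] = measure(spectra[num].mean(axis=0) / spectra[den].mean(axis=0))
    return values
```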
§.§ Modelling
§.§.§ Stellar pseudo-signal
The typical helium profile is too broad to spectrally differentiate between a planetary and a stellar origin; the planet and star signatures overlap for most orbital configurations, except if the planet is on an eccentric orbit, such as HAT-P-11 b <cit.>. Moreover, it is possible that the planet transits in front of an inhomogeneous stellar surface that creates a pseudo-signal, either in absorption or emission, which can partly contribute to the observed He signal <cit.>. We can consider the stellar disk as two distinct regions with bright and dark stellar patches. The helium absorption line is only produced in the dark region, and the planet only transits one of those two regions. If the planet only transits the bright region, the pseudo-absorption signal is maximized. Conversely, if the planet only transits the dark region, the pseudo-emission signal is maximized. To better visualize this effect, we developed a simple toy model that describes these extreme cases; it consists of two stellar spectra representing the bright (F_B) and dark (F_D) regions, combined with the fraction of dark regions (f, also called the filling factor). The observed normalized master out-of-transit spectrum (F_out, norm) can be expressed as
F_out, norm= (1-f)· F_B +f· F_D .
F_B and F_D are fitted from 10 827 to 10 837 Å to the Sii line at 10 830.054 Å and to the Hei lines of F_out, norm. Following the prescription of <cit.>, also used in <cit.>, two superposed Voigt profiles are fitted to the Sii line at the same fixed wavelength, and two Gaussians are fitted to the Hei lines with fixed wavelengths. We fix the Sii line profile to be identical between F_B and F_D, and we assume that the amplitudes of the Hei lines in F_B and F_D are related by a constant factor α. The filling factor is estimated using the relation of <cit.> (their Fig. 11) with the equivalent width (EW) of the Hei lines at 10 833 Å.
Assuming the planet only transits the bright region, the pseudo-signal can be written as
F_in, norm/F_out, norm = 1/(1-(R_p/R_⋆)^2) · [(1-(R_p/R_⋆)^2)· F_B + f· (F_D-F_B)] / [(1-f)· F_B + f· F_D] ,
which is equivalent to equation 11 of <cit.> in the case of ground-based high-resolution normalized spectra.
We explored the impact of α and f on the stellar spectrum and the strength of the pseudo-signal for all our targets. The stellar spectra are well reproduced except for low values of α and f, i.e. when the stellar helium absorption comes only from a small dark region. The maximum pseudo-signal is produced when all the stellar helium absorption comes from the dark region (α=0) independently of the filling factor value selected between ∼0.4 and 1.
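The toy model itself amounts to evaluating the two equations above; a minimal sketch is shown below, where F_B and F_D would come from the Voigt and Gaussian fits described earlier, and the function name is ours.

```python
import numpy as np

def pseudo_signal(f_bright, f_dark, f, rp_rs):
    """Maximum pseudo-signal when the planet transits only the bright stellar region.

    f_bright, f_dark : normalized spectra of the bright and dark regions (same grid)
    f                : filling factor of the dark, He-absorbing regions
    rp_rs            : planet-to-star radius ratio
    """
    depth = rp_rs ** 2
    f_out = (1.0 - f) * f_bright + f * f_dark
    f_in = ((1.0 - depth) * f_bright + f * (f_dark - f_bright)) / (1.0 - depth)
    return f_in / f_out
```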
§.§.§ p-winds modeling
The p-winds code <cit.> is used to calculate the thermospheric structure of the 11 targets and the resulting neutral helium triplet signature. This 1D model is largely based on the formulations of <cit.> and <cit.>, and we assumed an atmospheric composition of 90 % H and 10 % He. The density and velocity profiles of the atmosphere are calculated according to the Parker wind approximation, assuming an isothermal planetary outflow <cit.>. As input, we use the X-EUV spectral energy distribution (over 0-1170 Å, <cit.>) of the 11 targets to calculate the photoionization of H and He; the EUV luminosity is predicted in a mutually consistent manner using the formula from <cit.>, which depends on the total X-EUV flux received by the planet, itself estimated from the stellar age with the formula from <cit.>. The code calculates the density profiles of hydrogen in its neutral and ionized states, and of helium in its neutral, excited, and singly ionized states. The excited helium level corresponds to the metastable transition at 10 830 Å, the signature of interest, for which the code calculates theoretical absorption spectra. The absorption signature is compared to the observations in order to estimate the characteristics of the upper atmosphere, such as its temperature and mass-loss rate. The characterization nevertheless remains approximate, as ideal theoretical spectra calculated at mid-transit, without geometrical effects or inhomogeneities of the stellar surface, are compared to the observed mean transmission spectra.
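For reference, the hydrodynamic skeleton of such a model is the transonic isothermal Parker wind; the sketch below (our own implementation, not the p-winds API) solves the classical Parker equation for the velocity profile and derives the density from mass conservation. The ionization balance and the metastable-helium level populations computed by p-winds are not included.

```python
import numpy as np
from scipy.optimize import brentq

G, K_B, M_H = 6.674e-8, 1.381e-16, 1.673e-24   # cgs units

def parker_wind(r, m_planet, t_iso, mdot, mu=1.3):
    """Velocity and density profiles of an isothermal, transonic Parker wind.

    r        : radii [cm] at which to evaluate the profiles
    m_planet : planet mass [g]
    t_iso    : isothermal outflow temperature [K]
    mdot     : total mass-loss rate [g/s]
    mu       : mean molecular weight (1.3 for a 90/10 H/He mix by number)
    """
    cs = np.sqrt(K_B * t_iso / (mu * M_H))      # isothermal sound speed
    rc = G * m_planet / (2.0 * cs ** 2)         # sonic (critical) radius
    v = np.empty_like(r, dtype=float)
    for i, ri in enumerate(r):
        x = ri / rc
        rhs = 4.0 * np.log(x) + 4.0 / x - 3.0   # right-hand side of Parker's equation
        f = lambda u: u ** 2 - np.log(u ** 2) - rhs
        # Subsonic branch below the sonic radius, supersonic branch above it.
        v[i] = cs * (brentq(f, 1e-8, 1.0) if x < 1.0 else brentq(f, 1.0, 50.0))
    rho = mdot / (4.0 * np.pi * r ** 2 * v)     # mass conservation
    return v, rho
```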
We explore the input parameter space of the models for each of the 11 planets, varying the isothermal temperature profile, T, and the total atmospheric escape rate, ṁ, with a fixed line-of-sight bulk velocity, v, and radius value at the top of the model, r. The line-of-sight bulk velocity corresponds to an average helium particle motion due to winds in the probed area around the terminator. For the three planets with detected helium lines, we further explore these last two parameters, v and r. Previous studies using the p-wind code or similar codes <cit.> do not seem to have explored the role of the upper radius boundary used to calculate the thermospheric structure. Yet, this radius is critical for the calculation of the theoretical helium signature. Increasing the radius until the neutral triplet helium density no longer contributes significantly to the absorption signal is rarely consistent with the validity of the model beyond the Roche lobe. Above the Roche lobe, species are no longer gravitationally bound to the planet, which limits the validity of the model. We, therefore, decided to fit the model top radius for targets with a neutral triplet helium detection, but set an upper limit at the Roche lobe. This arbitrary limit restricts the amount of neutral triplet helium in the atmosphere and impacts the relative depth between the two neutral triplet helium absorption lines, through the relative radius ratio and the altitude at which the atmosphere becomes optically thick. Varying the model top radius cannot be done for non-detections, as it allows finding a model compatible with the data for any temperature and mass-loss rate by reducing the radius to decrease the absorption. We thus set the radius to the Roche lobe to constrain the maximum escape rate for non-detections.
We used χ^2 minimization to identify the best-fitting models and their uncertainties. As we found best fits to yield reduced χ^2 larger than unity, likely because of systematic noise in the data, we chose the conservative approach of scaling the error bars of the data by the square root of the reduced χ^2 from the best fit <cit.>.
For planets with detected signals, we provide uncertainties on the best-fit properties at 1-σ, while for non-detection we provide upper limits at 3-σ.
We limit the parameter space in temperature using the model of <cit.> (see also <cit.>) as a function of the gravitational potential of the planet. Below log(-Φ_G) = log(G M_pl/R_pl) = 13.0 (in erg·g^-1) they predict temperatures below 10 000 K, while above this limit they predict temperatures below 20 000 K. We limit the parameter space in mass loss using the maximum mass-loss efficiency for a photoionization-driven isothermal Parker wind <cit.>.
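These two cuts can be written compactly as follows; the heating efficiency and the optional Roche-lobe correction factor in the energy-limited formula are illustrative assumptions rather than the exact prescription of the cited works.

```python
import numpy as np

G = 6.674e-8   # cgs

def log_grav_potential(m_p, r_p):
    """log10(-Phi_G) in erg g^-1; 13.0 is the threshold quoted in the text."""
    return np.log10(G * m_p / r_p)

def energy_limited_mdot(f_xuv, m_p, r_p, eps=0.15, k_roche=1.0):
    """Energy-limited mass-loss rate [g s^-1] for an XUV flux f_xuv [erg s^-1 cm^-2] at the planet.

    eps     : heating efficiency (illustrative value, not the exact prescription)
    k_roche : optional Roche-lobe correction factor (values < 1 enhance the escape)
    """
    return eps * np.pi * f_xuv * r_p ** 3 / (G * m_p * k_roche)
```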
§ SPIROU SURVEY
Tables <ref> and <ref> summarize the stellar and planetary parameters used for the eleven systems that we observed. In the following subsections, for each exoplanet, we provide a short background history before describing the analysis of the helium triplet and then present our modeling of the transmission spectra.
§.§ AU Mic b
§.§.§ Background
AU Mic b is the inner planet of a system of two Neptune-sized planets orbiting a young M dwarf, discovered with TESS and monitored by several radial velocity (RV) spectrographs <cit.>. With an age of 22±3 Myr, this system still hosts an edge-on debris disc <cit.> and has an intense magnetic activity cycle. It was shown <cit.> that planet b has an aligned orbit and thus might have formed and migrated within the disc. Therefore, AU Mic b and c are thought to be progenitors of the super-Earth/mini-Neptune population and key targets for in-depth characterization. Such a young planetary system is of particular importance to understand how planets and their atmospheres evolve. No detections of atomic or molecular species have been reported in the literature, but attempts in the visible have been made with ESPRESSO <cit.>. In addition, <cit.> used IRD and NIRSPEC data to study the presence of metastable helium and set an upper limit on the equivalent width of 3.7 mÅ at the 99 % confidence level. The lack of atmospheric detections could be linked to the stellar wind confining the planet's atmospheric outflow <cit.>.
§.§.§ Helium triplet
The only transit of AU Mic b observed with SPIRou has no baseline before transit and is missing data until after ingress, due to airmass constraints. The telluric lines are redshifted from the helium triplet (Fig. <ref>). A variable excess absorption feature is visible at the position of the helium triplet, but the width and intensity evolve across the transit with a maximum of absorption before mid-transit (Fig. <ref>, <ref> and <ref>). The measured excess absorption on the transmission spectrum is 0.37 ± 0.09 % (4.3 σ) assuming the noise properties derived from the Allan plot (Fig. <ref>). Indeed, similar structures are visible at different wavelengths (Fig. <ref>). These structures are not caused by telluric contamination (see Fig. <ref>). However, it can be expected that young active stars have variable stellar features and it is, therefore, not possible to claim any robust detection of helium for AU Mic b with only one transit. We set the 3-σ upper limit on the presence of helium at a conservative <0.26 % following the procedure of section <ref>, which is in agreement with <cit.>. It is also possible that due to the high stellar activity of AU Mic, the master out spectrum is not representative enough of the stellar features over the transit duration. We, therefore, call for more observations of the system to confirm the signature and average out the stellar activity.
§.§ GJ 1214 b
§.§.§ Background
GJ 1214 b is a warm mini-Neptune orbiting a nearby M dwarf <cit.>. Its density is in good agreement with a water-rich composition and a hydrogen-helium envelope, which encouraged in-depth analysis of its atmosphere. However, <cit.> revealed a featureless near-infrared spectrum obtained with the Hubble Space Telescope (HST), even with exquisite precision. The authors ruled out numerous compositions and concluded that the lower atmosphere of GJ 1214 b is dominated by clouds. Nonetheless, recent attempts have been made to detect the thermosphere and exosphere of the planet (layers well above the cloud deck) through the helium triplet. <cit.>, <cit.> and <cit.> reported only upper limits on the presence of He, while <cit.> reported a tentative detection at 4.6σ. It is interesting to compare the last two results as they were both obtained at high resolution, with Keck/NIRSPEC <cit.> and CARMENES <cit.>. The upper limit set with NIRSPEC is ∼0.13 % at the 90 % confidence interval, obtained for one transit, while the detection obtained with CARMENES is 2.1 ± 0.5 %, also obtained for one transit. <cit.> proposed that the discrepancy might be caused by the telluric contamination of the nearby OH and H_2O lines and their poor correction. The authors scheduled their CARMENES transits to avoid such contamination, and showed that the H_2O line is superposed on the helium triplet in the NIRSPEC data. However, <cit.> reported an upper limit of ∼1.22 % at the 95 % confidence interval by observing one transit of GJ 1214 b with NIRSPEC at a time of the year when there is no telluric contamination. Their results are in clear contradiction with <cit.> and could be explained by instrumental or reduction systematics <cit.>, or by strong variability from the star or in the planet's atmosphere.
§.§.§ Helium triplet
The time series of GJ 1214 b observed with SPIRou spans the full transit with a baseline before and after. The telluric lines are redshifted from the helium triplet and do not overlap with the planetary track (Fig. <ref>). The transmission spectrum (Fig. <ref>) is impacted by some systematics and the helium light curve (Fig. <ref>) shows some variability before and during transit, but the bootstrap analysis (Fig. <ref>) reveals similar distributions with no significant excess absorption for any of the three scenarios. The measured excess absorption on the transmission spectrum is 1.59 ± 0.97 %, and the 3-σ upper limit on the presence of helium is <2.92 %, which is not constraining enough to settle the difference between the detection of <cit.> and the non-detections of <cit.> and <cit.>. Despite using a similar instrument and telescope, our result is less sensitive than that of <cit.>, owing to the lower S/N of the SPIRou data.
§.§ GJ3470 b
§.§.§ Background
GJ 3470 b is a warm Neptune orbiting a nearby M dwarf <cit.> on an eccentric polar orbit <cit.>. <cit.> revealed a low-metallicity, hydrogen-dominated atmosphere with the detection of water, but depleted in methane. One possibility proposed by the authors is the presence of an unknown planet that could have caused tidal heating and pushed the atmosphere to be CO-dominated. The eccentric polar orbit of GJ 3470 b could be an additional consequence of an unknown long-period companion <cit.>. In addition, <cit.> revealed through the detection of neutral hydrogen that the upper atmosphere extends beyond the Roche lobe, is elongated in the direction of the planet's motion, and strongly escapes into space. This could indicate that GJ 3470 b may have lost 4 to 35 % of its current mass over its lifetime (∼2 Gyr). Metastable helium has also been detected in the upper atmosphere of this planet <cit.>. The latter reported a maximum excess absorption of 1.5±0.3 % with CARMENES and derived a mass-loss rate of the same magnitude as <cit.>.
§.§.§ Helium triplet
The two time-series observed with SPIRou span the full transit with baselines before and after transit. Telluric lines of OH and water overlap with the helium triplet for the first night but are redshifted for the second night (Fig. <ref>). In addition, the second time series was observed under better weather conditions. The transmission spectra and helium light curves of both nights are in very good agreement with each other. From the transmission spectroscopic map (Fig. <ref>), we can see an overall increase of excess absorption over a broad wavelength range from after ingress until the end of the time series, independently of the transit. This effect is visible in the helium light curve (Fig. <ref>). However, the averaged transmission spectrum (Fig. <ref>) does not have significant excess absorption, which is in agreement with the bootstrap analysis (Fig. <ref>) of the two time-series. The measured excess absorption on the transmission spectrum is 0.55 ± 0.21 % (2.6σ). We put the 3-σ upper limit on the presence of helium at <0.63 %, as it is difficult to determine whether the observed broad feature is of planetary origin or a noise structure. Our result is in disagreement with the detections reported by <cit.> and <cit.>, even once they are integrated over the same 0.75 Å bandpass (∼1.2 % for <cit.>). We also performed an injection-recovery test by adding to our transmission spectrum a Gaussian of amplitude 1.5 % and FWHM of 1 Å following the result of <cit.>. The measured excess absorption on these injected data is 1.62 ± 0.21% (7.7σ), which confirms the tension with the literature. More data are needed to mitigate non-white noise sources and to confirm or refute the presence of metastable helium in the atmosphere of GJ 3470 b.
§.§ HAT-P-11 b
§.§.§ Background
HAT-P-11 b is the inner planet, a warm Neptune <cit.>, in a two-planet system <cit.> around a K dwarf star on an eccentric misaligned orbit <cit.>, with properties similar to GJ3470 b. <cit.> and <cit.> reported the detection of water and methane in its lower atmosphere with high-altitude clouds and a low metallicity, which is in contradiction to the metallicity-mass trend known for the Solar system planets. It is even more striking that the star has a super-solar metallicity. A possible scenario is that metals stopped being accreted before the envelope formed <cit.>. This was further supported by <cit.> who reported a low metallicity atmosphere through a panchromatic UV approach. In addition, the authors measured the escape of neutral hydrogen and the presence of a cometary-like tail. <cit.> and <cit.> detected the presence of metastable helium at near-infrared wavelengths with CARMENES and HST. Due to the high resolution of CARMENES, <cit.> resolved the helium lines and measured an excess absorption of 1.08 ± 0.05 % on a 0.75 Å passband with some variability between their two transits (0.82 ± 0.09 % and 1.21 ± 0.06 %). They also constrained the presence of helium to the thermosphere at high temperatures (or low mean molecular weight) with the presence of strong day-to-night side winds and without a strong mass-loss rate.
§.§.§ Helium triplet
The two transits of HAT-P-11 b are well observed with a baseline before and after transit. Due to the high systemic velocity of the system, there is no overlap with telluric lines (Fig. <ref>). A clear, repeatable signature is visible during the transit and is slightly blue-shifted from the expected position of the helium triplet in the planetary rest frame, which cannot be confused with the stellar rest frame due to the planet eccentricity (Fig. <ref>, <ref> and <ref>). The helium light curve does not significantly extend beyond the transit duration, in agreement with <cit.>. In addition, the two transits show similar helium line shapes and light curves with no significant temporal variation. The measured excess absorption on the transmission spectrum is 0.76 ± 0.07 % (11 σ) with a maximum of excess absorption at ∼1.3 %. The excess absorption is significantly below the reported average excess absorption measured by <cit.>, but in agreement with the value reported for their first transit of 0.82 ± 0.09 %.
§.§ HD189733 b
§.§.§ Background
HD189733 b is a hot Jupiter orbiting a relatively active K dwarf <cit.>. Due to its host star brightness, it is one of the most studied exoplanets from its lower atmosphere to its exosphere. Multiple detections of molecules (such as H_2O and CO) have been reported both at low and high resolution <cit.>. Detections of atomic species probing the higher atmospheric layers have also been reported, including Na <cit.>, K <cit.>, H (through H-α <cit.> and Lyman-α <cit.>), and He <cit.>. The helium triplet has been observed from 2016 to 2020 with three different high-resolution spectrographs (CARMENES, GIANO, and Keck/NIRSPEC) for a total of 9 transits. The three studies all report a compact metastable helium atmosphere probing atmospheric layers (∼1.2 R_P) and dynamics (blueshift of ∼3-4 km·s^-1) similar to those of the sodium doublet. However, it was shown in <cit.> that the excess absorption varies between epochs and instruments: 0.617±0.017 % for CARMENES <cit.>, 0.508±0.015 % for GIANO <cit.> and 0.420±0.013 % for NIRSPEC <cit.>. These variations are unlikely to be due to starspot occultations but could be caused by instrumental systematics, unocculted stellar active regions, the planet's atmospheric outflow, shear instability, or stellar flares increasing the star's XUV flux <cit.>.
§.§.§ Helium triplet
A total of six transit time series of HD 189733 were observed with SPIRou from 2018 to 2021. Two of them, observed on 2020-07-13 (night 3) and 2020-07-05 (night 4), are partial transits with, respectively, only egress spectra and only spectra before mid-transit. The transit of 2021-08-24 (night 6) has no after-transit baseline. The remaining transits are well covered with before and after baselines. The telluric contamination is negligible for all nights as either the strong OH component is very shallow or it is far away from the helium line. A clear excess absorption feature is detected (Fig. <ref>, <ref> and <ref>) during the transit of HD 189733 b at the expected position of the helium lines, but it cannot be disentangled between the stellar and planetary rest frames. The signature is slightly blue-shifted in the planetary rest frame and the two components of the helium doublet are visible with a contrast ratio of ∼2. Despite some large variability in the helium light curve before the transit, the excess absorption is well contained within the transit duration. The measured excess absorption on the transmission spectrum is 0.69±0.04 % (17 σ) with a maximum at ∼0.9 %. We report in table <ref> the excess absorption measured for each night for the bandpass of 0.75 Å but also for a 40 km·s^-1 (1.44 Å) bandpass to allow comparison with the previous results of <cit.>. The measured excess absorption over this bandpass on the average transmission spectrum differs from the results of NIRSPEC at 3-σ, GIANO at 0.1-σ, and CARMENES at 3.6-σ. The variability in the signal strength is not due to reduction artifacts or telluric residuals, as significant variations are measured for different transits obtained with the same instrument (GIANO and SPIRou) and the same data reduction. To further explore this variation in the signal strength, we compare in Fig. <ref> the transmission spectra obtained for the nights where the complete transit was observed (2018-09-22, 2019-06-15, 2020-07-25 and 2021-08-24). We note that for the transit of 2018-09-22 (blue), the weak component of the helium triplet has no excess absorption while the strong component has more excess absorption than on the other nights. The transmission spectrum of 2019-06-15 (green) has less excess absorption in the main component and a clear lack of absorption between the two components. The transmission spectrum of 2020-07-25 (pink) has larger noise structures and there is no clear distinction between the two components of the helium triplet. These variations of the helium line shape can have different origins such as instrumental systematics, the optical thickness of the outflow, or the presence of strongly blueshifted helium gas. It is beyond the scope of this paper to investigate the causes of these variations.
§.§ WASP-11 b
§.§.§ Background
WASP-11b, also known as HAT-P-10b, is a hot Jupiter orbiting an inactive K dwarf <cit.>. It has an aligned orbit, as is the case for many hot Jupiters <cit.>. No studies have been reported on its atmosphere.
§.§.§ Helium triplet
The time series of WASP-11 b observed with SPIRou spans the full transit with a baseline before and after. We removed the first two exposures due to high variability in the stellar spectrum. The telluric lines are redshifted relative to the helium triplet (Fig. <ref>). The transmission spectrum (Fig. <ref>) exhibits a slightly decreasing slope from 10 830 to 10 833 Å, which is likely due to the Sii stellar line at 10830 Å. It also has some excess absorption features around 10 833.7 Å (redward of the helium triplet), which seem to be associated with a few exposures before mid-transit (Fig. <ref>). However, the helium light curve (Fig. <ref>) is stable across the time series and the bootstrap analysis (Fig. <ref>) reveals similar distributions with no excess absorption for the out-out, in-in, and in-out scenarios. The measured excess absorption on the transmission spectrum is -0.09 ± 0.52 %, consistent with no absorption. The 3-σ upper limit on the presence of helium is set at <1.56 % due to the observed systematics. Unlike our other datasets, the noise structures disappear and tend toward white noise for larger bins (Fig. <ref>).
§.§ WASP-39 b
§.§.§ Background
WASP-39 b is an inflated warm Neptune orbiting a late G-type star <cit.>. It is one of the archetype exoplanets for atmospheric characterization and comparison due to its cloud-free high metallicity atmosphere <cit.>. Detections of water, carbon monoxide, carbon dioxide, and hydrogen sulfide have been reported with HST, Spitzer, and the newly launched JWST <cit.>. However, no studies have been reported for its upper atmosphere.
§.§.§ Helium triplet
The two time series (2022-06-04 and 2022-06-08) cover, respectively, the full transit with a baseline, and the transit until the start of egress with a baseline only before transit. The weakest components of the OH doublets overlap with the red wing of the stellar helium triplet but are well corrected (Fig. <ref>). Some features are visible in the transmission spectrum and the helium light curve (Figs. <ref> and <ref>) at different wavelengths and phases. We attribute these features to instrumental systematics rather than to the presence of helium in the exoplanet atmosphere. To reinforce this point, the bootstrap analysis (Fig. <ref>) shows distributions with no excess absorption for the out-out and in-in scenarios, while the mean value of the in-out scenario varies between the two nights from positive to negative excess absorption but remains compatible with no excess absorption. The measured excess absorption on the transmission spectrum is 0.47 ± 0.68 %, and the 3-σ upper limit on the presence of helium is set at <2.05 %.
§.§ WASP-52 b
§.§.§ Background
WASP-52 b is a hot Jupiter orbiting an active K dwarf <cit.>. While detections of water and clouds in its lower atmosphere have been reported <cit.>, WASP-52 b has been more intensively studied for its upper atmosphere. Detections of the sodium and potassium doublets and the H-α line in the visible with ESPRESSO <cit.> indicate an extended thermosphere above the cloud deck up to ∼1.2 R_p, still below the Roche lobe radius (1.75 R_p). These detections are a bit surprising with respect to the equilibrium temperature of ∼1200 K, but could be explained by hot upper layers due to the strong stellar XUV flux correlated with stellar activity. More recently, metastable helium was detected at high resolution by <cit.>, while only an upper limit was set at low resolution <cit.>. <cit.> reported one of the strongest excess absorptions, 3.44±0.31 % with NIRSPEC, such that helium almost fills the Roche lobe. They further applied the model of <cit.> to estimate that the planet loses 0.5% of its mass per Gyr.
§.§.§ Helium triplet
The two time-series cover the full transit with a baseline before and after transit. The strong component of the telluric OH line overlaps the helium triplet partially on one night and completely on the other (Fig. <ref>). We note that the telluric-corrected master-out spectra of the two nights are not perfectly identical, likely due to the low SNR of the datasets. This also impacts the shape of the stellar helium line, which is shallower and broader on the first night. Nonetheless, the transmission spectra and helium light curves of both nights are in good agreement with each other. From the transmission spectroscopic map (Fig. <ref>), we can see that the before-transit and in-transit spectra have features in excess absorption across the spectral range. This is also captured by the helium light curve (Fig. <ref>). In the averaged transmission spectrum (Fig. <ref>), noise structures are also visible, but no significant excess absorption is detected at the helium line position, which is corroborated by the bootstrap analysis (Fig. <ref>) of the two time-series. The measured excess absorption on the transmission spectrum is 1.36 ± 0.56 %, and the 3-σ upper limit on the presence of helium is set at <1.69 %. From the Allan plot (Fig. <ref>), the WASP-52 b data follow the white-noise estimation best: even if features have large amplitudes of up to 4 %, they are below the 3-σ upper limit once integrated over the 0.75 Å passband. This is in strong disagreement with the results reported by <cit.> and <cit.>. We note that the value reported by <cit.> is the maximum absorption of their signature, but once integrated over a 0.75 Å passband their absorption is ∼2.7 %, which is still in strong disagreement with our observations. We also performed an injection-recovery test by adding to our transmission spectrum a Gaussian of amplitude 3.44 % and FWHM of 1 Å following the result of <cit.>. The measured excess absorption on these injected data is 3.93 ± 0.56% (7σ), which confirms the tension with the literature. Here again, we need more data to settle the debate on the presence of metastable helium.
§.§ WASP-69 b
§.§.§ Background
WASP-69 b is a warm Neptune orbiting an active K dwarf <cit.>. It is one of the best targets for atmospheric characterization due to its large scale height and was, therefore, well studied at high spectral resolution, also in combination with data at low resolution. <cit.> reported the presence in the lower atmosphere of five molecules at more than 3σ with the near-infrared high-resolution spectrograph GIANO, but with variability for some molecules between transits. Nonetheless, water was independently confirmed with HST <cit.> alongside aerosols. The detection of the sodium doublet in the thermosphere was also reported at high spectral resolution <cit.>, but with a strong amplitude ratio between the two lines, which is likely due to the presence of hazes. WASP-69 b is also one of the first two exoplanets (together with HAT-P-11 b) with a measured excess absorption of helium obtained at high resolution with CARMENES <cit.>. The authors measured a blueshifted line profile with a clear excess absorption of 3.59±0.19% and a slight excess after the opaque transit. This is in agreement with an extended thermosphere up to 2.2 R_p. This signature was also confirmed at low resolution by <cit.>.
§.§.§ Helium triplet
The time series of WASP-69 b covers the full transit with a baseline before and after. The strong component of the OH lines overlaps with the red wing of the helium triplet (Fig. <ref>). A clear signature is visible during the transit, slightly blue-shifted relative to the expected position of the helium triplet, but it can still be associated with both the stellar and planetary rest frames (Fig. <ref>, <ref> and <ref>). From the helium light curve, it is not possible to confirm the presence of post-transit absorption as discussed in <cit.>. Moreover, we see some variability along the transit, with a maximum of excess absorption before mid-transit. The measured excess absorption on the transmission spectrum is 2.21 ± 0.46 % (4.8 σ) with a maximum excess absorption of ∼3.1 %. This is significantly below the maximum excess absorption reported by <cit.>, but the signal integrated over a 0.75 Å bandpass is in agreement with an absorption of ∼2 % as their line profile is quite narrow. The difference at the maximum of excess absorption is too large to be explained by data reduction effects, but could be caused by instrumental or systematic effects as well as astrophysical variability linked either to the star or the planet.
§.§ WASP-80 b
§.§.§ Background
WASP-80 b is a hot Jupiter orbiting a K7 dwarf <cit.>. Broadband absorption features of water and carbon dioxide, partly muted by clouds and aerosols, reveal an enhanced atmospheric metallicity <cit.>. No detection of metastable helium has been reported, either at low resolution by <cit.> or at high resolution by <cit.>. The latter set an upper limit of 0.7% with GIANO using 3 transits. The authors estimated that the helium-to-hydrogen abundance ratio of WASP-80 b has to be lower than solar to match their data.
§.§.§ Helium triplet
The time series of WASP-80 b observed with SPIRou spans the full transit with a baseline before and after. The telluric lines overlap with the helium triplet, with the strong component of the OH doublet on the blue wing and the water telluric line on the red wing (Fig. <ref>). The telluric (absorption and emission) lines seem to be well corrected and should not impact the potential presence of planetary helium. Systematics are present in the transmission spectrum and the helium light curve (Figs. <ref> and <ref>), but are not related to the presence of helium in the exoplanet atmosphere. The bootstrap analysis (Fig. <ref>) shows distributions with no excess absorption for the out-out, in-in, and in-out scenarios. The measured excess absorption on the transmission spectrum is 0.03 ± 0.41 %. The 3-σ upper limit on the presence of helium was set at <1.24 %, which is less stringent than the upper limit set by <cit.>, even when scaled to 3 transits and using the same upper-limit metric.
§.§ WASP-127 b
§.§.§ Background
WASP-127 b is a bloated hot Neptune on a misaligned circular orbit around an old (∼10 Gyr) G-type star <cit.>. With its large scale height, WASP-127 b is one of the most amenable planets for atmospheric characterization. <cit.> indeed revealed with HST and Spitzer a feature-rich atmosphere with the strongest amplitude known (∼800 ppm) for the water band at 1.4 μm. In addition, they constrained the presence of clouds, aerosols, and carbon-bearing species, without the possibility to distinguish between a CO-rich, high C/O ratio atmosphere and a CO_2-rich, low C/O ratio atmosphere. High-resolution observations with SPIRou <cit.> reported the detection of water and a possible hint of OH, but did not detect the presence of CO. By combining their data with the data of <cit.>, their model tends to favor the low C/O atmosphere. Although the presence of many species in the thermosphere could have been expected, only sodium was detected with ESPRESSO <cit.>, extending over only 7 scale heights, and strong upper limits were set for the potassium doublet and H-α. Similarly, <cit.> reported an upper limit of 0.87% on the presence of metastable helium with one transit with Gemini/Phoenix, which is probably due to the relatively mild high-energy environment around the star.
§.§.§ Helium triplet
Due to the long transit duration of WASP-127 b, the three time-series do not cover the full transit and have little baseline before or after. Only for the last time series (2021-05-03) is there an overlap between the strong component of the OH doublet and the red wing of the stellar helium line (Fig. <ref>). We note that the stellar helium line is broad and shallow. The transmission spectroscopy map (Fig. <ref> and <ref>) exhibits noise structures in the observer or stellar rest frame during the transit, which impact the transmission spectrum (Fig. <ref>). They might be caused by instrumental systematics not caught by APERO, or by small telluric residuals from the weak component of the OH lines. The average transmission spectrum has a broadband slope that we detrend with a polynomial of order 2. The helium light curve (Fig. <ref>) does not show any excess absorption during transit, which is confirmed by the bootstrap analysis of the in-out scenario (Fig. <ref>). We note the unusual trimodal distribution of the out-out scenario for the first and last night (2020-03-11 and 2021-05-03), which is likely caused by the lack of out-of-transit spectra. The measured excess absorption on the transmission spectrum is 0.05 ± 0.16 %. We put a 3-σ upper limit on the presence of helium at <0.48 %, which improves the previous constraint set by <cit.>.
§ INTERPRETATION
§.§ Stellar pseudo-signal
During a transit, the planet occults different stellar regions where metastable helium can be present or not. This can imprint the transmission spectrum with an absorption or emission spectral feature mimicking planetary signals. We studied the impact of a stellar pseudo-signal with the model described in section <ref> for the following planets: AU Mic b, GJ 1214 b, GJ 3470 b, HAT-P-11 b, HD 189733 b, WASP-52 b, and WASP-69 b, which are the planets where a stellar pseudo-signal could play a role on the presence of helium or its variability. For all the systems, we explored the impact of the filling factor, f, with values between 0.2 and 1 on the strength of the stellar pseudo-absorption signal but no significant variations were measured. Table <ref> reports the maximum integrated pseudo-signal excess absorption for the different planets assuming the planets only transit bright regions and all the stellar helium absorption comes from dark regions (α=0). For AU Mic b, GJ 1214 b, GJ 3470 b, and HAT-P-11 b, the impact of stellar pseudo-signal is negligible due to their small R_P/R_⋆ and cannot explain the measured excess absorptions or their variability. However, we note that in the case of GJ 3470 b, a pseudo-emission signal could reduce the helium signature amplitude by ∼0.6 % if the planet was only transiting dark regions (f=0.75 based on <cit.> and a helium EW of ∼270 mÅ) during our observations, and could partly explain the observed discrepancy with <cit.>.
In the case of the larger planets, namely HD 189733 b, WASP-52 b, and WASP-69 b, the maximum pseudo-absorption signal from the star can contribute to the variability observed between different transits but cannot be the single cause of the He absorption seen and other processes are required.
We describe in the following two subsections the extreme case of HD 189733 b, where the pseudo-signal can have an excess absorption similar to the observed one, and the case of HAT-P-11 b as the best target to study planetary variability.
§.§.§ HD 189733 b: impact of stellar variability
The measured helium EW is ∼295 mÅ which is very close to the value measured by <cit.>, and sets the filling factor value to 80% <cit.>. As it was discussed in <cit.> and <cit.>, the impact of a stellar pseudo-signal is significant for HD 189733 b (Fig. <ref>) due to its large R_P/R_⋆ with a value of ∼0.75%, which is equivalent to the measured excess absorption on the transmission spectrum. However, it is important to note that the line shape of the pseudo-signal does not match our measured helium line shape. On the latter, there is a clear additional blueshifted signal that cannot be reproduced by pseudo-signal. In addition, the strength of the pseudo-signal slightly overestimates the observed one and requires the production of helium in the bright region as well to decrease it. Therefore, it is not possible that all the detected signal is of stellar origin, such that a significant fraction must come from the planet's atmosphere. However, the observed variability between transits (and instruments) might come from stellar variability.
§.§.§ HAT-P-11b: the advantage of its eccentric orbit
Based on <cit.> and an EW of the stellar helium line of ∼240 mÅ, we estimate a filling factor of ∼0.7. As shown in Fig. <ref>, the modeled stellar pseudo signal contributes ∼0.02 % to the absorption measured at the positions of the helium lines in the planet rest frame. This is less than the 1σ uncertainty and is due to the eccentric orbit of HAT-P-11 b, which decorrelates the planetary track from the stellar one. This strengthens the planetary origin scenario for the slight variability of the helium triplet between transits. Interestingly, the impact of stellar pseudo-signal could explain the feature visible on the red wing of the helium triplet in <cit.> in absorption and here in emission depending on the occultation of bright or dark regions.
§.§ Atmospheric modeling
Figure <ref> shows the Δχ^2 of the atmospheric best-fit model as a function of the mass-loss rate and temperature of the thermosphere as derived from the p-winds models <cit.>. Regions of the parameter space in red are in agreement with the data, while models in blue cannot reproduce the observed transmission spectra. Some simulations did not converge properly at all altitudes due to numerical issues in p-winds related to large variations of the different contributions. We consequently exclude them; they are represented as white regions. The hatched region of the parameter space is physically excluded based on the gravitational potential for the temperature and on the energy-limited regime for the mass-loss rate (see section <ref>).
For the non-detections, we are able to identify regions of the parameter space that are in disagreement with the data, with a clear correlation between temperature and mass-loss rate. However, the regions compatible with the data are split between models with low and high mass-loss rates. This apparent contradiction can be explained by our choice of a homogeneous analysis with the thermosphere radius fixed to the Roche lobe. In the models at high mass-loss rates, a large fraction of the metastable helium particles lie at altitudes higher than the Roche lobe; consequently, the quantity of metastable helium below this radius is low enough to be consistent with the observed non-detections. As described before, we do not consider these models as realistic and therefore used the 3-σ contour of the low mass-loss rate region of the parameter space to set the upper limit (Table <ref>). However, it was not possible to derive upper limits for GJ 1214 b and WASP-127 b, as there is always one model at a given temperature that fits the data independently of the mass-loss rate. We note that the upper limit on the mass-loss rate of GJ 3470 b is similar to the value of 3·10^10 g· s^-1 derived by <cit.>. However, we constrain the mass-loss rate of WASP-52 b less well than <cit.>.
For the detections, the thermospheric radius, where the simulation is stopped, was set as a free parameter with an upper limit at the Roche Lobe along a free line-of-sight bulk velocity. We allow the thermospheric radius to be below the Roche Lobe radius in case of a compact thermosphere probed by the helium triplet. The best-fit models are displayed in Fig. <ref> for the three detections: HAT-P-11 b, HD 189733 b, and WASP-69 b, while the Δχ^2 map of mass-loss rate and temperature is shown in Fig. <ref>.
The best-fit model for HAT-P-11 b is obtained for ṁ=0.67^+0.27_-0.24·10^11 g· s^-1, T=8726^+158_-557 K, v=-5.3±0.8 km· s^-1 and r= 6.5^+0_-1.5 R_p. We confirm the blueshifted nature of the helium triplet reported by <cit.> (v∼-3 km· s^-1), although the two velocities agree only marginally (at the 2-3 σ level). Our results can be compared to <cit.>, where the authors benchmarked the use of p-winds on the HAT-P-11 b data of <cit.> obtained with CARMENES. The only caveat is that we let the radius free, although it cannot be higher than the limit set at the Roche lobe, indicative of an exospheric contribution. Nonetheless, we find good agreement with <cit.> for both the temperature and the mass-loss rate. The comparison with the results of <cit.> is less straightforward as the authors used the 3D code EVE, which simulates both the thermosphere and the exosphere and is more complex than a Parker wind model. For example, we derive a lower temperature but a higher mass-loss rate. This shows that the derived physical parameters of the thermosphere depend strongly on the models used and their assumptions.
The best-fit model for HD 189733 b is obtained for ṁ=0.94^+0.82_-0.60·10^11 g· s^-1, T=16690^+1966_-2182 K, v=-4.2±0.8 km· s^-1 and r= 1.41^+0.20_-0.03 R_p. Due to limitations of the code[Some differential equations cannot be solved depending on the initial parameters.], it was not possible to compute models for temperatures lower than 11 320 K, which reduced the explored parameter space. The measured blueshift is in agreement at 1-σ with the values of <cit.> and <cit.>. We confirm that the region of the atmosphere of HD 189733 b with helium in the triplet state is hot, compact, and extends over only ∼0.2 R_p <cit.>. The derived mass-loss rate is similar to that of <cit.>, but they assumed an almost fully ionized atmosphere with a very low mean molecular weight of H/He=99.2/0.8, which was necessary to fit their data.
The best-fit model for WASP-69 b is obtained for ṁ=0.40^+0.58_-0.25·10^11 g· s^-1, T=6987^+1617_-1604 K, v=-5.4±1.2 km· s^-1 and r= 2.9^+0_-0.9 R_p. We confirm the blueshifted nature of the helium triplet reported by <cit.> (v=-3.58±0.23 km· s^-1), our value being compatible at 2-σ although larger. The derived mass-loss rate is also consistent with the 3D hydrodynamic and self-consistent photochemistry models of <cit.>. The best-fit uncertainty on the radius indicates that a radius larger than the Roche lobe could be preferred, but this would require more complex models describing the exosphere <cit.>.
§ DISCUSSION
From the upper limits and detections measured in section <ref>, we estimated the equivalent opaque radius, which can be normalized by the planet scale height to produce the quantity δR_p/H proposed in <cit.>. This quantity expresses the number of scale heights that is probed by the helium triplet in the exoplanet atmosphere. We assumed here the equilibrium temperature and a mean molecular weight of 1.3 to estimate the scale height. We explored how δR_p/H correlates with various system parameters: stellar mass, stellar radius, effective temperature, age, planetary mass, planetary radius, planetary density, and equilibrium temperature. We also extended the search for possible correlations with the stellar XUV flux scaled to the semi-major axis of the planets measured between 5 and 504 Å which is the part of the XUV flux responsible for the creation of metastable helium in exoplanet atmospheres <cit.>. However, these flux values are model-dependent and are subject to various assumptions such as the age of the system (see section <ref>). We limited the search for correlation to the sample studied here because the data were obtained with the same instrument and reduced in a homogeneous way including the report of detections and upper limits, which differs in the literature from one paper to another. We note that a trend is noticeable with stellar age (not shown), but due to the lack of precision on the stellar age and the small number of targets in our sample, it is not possible to draw more conclusions.
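For completeness, δR_p/H follows from treating the measured excess absorption as an opaque annulus on top of the white-light radius and dividing by the equilibrium scale height; a minimal sketch (cgs units, mean molecular weight of 1.3 as above) is given below, with the function name being ours.

```python
import numpy as np

G, K_B, M_H = 6.674e-8, 1.381e-16, 1.673e-24   # cgs

def delta_rp_over_h(excess, r_p, r_star, m_p, t_eq, mu=1.3):
    """Number of scale heights probed by the He signal (all inputs in cgs).

    excess : fractional excess absorption (e.g. 0.0076 for 0.76 %)
    """
    h = K_B * t_eq * r_p ** 2 / (mu * M_H * G * m_p)           # equilibrium scale height
    delta_rp = np.sqrt(r_p ** 2 + excess * r_star ** 2) - r_p  # equivalent opaque annulus
    return delta_rp / h
```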
The top panels of Fig. <ref> show the correlations of δR_p/H with the stellar mass and the XUV flux, the two parameters showing the strongest trends. We note that the upper limit set for WASP-11 b is not constraining enough to be useful. However, more observations of this planet might reveal the presence of metastable helium as the system is quite similar to those with reported detections. The correlation with the stellar mass is well identified: the presence of metastable helium around exoplanets is favored for planets orbiting stars with masses between ∼0.6 and ∼0.85 M_⊙, which corresponds to K dwarfs, as predicted by <cit.>. This range of stellar mass also agrees with previous detections and non-detections published in <cit.> for example. However, it is in contradiction with the detection of helium obtained for HD 209458 b by <cit.> or for HAT-P-32 b by <cit.>, for which the stellar masses are ∼1.2 M_⊙. We can also identify a range of XUV flux received by the planets, between 1 400 and 17 800 erg·s^-1 · cm^-2, that seems to favor the presence of metastable helium. Nevertheless, as discussed above, these XUV values are model-dependent and linked to the stellar ages, which are usually not well constrained. We see that on the one hand WASP-39 b and WASP-127 b receive less XUV flux while orbiting G-type stars, and are the oldest planets studied here. On the other hand, AU Mic b, WASP-52 b, and WASP-80 b are the youngest planets and receive the highest amount of XUV flux. It is also interesting to note that WASP-52 b, for which our result contradicts <cit.>, is well above the favored range of XUV flux even with a well-constrained age, in contrast with its proximity to the range of stellar mass that seems to favor the presence of metastable helium.
The bottom panels of Fig. <ref> show the correlations for ṁ with the stellar mass and the XUV flux. The use of ṁ should be preferred to the δR_p/H as it is a more physical quantity to describe the thermosphere probed by the helium triplet. Indeed, the δR_p/H quantity assumes H and He particles in their neutral state only and at the equilibrium temperature of the planet, which is much lower than the thermospheric temperature. However, the correlations between ṁ with stellar mass and XUV flux are not well defined as upper limits on ṁ are in agreement with some of the derived ṁ for detections. This can be linked to the correlation reported in section <ref> between ṁ and T. We note that the best upper limits on ṁ are for WASP-11b and WASP-80b, while they have poorly constrained δR_p/H as opposed to AU Mic b, GJ 3470 b, WASP-39 b and WASP-52 b. This is surprising for WASP-11 b as all the planets within the same stellar mass and XUV flux range have clear detections and higher ṁ.
Interestingly, based on their gravitational potential, all the planets studied here are expected to fall in the strong hydrodynamic wind regime (intermediate regime for HD 189733 b) and thus undergo strong evaporation <cit.>. However, we do not observe signatures of the helium triplet for most of our targets. This discrepancy, already reported by <cit.> or <cit.>, could be the result of more complex mechanisms not included in 1D hydrodynamical codes. However, a simpler explanation can be found in the population of the metastable helium triplet. The strength of these helium lines does not depend on the evaporation rate but on the mid-UV flux ionizing the metastable helium particles and on the EUV flux populating the triplet state through recombination. Based on the metastable helium population mechanisms, <cit.> suggested that planets orbiting K-type stars and receiving the right balance of mid-UV and EUV flux are the most amenable to probing evaporation through the helium triplet. To summarize, strong evaporation can occur even for the planets without a helium triplet detection, but this process is not traced by the helium triplet as most of the helium particles are in the ground state.
§ SUMMARY AND CONCLUSION
This paper presents the first homogeneous analysis of the metastable helium triplet for eleven exoplanets observed with a single high-resolution near-infrared spectrograph, SPIRou. We confirmed the detection of the He triplet in the atmospheres of HAT-P-11 b, HD 189733 b, and WASP-69 b. We obtained upper limits for GJ 3470 b and WASP-52 b that disagree with previously published papers. We set new upper limits, or confirm existing ones, for AU Mic b, WASP-11 b, WASP-39 b, WASP-80 b, and WASP-127 b. We finally obtained an upper limit for GJ 1214 b, which is not constraining enough to settle the debate on the presence of helium.
We note that the SPIRou transmission spectra are affected by various systematics that are difficult to understand and to properly remove. We mitigated them by scaling our uncertainties, but more robust approaches could be considered, for example combining Gaussian processes with the model-fitting algorithm (beyond the scope of this paper). These systematics can be caused by low data quality (e.g., close to the readout regime), instrumental effects, reduction pipeline errors, or stellar variability as in the case of AU Mic b. Nonetheless, we set 3-σ upper limits that are as representative as possible of these systematics, assuming a given helium line width.
We estimated the impact of stellar pseudo-signals on the observed helium features with a simple toy model. We concluded that none of the detections could be explained solely by such an effect, but that it could contribute to some of the variability observed between transits and instruments. A more complex model would be needed to take into account the complexity of stellar surfaces and their occultation by planets, combined with the intrinsic variability of the stellar flux in various wavelength domains. To better understand the impact of stellar variability on the presence of metastable helium in exoplanet atmospheres, applying a homogeneous analysis of multiple stellar and planetary tracers (e.g., Na, H-α, He), as presented in <cit.>, will be necessary in the future. Instruments like CARMENES, GIANO operating simultaneously with HARPS-N, SPIRou simultaneously with ESPaDOnS (in the near future), and NIRPS simultaneously with HARPS will have an edge in disentangling multi-source effects. Among the three detections, HD 189733 b is probably the one most impacted by stellar variability and requires a specific analysis to properly extract the true planetary signature. However, HAT-P-11 b emerges as the best candidate to search for signatures of temporal planetary variability as it is completely free of stellar contamination, and the variability in the CARMENES data <cit.> still has to be explained.
The transmission spectra of the 11 planets were modeled with <cit.>. We excluded models at high temperatures and mass-loss rates to remain within a physical thermosphere assumption, based on the gravitational potential of the planet and the maximum mass-loss efficiency for a photoionization-driven isothermal Parker wind <cit.>. We also fixed the radius of the thermosphere to the Roche lobe for the non-detections to derive reasonable constraints. However, we note that discussions of the criteria used to set this radius are missing in the literature. Upper limits on the mass-loss rate were derived for all the non-detections with the exception of GJ 1214 b and WASP-127 b. In the case of the detections, we found a constant day-to-night side zonal wind for the three planets with hot thermospheres but relatively low mass-loss rates, which is consistent with previous findings. While HD 189733 b is confirmed to have a shallow metastable helium atmosphere, HAT-P-11 b and WASP-69 b tend to have a more extended thermosphere with possibly a small exosphere.
The correlation between δR_p/H and M_⋆ confirms that planets around K dwarfs are favored to have metastable helium in their atmosphere, as proposed in <cit.>. The correlation between δR_p/H and XUV flux, proposed and described in several studies, also seems to remain a good indicator, but it is much more model-dependent than the correlation with the stellar mass and thus requires caution when comparing studies. We also point out that the EUV emission is linked to the coronal heavy element abundance and thus to the stellar metallicity <cit.>. The mass-loss rate ṁ should be used more often instead of δR_p/H, as it is physically more representative of exoplanet thermospheres, although it remains difficult to draw a strong conclusion at the population level for now. This could be improved by gathering higher quality datasets, even for non-detections. Future studies and proposals could use as a guideline the δR_p/H versus M_⋆ correlation to build robust science cases, as it is the least model-dependent correlation, but one should not forget that studying planets outside this soft spot can turn out to be just as important.
Finally, we want to draw attention to the necessity of building reproducible observations from the proposal phase onward, taking into account weather losses, to obtain robust results. This lack of reproducibility is frequent in helium studies and calls for more than one transit observation per target. In this context, the NIRPS consortium will observe over 5 years more than 75 gas-dominated planets, with at least two transits for each of them, as part of its Guaranteed Time Observations (GTO) program. It will, therefore, provide an extended sample to study exoplanet atmospheres as a population, unlocking constraints on the origin of the Neptunian desert and planet evolution.
Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated from the summit of Maunakea by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. The observations at the Canada-France-Hawaii Telescope were performed with care and respect from the summit of Maunakea which is a significant cultural and historic site. We thank the anonymous referee for their comments that improved the overall quality of this work. R. A. is a Trottier Postdoctoral Fellow and acknowledges support from the Trottier Family Foundation. This work was supported in part through a grant from the Fonds de Recherche du Québec - Nature et Technologies (FRQNT). This work was funded by the Institut Trottier de Recherche sur les Exoplanètes (iREx). D. L., R. D., M. R. would like to acknowledge funding from the National Sciences and Research Council of Canada (NSERC). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (project Spice Dune, grant agreement No 947634; grant agreement No 730890, project SACCRED, grant agreement No 716155, project ASTROFLOW, grant agreement No 817540). J.D.T was supported for this work by NASA through the NASA Hubble Fellowship grant #HST-HF2-51495.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. J. F. D. acknowledges funding from the European Research Council (ERC) under the H2020 research & innovation programme (grant agreement #740651 NewWorlds). We acknowledge funding from the French ANR under contract number ANR18CE310019 (SPlaSH). A.C. and X.D. acknowledge funding from the Investissements d'Avenir program (ANR-15-IDEX-02), through the funding of the "Origin of Life" project of the Grenoble-Alpes University.
§ MASTER OUT SPECTRA
§ TRANSMISSION SPECTROSCOPY MAP
§ RED NOISE CONTRIBUTION
§ BOOTSTRAP SIMULATION
|
http://arxiv.org/abs/2307.06077v1 | 20230712105827 | Group Fairness in Social Choice | [
"Tomáš Masařík",
"Grzegorz Pierczyński",
"Piotr Skowron"
] | cs.GT | [
"cs.GT",
"cs.MA"
] |
Group Fairness in Social Choice
Tomáš Masařík
University of Warsaw
mailto:[email protected]@mimuw.edu.pl
Grzegorz Pierczyński
University of Warsaw
mailto:[email protected]@mimuw.edu.pl
Piotr Skowron
University of Warsaw
mailto:[email protected]@mimuw.edu.pl
We consider a voting model, where a number of candidates need to be selected subject to certain feasibility constraints. The model generalises committee elections (where there is a single constraint on the number of candidates that need to be selected), various elections with diversity constraints, the model of public decisions (where decisions need to be taken on a number of independent issues), and the model of collective scheduling. A critical property of voting is that it should be fair—not only to individuals but also to groups of voters with similar opinions on the subject of the vote; in other words, the outcome of an election should proportionally reflect the voters' preferences.
We formulate axioms of proportionality in this general model. Our axioms do not require predefining groups of voters; to the contrary, we ensure that the opinion of every subset of voters whose preferences are cohesive-enough are taken into account to the extent that is proportional to the size of the subset. Our axioms are always satisfiable, and generalize the strongest known satisfiable axioms for the more specific models. We explain how to adapt two prominent committee election rules, Proportional Approval Voting (PAV) and Phragmén Sequential Rule, as well as the concept of stable-priceability to our general model. The two rules satisfy our proportionality axioms if and only if the feasibility constraints are matroids.
§ INTRODUCTION
We consider a general voting scenario, where a subset of candidates needs to be selected based on the voters' preferences. The generality of this model comes from the fact that we do not consider specific types of elections, but rather assume we are given feasibility constraints as a part of an election. The constraints encode the type of the election by specifying which subsets of candidates can be elected. For example, if the goal is to select a fixed number of candidates, say k of them, then the constraints would simply indicate that all k-element subsets of the candidates are feasible. Naturally, the model allows to incorporate additional diversity constraints that specify lower and upper bounds on the number of selected candidates from different demographic groups.
Yet, the general feasibility constraints give much more flexibility and allow us to capture considerably more complex scenarios, which at first might seem not to be about selecting subsets of candidates. For example, consider the setting of public decisions where we need to make decisions on a number of independent issues <cit.>. For each alternative we can introduce a candidate and the feasibility constraints would indicate that exactly one alternative needs to be selected on each issue. Similarly, consider a model where the voters provide partial orders over the candidates and the goal is to establish a ranking of the candidates <cit.>. This can be also expressed in our model by introducing auxiliary candidates: for each pair of candidates, c_i and c_j, an auxiliary candidate c_i, j would indicate that c_i is ranked before c_j (either in the resulting ranking or in the voters' ballots). Our model also captures committee elections with negative votes <cit.> and judgement aggregation <cit.>—we explain this in more detail in <Ref>.
While the aforementioned types of elections might appear very different, certain common high-level principles apply to all of them.
In particular, in most scenarios it is of utmost importance to ensure that the outcomes of elections are fair—not only to individuals but also to groups of voters with similar views. Indeed, fair elections provide equal opportunities for underrepresented groups to engage in the process of decision-making, and lead to more inclusive and accountable decisions. Fairness has also a positive effect on participation and enhances the legitimacy of the elected candidates.[Fairness is also critically important in elections that do not involve humans. For example, proportional election rules are used for selecting validators in the blockchain protocol <cit.> (proportionality is important to provide resilience against coordinated attacks of malicious users) or for improving the quality of genetic algorithms <cit.>.] Accordingly, group-fairness in elections is the central focus of this paper.[While there is a large literature on individual fairness in social choice, it mainly deals with the problem of fair allocation of private goods <cit.>. In the context of voting, group-fairness is a more compelling concept. Indeed, the selected candidates are typically not assigned to particular individuals, but they rather represent (and are selected based on votes of) whole groups of voters <cit.>.]
There are at least three major ways in which one can implement group-fairness:
* One can additionally collect detailed socio-demographic information about the voters and use this information when computing winners of the election <cit.>. This approach has severe drawbacks—it elicits sensitive information, and requires in-depth a priori knowledge of which parts of personal data are important for guaranteeing fairness.
This is not the approach that we take in this paper. We will still guarantee, though, that each socio-demographic group has a proportional influence on the outcome, assuming that the members of such groups vote similarly.
* One can specify certain concrete diversity constraints <cit.>, i.e., constraints that enforce that the outcomes of elections have certain structure, such as gender equality. Our model provides the possibility of adding such constraints. However, this is not sufficient, since the diversity constraints are typically fixed, and do not depend on the voters' preferences.
* Finally, we can ensure proportional aggregation of the voters' preferences. Here, the main idea is that each subset of voters who support similar candidates should be entitled to influence the decision to the extent proportional to the size of the subset. This approach has been pursued in recent years in the literature on committee elections <cit.>, and in this paper we extend it to more general types of elections.
Our main contribution is conceptual. We define several axioms of proportionality in the general model of elections with feasibility constraints. Our axioms differ in their strength, but they share the same intuitive explanation: if a group of voters has cohesive-enough preferences, then they should have the right to decide about a proportional part of the elected outcome. Our axioms generalize the strongest known properties from the literature on committee elections, namely fully justified representation (FJR) <cit.>, extended justified representation (EJR) <cit.>, and proportional justified representation (PJR) <cit.>. One of the main results of this paper says that our axioms are always satisfiable (<Ref>).
We further explain how to adapt two prominent committee election rules, Proportional Approval Voting (PAV) and Phragmén Sequential Rule, to our general model. We provide a full characterisation explaining that the two rules satisfy the aforementioned proportionality axioms if and only if the feasibility constraints are matroids (<Ref>). We also adapt the concept of stable-priceability to the model with general constraints, and we prove that the solutions that are stable-priceable satisfy our strong notions of proportionality. Altogether, our results provide basic tools that allow one to guarantee group-fairness in different types of elections and in the presence of different types of constraints.
§.§.§ Related Work
We discuss most of the related work in the relevant parts of the paper. Here we mention yet another particularly pertinent line of research, even though it does not directly relate to our axioms. One of the strongest notions of group-fairness considered in the social-choice literature is the core <cit.>. Since for some types of elections the core might be empty, certain relaxations of the core are often considered, for example its approximate variants <cit.>. Among this literature the recent work of <cit.> is most closely related to our paper. This work likewise considers the model with diversity constraints, yet the considered axioms are very different.
§ THE MODEL
For each natural number t ∈ ℕ we set [t]={1, 2, …, t}, and we use the convention that [0] = ∅.
An election is a triple E=(C, N, ), where C={c_1, …, c_m} is a set of m candidates, N={1, 2, … n} is a set of n voters, and ⊆ 2^C is a nonempty family of feasibility sets. We say that a subset of candidates W⊆ C is feasible if W∈.
Each voter i ∈ N submits an approval ballot A_i ⊆ C, which is a subset of candidates that the voter supports. (In <Ref> we additionally discuss more expressive types of ballots.) Analogously, for a candidate c ∈ C by N(c) we denote the set of voters that approve c, N(c) = {i ∈ N c ∈ A_i}. We implicitly assume that the approval ballots are inherently associated with the voters, and thus are the part of the election. For each voter i we define her utility from a subset W ⊆ C as the number of candidates in W that the voter approves, that is u_i(W) = |A_i ∩ W|.
Intuitively, u_i(W) quantifies the satisfaction that voter i enjoys if the subset W is selected.
A selection rule is a function that given an election returns a nonempty set of feasible outcomes.[We are typically interested in a single outcome, yet we allow for ties.]
Without loss of generality, we assume that is closed under inclusion, i.e., if W ∈ and W' ⊆ W, then W' ∈. Indeed, if W' ∉, it could be completed to a feasible set, and the voters would enjoy at least as high utility from the completed set as from the original one.
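To make the above definitions concrete, the following minimal Python sketch (not part of the original model description; all names are illustrative) represents an election by its candidate set, the voters' approval ballots, and a feasibility oracle given as a predicate over subsets of candidates; the approval utility u_i(W) = |A_i ∩ W| is computed directly from the ballots, and feasible outcomes are enumerated by brute force for small examples.

from itertools import chain, combinations

# Candidates are hashable labels, ballots are sets of candidates, and the
# feasibility family is given by a predicate `is_feasible` over frozensets
# of candidates (assumed to be closed under inclusion, as in the text).

def utility(ballot, outcome):
    """Approval utility u_i(W) = |A_i ∩ W|."""
    return len(ballot & outcome)

def all_feasible_outcomes(candidates, is_feasible):
    """Enumerate all feasible subsets (exponential; only for tiny instances)."""
    cands = list(candidates)
    subsets = chain.from_iterable(
        combinations(cands, r) for r in range(len(cands) + 1))
    return [frozenset(s) for s in subsets if is_feasible(frozenset(s))]

def committee_constraint(k):
    """Committee elections: every subset of at most k candidates is feasible."""
    return lambda W: len(W) <= k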
§.§ Feasibility Constraints
Our framework generalizes several important models considered in the literature, in particular:
Committee elections. Here, we assume the goal is to select a subset of candidates (called a committee) of a given fixed size k. Thus, the feasibility constraints are of the following form:
= { W ⊆ C |W| = k}.
The model of committee elections has been extensively studied in the literature; we refer to the book by <cit.> and to the book chapter by <cit.>.
Public Decisions. Here, we assume that the set of candidates is divided into z disjoint pairs C = ⋃_r ∈ [z] C_r, |C_r| = 2 for each r ∈ [z] and C_r ∩ C_s = ∅ for all r, s ∈ [z] with r ≠ s. For each pair we must select a single candidate, thus, the feasibility constraints are given as:
= { W ⊂ C |W ∩ C_r| = 1 for each r ∈ [z]}.
Intuitively, each pair corresponds to an issue on which a binary decision needs to be made; one candidate in the pair corresponds to the “yes”-decision, and the other one to the “no”-decision. This model has been studied by <cit.>, <cit.>, and <cit.>. One particularly appealing application domain for this model is to support negotiations among groups of entities in order to establish a common policy (e.g., negotiations among political parties that want to form a governing coalition).
In this paper we additionally introduce an intermediate model that is more specific than the general model with arbitrary feasibility constraints, yet still more expressive than the models of committee elections and public decisions. This model is similar to the one of committee elections with diversity constraints, as studied by <cit.>, <cit.>, and <cit.>.
Committee elections with disjoint attributes. Here, we assume that the set of candidates is divided into z disjoint groups C = ⋃_r ∈ [z] C_r, C_r ∩ C_s = ∅ for r, s ∈ [z], r ≠ s. For each group r ∈ [z] we are given two numbers: a lower and an upper quota, denoted respectively as ℓ_r and u_r. The goal is to select k candidates so that the number of candidates selected from each set C_r is between ℓ_r and u_r.
= { W ⊆ C |W| = k and ℓ_r ≤ |W ∩ C_r| ≤ u_r for each r ∈ [z]}.
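The other two constraint families above can likewise be written as feasibility predicates. The sketch below (purely illustrative, continuing the conventions of the earlier snippet) implements the exact displayed constraints; the paper subsequently works with their closures under inclusion, which for the disjoint-attributes case would additionally require checking that a partial committee can still be completed to one satisfying the lower quotas.

def public_decisions_constraint(issues):
    """`issues` is a list of 2-element candidate sets C_r; W is feasible
    if it contains exactly one candidate per issue."""
    return lambda W: all(len(W & set(C_r)) == 1 for C_r in issues)

def disjoint_attributes_constraint(groups, lower, upper, k):
    """`groups` lists the disjoint sets C_r; W is feasible if |W| = k and
    lower[r] <= |W ∩ C_r| <= upper[r] for every r."""
    return lambda W: (len(W) == k and
                      all(lower[r] <= len(W & set(C_r)) <= upper[r]
                          for r, C_r in enumerate(groups)))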
All the types of feasibility constraints mentioned above are special cases of an even more general setting of constraints with a matroid structure <cit.>. This is proved in <Ref>.
The feasibility constraints are matroid if the following condition, called the exchange property, is satisfied:[Formally, this means that the pair (C, ) forms a matroid.]
* For each X, Y ∈ such that |X| < |Y|, there exists c∈ Y∖ X such that X∪{c}∈.
Intuitively, in a matroid all the candidates carry the same weight in the constraints. If we can remove some two candidates to make space for some other candidate c, then it is sufficient to remove only one of these two candidates. This is formalised in the following lemma.
For all elections with matroid constraints, all sets W⊆ C, c∉ W and W'⊆ W (W'≠∅) such that W∖ W' ∪{c}∈ there exists c'∈ W' such that (W∖{c'}) ∪{c}∈.
Suppose that <ref> is satisfied and the statement is violated. Consider W, W' and c witnessing this violation. Assume that the witness is chosen so that |W'| is minimized (yet still |W'| > 1, as otherwise the only member of W' would be clearly the required candidate). Now let us define sets X, Y∈ as follows: X = W∖ W' ∪{c}, Y=W. Then by <ref> we have that there exists a∈ Y ∖ X = W' such that X∪{a}∈. From the observation that |W'| > 1, we have that |X| + 1 ≤ |Y|-1. Therefore, after removing a from W' we would obtain a smaller witness of the violation of the lemma statement, a contradiction.
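On small instances, the exchange property <ref> can simply be checked by brute force. The sketch below (again illustrative, reusing the enumeration helper from the model section) enumerates all pairs of feasible sets of different sizes and looks for an augmenting candidate.

def has_exchange_property(candidates, is_feasible):
    """Brute-force test of the matroid exchange property for a feasibility
    predicate (exponential in |C|; intended only as a sanity check)."""
    feasible = all_feasible_outcomes(candidates, is_feasible)
    for X in feasible:
        for Y in feasible:
            if len(X) < len(Y) and not any(is_feasible(X | {c}) for c in Y - X):
                return False
    return True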
While a large part of our results concerns matroids, our definitions also apply to computational social choice models that do not have a matroid structure. Below we give a few examples of such models that fit our general framework.
Ranking candidates. Assume the goal is to find a ranking of the candidates <cit.> instead of simply picking a subset. The ranking should reflect the preferences of the voters expressed over the individual candidates. This setting can be represented in our general model as follows. For each pair of candidates, c_1 and c_2, we introduce an auxiliary candidate c_1, 2. Intuitively, selecting c_1, 2 would correspond to putting c_1 before c_2 in the returned ranking. The feasibility constraints would ensure that the selected auxiliary candidates correspond to a transitive, asymmetric, and complete relation on original candidates.[It remains to specify the voters' preferences over the auxiliary candidates. The most natural way is to construct an approval-based preference profile, and to assume that a voter approves c_1, 2 if she prefers candidate c_1 over c_2 in the original preference profile. This approach would be compatible with preference profiles consisting of weak partial orders.] The model also applies to collective scheduling <cit.>—the candidates would correspond to jobs to be scheduled, and the constraints allow to incorporate additional dependencies between the jobs.
Committee elections with negative votes. Consider a model where the voters are allowed to express negative feelings towards candidates <cit.>. This scenario can be modelled by introducing auxiliary candidates and adding appropriate constraints. For each candidate c we can add a virtual candidate c̅ which corresponds to not-selecting c. A voter approves c̅ if she voted against c in the original election. The feasibility constraints would ensure that we never select c and c̅ together.
Judgment Aggregation. Here the goal is to find a valuation of propositional variables that satisfies a given set of propositional formulas <cit.>. The valuation should take into account the opinions of the voters with respect to which of the variables should be set true, and which of them should be set false. We can represent this setting by adding, for each propositional variable x, two candidates, c_x, T and c_x, F, corresponding to setting the variable to true and to false, respectively. The propositional formulas can be incorporated as feasibility constraints.
In <Ref> we give examples showing that the aforementioned constraints are not matroids.
Our model is also closely related to voting in combinatorial domains <cit.>.
§ THE MAIN DEFINITION OF PROPORTIONALITY
In this section, we first formulate the main definition that captures the idea of group fairness. Next, we show a few structural properties of the proposed definition. In particular, we explain that for the specific models that we discussed in <Ref> our definition implies the well-known notion of extended justified representation (EJR) <cit.>. Let us start by introducing our main axiom.
Given an election E = (C, N, ) we say that a group of voters S ⊆ N deserves ℓ candidates if for each feasible set T ∈ either there exists X ⊆⋂_i∈ S A_i with |X| ≥ℓ such that T ∪ X ∈, or the following inequality holds:
|S|/n > ℓ/(|T| + ℓ).
We say that a feasible outcome W ∈ of an election E = (C, N, ) satisfies extended justified representation (EJR) if for each ℓ∈ and each group of voters S⊆ N that deserves ℓ candidates there exists a voter i ∈ S that approves at least ℓ candidates in W, i.e., u_i(W) ≥ℓ. ⌟
Note that for |S| ≠ n condition (<ref>) can be equivalently written as
|S|/(n - |S|) > ℓ/|T|.
The latter formulation might look a bit more intuitive, but we will mainly use the former one, since it does not require considering the case of division by zero separately.
Let us intuitively explain <Ref>. Consider a group of voters S ⊆ N and let us have a closer look at the condition saying that this group deserves ℓ candidates. Why giving ℓ candidates to S can be possibly wrong?
The main reason is that it may prohibit us from selecting candidates that are supported by other voters, namely the voters from N ∖ S. Consider a set T that is supported by those from N ∖ S. If T ∪ X ∈ then giving ℓ candidates to S does not prohibit selecting T; we can safely give ℓ candidates to S while satisfying the claim of the remaining voters. If T ∪ X ∉ then we are in (<ref>); for the sake of this explanation consider the (almost equivalent) formulation given in (<ref>). This formulation reads as follows: proportionally to its size, the claim of the group S to ℓ candidates is stronger than the claim of the remaining voters to the set T. Thus, such a set T cannot be used as evidence discouraging us from giving ℓ candidates to S.
Yet another equivalent formulation of the condition in <Ref> is the following. A group of voters S ⊆ N deserves ℓ candidates if for each feasible set T ∈ with
|T| ≤ ℓ·(n-|S|)/|S|
there exists X ⊆⋂_i∈ S A_i with |X| ≥ℓ such that T ∪ X ∈. This condition intuitively reads as follows: S deserves ℓ candidates if they can complete each reasonable suggestion of the other voters, T, with ℓ commonly approved candidates.
We note that in <Ref> we might w.l.o.g. assume that X ∈. Indeed, if T ∪ X ∈, then in particular X ∈ since is closed under inclusion. ⌟
If a group of voters S deserves ℓ candidates then in particular, there must exist a feasible set X ⊆⋂_i∈ S A_i with |X| ≥ℓ. This follows from the observation that for an empty set T = ∅ condition (<ref>) is never satisfied. ⌟
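On small instances, <Ref> can also be verified directly. The sketch below (illustrative only, reusing the enumeration helper introduced earlier) checks, for a group S given by its ballots, whether for every feasible set T there is a commonly approved set X of size ℓ with T ∪ X feasible, or whether inequality (<ref>) holds; since the feasibility family is closed under inclusion, it suffices to try sets X of size exactly ℓ.

from itertools import combinations

def deserves(S_ballots, ell, n, candidates, is_feasible):
    """Check whether the group S (a non-empty list of ballots) deserves
    `ell` candidates; `n` is the total number of voters."""
    common = frozenset.intersection(*map(frozenset, S_ballots))
    s = len(S_ballots)
    for T in all_feasible_outcomes(candidates, is_feasible):
        extendable = any(is_feasible(T | frozenset(X))
                         for X in combinations(common, ell))
        if not extendable and not (s / n > ell / (len(T) + ell)):
            return False
    return True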
Let us now illustrate our definition through a couple of examples. This will also provide intuition on why our definition generalises the analogous definitions in the more specific models.
Consider the model of committee elections with approval utilities. Assume the goal is to select a subset of k = 10 candidates, and consider a group S consisting of 30% of voters who jointly approve three candidates, c_1, c_2, and c_3. This group deserves three candidates.
Indeed, consider a set T ⊆ C, and observe that X = {c_1, c_2, c_3} always satisfies the conditions from <Ref>. If |T| ≤ 7 then T ∪ X ∈. Otherwise,
3/(|T| + 3) ≤ 3/(8 + 3) < 3/10 = |S|/n.
Thus, EJR implies that some voter from S must approve at least three out of the ten selected candidates.⌟
The same reasoning can be used to formally prove that <Ref> generalises the classic definition of EJR from the literature on committee elections <cit.>. Our definition also corresponds to the definition of proportionality for cohesive groups in the model of public decisions (Definition 7 in <cit.>). Finally, our definition implies the axiom of strong EJR by <cit.> in the context of sequential decision making <cit.> (a weaker variant of strong EJR is strong PJR; this axiom has been also considered by <cit.>, but they used the name “some periods intersection PJR”).
Let us consider yet another example.
Consider the model of committee elections with disjoint attributes.
Assume that z = 2 and so C = C_1 ∪ C_2. Assume that our goal is to select exactly 10 candidates from C_1 and exactly 20 candidates from C_2. Thus, ℓ_1 = u_1 = 10, ℓ_2 = u_2 = 20, and k = 30. Assume further that there are enough candidates in each set, e.g., |C_1| = |C_2| = 100.
Let S consist of 41% of all the voters, who jointly approve some 11 candidates from C_2. These voters deserve 8 candidates. Indeed, let X be a set of 8 candidates jointly approved by S. If |T| < 13 then T ∪ X ∈; otherwise:
8/(|T| + 8) ≤ 8/(13 + 8) = 8/21 < |S|/n.
Now assume that S additionally approves 4 candidates from C_1. Then S deserves 10 candidates. Indeed, consider two cases. If |T ∩ C_1| ≤ 6 then we can add to X four candidates from C_1 without violating the constraints. In order to prevent adding 6 candidates from C_2, it must hold that |T ∩ C_2| ≥ 15. But then:
10/(|T| + 10) ≤ 10/25 < |S|/n.
On the other hand, if |T ∩ C_1| > 6 then we observe that it also must hold that |T ∩ C_2| > 11 (as otherwise we could add to X ten candidates from C_2), and so |T| ≥ 18. Thus, also in this case the condition (<ref>) from <Ref> holds. ⌟
Recently, <cit.> have considered the model of committee elections with constraints. While they mainly focused on the notion of the core, they also proposed yet another variant of EJR that applies to the model with constraints. Our work has been done independently, yet it is important to note that their definition is considerably weaker than ours. In particular, consider the following example. Assume our goal is to select a committee of size k so that it consists of k/2 men and k/2 women. Assume there is a group of 50% of voters who jointly approve some k/2 men. According to our definition such a group deserves k/4 candidates in the elected committee. The definition of <cit.> provides no guarantees for such a group. We prove in <Ref> that our definition is strictly more general.
The main feature which makes our definitions powerful is that they are always satisfiable, independently of the specific types of constraints or voters' preferences.
For each election there exists an outcome satisfying extended justified representation.
The proof of <Ref> follows from a more general result, namely from <Ref> in <Ref>.
Finally, note that EJR also implies that the average utility of the voters from the group S is relatively high. This is already known in the context of committee elections <cit.>, and below we generalize this result to matroid constraints.
Consider an election with matroid feasibility constraints, and let W be an outcome satisfying EJR. Then, for each group of voters S deserving ℓ candidates, the average number of candidates from W that the voters from S approve is at least:
1/|S|·∑_i ∈ S |A_i ∩ W| ≥ (ℓ-1)/2.
Consider a group of voters S ⊆ N that deserves ℓ candidates.
Consider a group S' ⊆ S with |S'| ≥ |S| - i ·|S|/ℓ for some natural number i ∈ [ℓ]. We will first show that S' deserves ℓ - i candidates. Fix a feasible subset of candidates T ∈. Let us remove i arbitrary candidates from T, and denote the so-obtained subset as T'; if |T| < i, then we simply set T' = ∅. Let us consider two cases. First, assume that there exists X ⊆⋂_i ∈ S A_i of size ℓ such that X ∪ T' ∈. Let p = |T' ∩⋂_i ∈ S A_i|. Clearly:
|X ∪ T'| - |T| ≥ℓ + |T'| - p - |T| ≥ℓ - p + |T| - i - |T| = ℓ - p - i .
Then, by the exchange property <ref> applied to X ∪ T' and T, we get that we can add ℓ - p - i candidates from X to T and the so obtained set would be feasible. Consequently, there exist a set X' of size ℓ - i such that X' ∪ T ∈ and so the condition from <Ref> is satisfied for S'.
Second, assume that for each X ⊆⋂_i ∈ S A_i of size ℓ we have X ∪ T' ∉. Then, since S deserves ℓ candidates, we get that:
|S|/n > ℓ/(|T'| + ℓ).
This, in particular means that T' ≠∅ and so |T'| = |T| - i. Consequently:
|S'|/n ≥ |S|/n·(1 - i/ℓ) > ℓ/(|T'| + ℓ)·(ℓ - i)/ℓ = (ℓ - i)/(|T'| + ℓ) = (ℓ - i)/(|T| + ℓ - i).
This again shows that the condition from <Ref> is satisfied for S'.
Thus, by EJR we know that there must exist a voter v_1 who approves at least ℓ candidates in the outcome W. We can apply EJR to S ∖{v_1} and infer that there exists a voter v_2 with a given number of approved candidates in W, and so on. Altogether, we get that at least one voter approves ℓ candidates, at least ⌊|S|/ℓ⌋ voters approve ℓ - 1 candidates, and so on. Thus:
1/|S|·∑_i ∈ S |A_i ∩ W| ≥ 1/|S|·(ℓ + ∑_j = 1^ℓ-1⌊|S|· j/ℓ⌋) ≥ 1/|S|·(∑_j = 1^ℓ-1|S|· j/ℓ)
= 1/ℓ·∑_j = 1^ℓ-1 j = 1/ℓ·(ℓ-1)ℓ/2 = (ℓ-1)/2.
This completes the proof.
§ PROPORTIONAL APPROVAL VOTING
In this section we consider a natural extension of Proportional Approval Voting (PAV) to our model with general constraints. We characterise the structure of feasibility constraints for which PAV satisfies our notion of proportionality. Thus, our characterisation precisely identifies the elections for which it is appropriate to use Proportional Approval Voting.
Given an election E = (C, N, ), we define the PAV score of an outcome W ⊆ C as:
PAV-score(W) = ∑_i ∈ N H(|W ∩ A_i|), where H(k) = ∑_j = 1^k 1/j.
Proportional Approval Voting (PAV) selects a feasible outcome with the maximal PAV score. ⌟
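A purely illustrative implementation of this generalized PAV rule enumerates the feasible outcomes (using the helper from the model section) and maximizes the harmonic score; for realistic instance sizes one would need dedicated optimization techniques instead of this brute force.

def harmonic(k):
    """H(k) = 1 + 1/2 + ... + 1/k."""
    return sum(1.0 / j for j in range(1, k + 1))

def pav_score(ballots, outcome):
    return sum(harmonic(len(frozenset(A_i) & outcome)) for A_i in ballots)

def pav(ballots, candidates, is_feasible):
    """Return one feasible outcome maximizing the PAV score (ties broken arbitrarily)."""
    return max(all_feasible_outcomes(candidates, is_feasible),
               key=lambda W: pav_score(ballots, W))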
PAV has excellent properties pertaining to proportionality in the model of committee elections <cit.> and public decisions <cit.>. We will now prove that PAV exhibits good properties also when applied to the model with more general constraints—precisely, that PAV satisfies EJR if and only if the feasibility constraints have a matroid structure.
PAV satisfies EJR for all elections with matroid constraints. For each non-matroid feasibility constraints there is an election where PAV fails EJR.
First we prove that if the election has matroid constraints, then PAV satisfies EJR.
Suppose that the statement does not hold for some election E. Let W be an outcome returned by PAV for E. Let S⊆ N be a set of voters deserving ℓ candidates, such that every voter in S gets at most ℓ-1 representatives in W.
Let us denote by A_S the set ⋂_i∈ S A_i. Further, let W' denote the set of candidates c∈ W such that there exists a candidate a_c∈ A_S ∖ W with (W∖{c})∪{a_c}∈.
Intuitively, for each candidate c∈ W' we can swap c with a_c, and after such a swap the outcome will still be feasible. Of course, for each candidate c∈ W∖ W' we can also swap c with herself (such a swap does not change the outcome).
Let us denote by Δ(c, c') the change in the PAV score obtained by swapping some c∈ W with some c'∈ C. We know that for each such pair of candidates we have Δ(c, c') ≤ 0 (as otherwise W would not be an outcome maximizing PAV score). Let us estimate the following expression:
∑_c∈ W'Δ(c, a_c) + ∑_c∈ W∖ W'Δ(c, c) .
Swapping a pair of candidates can be viewed as a two step process, where first we remove one candidate from W and then we add one.
Let us first estimate the sum of decreases of the PAV score due to removing candidates. For each voter having x > 0 representatives in W, each of her x approved candidates is removed in exactly one swap, and each such removal decreases her contribution by 1/x; hence she loses at most x · (1/x) = 1 in total. Let us denote by S_0⊆ S the subset of voters from S who have no representatives in W. Hence, the total decrease is at most n-|S_0|.
Now after additions of new candidates to the committee, the PAV score increases in total by at least:
∑_i∈ S∖ S_0(|W'∖ A_i|/(|W∩ A_i|+1) + |W∩ A_i|/|W∩ A_i|) + ∑_i∈ S_0|W'∖ A_i|/(|W∩ A_i|+1)
≥∑_i∈ S(|W'∖ A_i|/(|W∩ A_i|+1) + 1) - |S_0|
= ∑_i∈ S(|W'∖ A_i| + |W∩ A_i|+1)/(|W∩ A_i|+1) - |S_0|
≥∑_i∈ S(|W'| + |(W∖ W')∩ A_i|+1)/ℓ - |S_0|
≥∑_i∈ S(|W'| + |(W∖ W')∩ A_S|+1)/ℓ - |S_0|
≥ |S|·(|W'∖ A_S| + |W∩ A_S|+1)/ℓ - |S_0|.
Intuitively, for each voter i∈ S, if the removed candidate was from W∩ A_i, we add the (|W∩ A_i|)th candidate supported by her, otherwise we add the (|W∩ A_i|+1)th candidate supported by her.
Finally, we have that:
|S|·(|W'∖ A_S| + |W∩ A_S|+1)/ℓ - |S_0| - (n - |S_0|) ≤ 0,
which is equivalent to:
|S|/n ≤ ℓ/(|W'∖ A_S| + |W∩ A_S|+1).
Consider now set T obtained in the following way:
* first, we set T = W',
* second, we remove from T all the candidates from W'∩ A_S and some arbitrary additional ℓ-1-|W∩ A_S| candidates.
We will prove that S cannot propose a subset X⊆ A_S of size ℓ such that T∪ X∈. Indeed, otherwise from <ref> applied to the sets W' and T∪ X we would have that there exists a candidate c∈ X such that W'∪{c}∈. But then from <Ref> applied to W, W∖ W' and c, we get that there exists a candidate c'∈ W∖ W' such that W∖{c'}∪{c}∈. But from the definition of the set W' it must hold that c'∈ W', a contradiction.
Since we have proved that the group S cannot propose any committee X⊆ A_S of size ℓ such that T∪ X∈, the following must hold:
|S|/n > ℓ/(|T| + ℓ).
From the construction of set T, we know that:
|T| = |W'| - |W'∩ A_S| - (ℓ - 1 - |W ∩ A_S|)
= |W'∖ A_S| + |W∩ A_S| - ℓ + 1 .
Therefore:
|S|/n > ℓ/(|W'∖ A_S| + |W∩ A_S|+1).
Joining (<ref>) and (<ref>) we obtain a contradiction, which completes the first part of the proof.
Now we prove that if the feasibility constraints are not a matroid, then PAV violates EJR. Suppose that for the given constraints, there exist nonempty sets X, Y∈ such that |X| < |Y| and X∪{c}∉ for all c∈ Y∖ X. Then we have that X⊈ Y and |Y∖ X| ≥ 2. Among all such witnesses, consider one that first minimizes |X| and second minimizes |Y∖ X|. Denote ℓ = |X|.
Consider now the following construction. Let n be a multiple of 3· (ℓ+1). We have a group S_1 of ℓ/(ℓ+1)· n + 1 voters approving exactly X and a group S_2 of the remaining n/(ℓ+1) - 1 voters approving exactly Y∖ X.
Let us first show that S_1 deserves ℓ candidates. Indeed, for T=∅ this group can propose the feasible set X of size ℓ they jointly approve. On the other hand, if |T| ≥ 1, then we have:
|S_1|/n = ℓ/(ℓ+1) + 1/n > ℓ/(ℓ+1) ≥ ℓ/(ℓ+|T|).
Since S_1 deserves ℓ candidates, EJR may be satisfied only if all the candidates from X are elected. Then, directly from our assumptions, no candidate from Y ∖ X can be elected.
Now consider set X with one candidate c'∈ X removed. Then there exists c_1∈ Y∖ X such that (X∖{c'})∪{c_1}∈, since otherwise (X∖{c'}, Y) would be a smaller witness. Similarly, we conclude that there exists also c_2 ∈ Y∖ X, c_2 ≠ c_1 such that (X∖{c'})∪{c_1, c_2}∈, since otherwise ((X∖{c'})∪{c_1}, Y∖{c_1}) would be a smaller witness.
Note that all the candidates outside of X ∪ Y contribute 0 PAV points to the final score, hence if EJR is satisfied, the PAV score of the winning outcome cannot be higher than the score of the outcome X. However, the score of the outcome (X∖{c'})∪{c_1, c_2} is higher—compared to X, every voter from S_1 loses her ℓth approved candidate (namely c'), but every voter from S_2 gains two representatives instead. Hence:
PAV-score((X∖{c'})∪{c_1, c_2}) - PAV-score(X) ≥ (1+1/2) · |S_2| - |S_1|/ℓ
= 3/2·n/(ℓ+1) - 3/2 - n/(ℓ+1) + 1/ℓ
> 1/2·n/(ℓ+1) - 3/2 ≥ 0
The last inequality comes from the assumption that n ≥ 3· (ℓ+1). Hence, X is not elected by PAV and EJR is violated. This completes the second part of the proof.
Let us now interpret <Ref>. Intuitively, it says that PAV gives strong proportionality guarantees if all the candidates carry the same weight in the feasibility constraints. An example type of elections where PAV fails EJR is participatory budgeting <cit.>, where different candidates might have different costs, and so some candidates can exploit the feasibility constraints to a higher extent than the others. We will discuss such types of constraints in more detail in <Ref>.
§ PRICEABLE OUTCOMES
In this section we take a different approach to designing proportional selection rules. It is based on the idea of priceability <cit.>, which can be intuitively described as follows: the voters are initially endowed with some fixed amount of virtual money; this money can be spent only on buying the candidates. The voters prefer to buy those candidates for which they are asked to pay the least; typically these are candidates who have higher support, as for such candidates more voters are willing to participate in a purchase. The outcome consists of the purchased candidates. This approach has already proved useful in the design of selection rules for committee elections and participatory budgeting <cit.>. We start by describing a rule that implements this idea in a sequential manner.
§.§ Phragmén's Sequential Method
In this section we define a natural extension of the Phragmén's Sequential Method <cit.> (a method known from the setting of committee elections) to the model with general constraints.
We start with an empty outcome W = ∅. The price for each candidate c is 1 dollar; this cost needs to be covered by the supporters of c if c is selected. Voters earn money continuously at a constant speed (say, 1 dollar per second). At each time moment, when a group of supporters of some candidate c has 1 dollar in total, c is purchased: we set W W ∪{c} and reset the budgets of the voters from N(c) to 0. After that, we remove from the election all the candidates c' such that W∪{c'}∉ and continue the procedure until all the candidates are either purchased or removed.
While the definition assumes that time and money are continuous, the exact moments of purchasing next candidates can be computed in polynomial time. It is known that in the context of committee elections the Phragmén's sequential method fails EJR <cit.>, but nevertheless it has very good properties pertaining to proportionality <cit.>. In particular, it satisfies the axiom of Proportional Justified Representation (PJR) <cit.>, a weaker variant of EJR. We will show that the method preserves its good properties as long as the constraints have a matroid structure.
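Indeed, the continuous process can be simulated event by event. The following sketch (not from the paper, ignoring tie-breaking subtleties, and assuming a feasibility oracle as in the earlier snippets) keeps one virtual budget per voter, repeatedly finds the candidate whose supporters will first jointly hold one dollar among the candidates that can still be feasibly added, and resets the supporters' budgets after each purchase.

def phragmen(ballots, candidates, is_feasible):
    """Event-driven sketch of the generalized Phragmén Sequential Method.
    Candidates without supporters are never bought."""
    n = len(ballots)
    budgets = [0.0] * n
    W = frozenset()
    supporters = {c: [i for i, A in enumerate(ballots) if c in A]
                  for c in candidates}
    while True:
        available = [c for c in candidates
                     if c not in W and supporters[c] and is_feasible(W | {c})]
        if not available:
            return W
        def wait(c):
            # Time until the supporters of c jointly hold one dollar.
            missing = 1.0 - sum(budgets[i] for i in supporters[c])
            return max(0.0, missing) / len(supporters[c])
        c_star = min(available, key=wait)
        dt = wait(c_star)
        budgets = [b + dt for b in budgets]   # every voter earns at unit speed
        for i in supporters[c_star]:          # the supporters pay and are reset
            budgets[i] = 0.0
        W = W | {c_star}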
We start by generalizing the axiom of Proportional Justified Representation to the case of general constraints.
We say that a feasible outcome W ∈ of an election E = (C, N, ) satisfies proportional justified representation (PJR) if for each ℓ∈ and each group of voters S⊆ N that deserves ℓ candidates (according to <Ref>) there are at least ℓ candidates from ⋃_i∈ S A_i in W.
We will now prove that Phragmén's Sequential Method satisfies PJR if and only if the feasibility constraints have a matroid structure.
Phragmén's Sequential Method satisfies PJR for elections with matroid constraints. For each non-matroid feasibility constraints there is an election where the method fails PJR.
We start by proving the first part of the theorem statement. Consider a group of voters S⊆ N deserving ℓ candidates. For the sake of contradiction, assume the method selects fewer than ℓ candidates from ⋃_i∈ SA_i. Consider the first moment, t, when all the candidates from ⋂_i∈ S A_i are either elected or removed. Note that t ≤ℓ/|S|. Indeed, if t > ℓ/|S|, then at time ℓ/|S| the group S would collect in total ℓ dollars, and would buy at least ℓ candidates from ⋃_i∈ S A_i (the possibility of buying such candidates comes from the fact that there would always be a candidate from ⋂_i∈ S A_i available for purchase).
Let W denote an outcome purchased at time t. The voters from N ∖ S could spend at most (n-|S|)·ℓ/|S| = n·ℓ/|S|-ℓ dollars on candidates from set T = W∖⋃_i∈ S A_i. In particular, as the price for all the candidates is 1, it means that:
|T| ≤n·ℓ/|S|-ℓ.
Now we need to consider two cases. First, suppose that there is no set X⊆⋂_i∈ S A_i of size ℓ such that T ∪ X ∈. Then, as S deserves ℓ candidates, the following inequality holds:
|S|/n > ℓ/(|T| + ℓ) ≥ ℓ/(n·ℓ/|S| - ℓ + ℓ) = |S|/n,
a contradiction. Hence, there exists set X⊆⋂_i∈ S A_i of size ℓ such that T ∪ X ∈.
Since we assumed that |W ∩⋃_i∈ S A_i| < ℓ, we infer that W has strictly smaller size than T∪ X. Now we can apply the exchange property <ref> to sets W and T∪ X to obtain that there exists a candidate c∈ X ∖ W such that W ∪{c}∈. But then we have that c∈⋂_i∈ S A_i and c was neither elected (since c ∉ W) nor removed (since W ∪{c}∈). We obtain a contradiction.
Now we prove that if the feasibility constraints are not matroid constraints, then Phragmén's Sequential Method violates PJR. Suppose that for the given constraints, there exist nonempty sets X, Y∈ such that |X| < |Y| and X∪{c}∉ for all c∈ Y∖ X. Then it holds that X⊈ Y and so |Y∖ X| ≥ 2. Denote ℓ = |Y|.
Consider the following construction. All the voters approve the candidates from X. Additionally, we have a group S of ℓ/(ℓ+1)· n + 1 voters, each of whom additionally approves Y.
Let us first show that S deserves ℓ candidates. Indeed, for T=∅ this group can propose the feasible set Y of size ℓ they jointly approve. Otherwise, if |T| ≥ 1:
|S|/n = ℓ/(ℓ+1) + 1/n > ℓ/(ℓ+1) ≥ ℓ/(ℓ+|T|).
Note that Phragmén's Sequential Method first elects all the unanimous candidates, namely X. However, after that no candidates from Y∖ X can be elected. Hence, S would get only at most |X| < ℓ representatives, which completes the proof.
While Phragmén's Sequential Method does not satisfy EJR, it still guarantees that the voters from a group deserving ℓ candidates have high utility on average—in fact as high as would be implied by EJR (cf. <Ref>). This result also holds for the general class of matroid constraints. We prove it by combining the ideas from <Ref> and from the work of <cit.>. This result, together with the fact that Phragmén's method can be computed in polynomial time, makes the rule particularly appealing and practical.
<Ref> generalises the recent result by <cit.>, proved in the context of sequential decision making.
Let W be the outcome returned by Phragmén's Sequential Method for an election with matroid constraints. For each group of voters S that deserves ℓ candidates we have:
1/|S|·∑_i ∈ S |A_i ∩ W| ≥ (ℓ-1)/2.
Consider a group of voters S⊆ N deserving ℓ candidates. Towards a contradiction assume that Inequality (<ref>) does not hold.
We first define the time t as follows:
t = ℓ/|S| + (Δ - 1)/n,
where Δ is the smallest non-negative value such that at t the voters from S have at most Δ unspent dollars (if such Δ does not exist, then we simply set Δ = 0). There are two possibilities:
* Either at t there was a purchase γ such that before the purchase the voters from S had at least Δ unspent dollars, and after the purchase they had at most Δ unspent dollars, or
* at t the voters from S had at least Δ unspent dollars.
The analysis in both cases is the same, thus without loss of generality let us assume that we are in the first case.
We will first prove that at t, before the purchase γ there was a candidate from ⋂_i∈ S A_i that was neither elected nor removed.
At t the voters collected in total tn dollars, and they have spent at most tn - Δ. Hence, they have bought the set W of at most t n - Δ candidates.
From W we remove in total ℓ - 1 candidates, and let us call the remaining set T; if W contained fewer than ℓ-1 candidate, then we simply set T = ∅. We can remove these ℓ - 1 candidates in such a way that T ∩⋂_i∈ S A_i = ∅ (this follows from the fact that the Inequality (<ref>) is not satisfied, and so there are fewer than ℓ-1 candidates in W ∩⋂_i∈ S A_i).
If there exists X ⊆⋂_i∈ S A_i such that X ∪ T ∈, then by the exchange property <ref> applied to X ∪ T and W we infer that there must exist a candidate c ∈⋂_i∈ S A_i such that W ∪{c}∈ (which is exactly what we wanted to prove). Otherwise, since S deserves ℓ candidates we get that:
|S|/n > ℓ/(|T| + ℓ).
In particular, this means that T ≠∅ and so |T| = |W| - ℓ + 1. Consequently, we get that:
|S|/n > ℓ/|W| + 1.
From that we get that
|W| > n ·(ℓ/|S| - 1/n) = nt - Δ.
This leads to a contradiction.
Thus, at time t there is a candidate from ⋂_i∈ S A_i available for purchase. We can now use exactly the same reasoning as the one provided in the work of <cit.>. There, using an argument involving potential functions, it was implicitly proved that at each time moment, as long as some candidate from ⋂_i∈ S A_i can be purchased, the voters from S pay on average at most 2/|S| per approved candidate. Thus, the average payment per approved candidate until time t was no greater than 2/|S|.
At time t the voters from S spent at least t|S| -Δ dollars in total. Let us assess this value:
t|S| - Δ = (ℓ/|S| + (Δ - 1)/n)·|S| - Δ = ℓ + |S|/n· (Δ - 1) - Δ ≥ ℓ - 1 .
The last inequality follows from the fact that Δ≤ 1 (otherwise, the money held by S would have been used earlier to buy a candidate from ⋂_i∈ S A_i).
Consequently, the voters from S approve on average at least the following number of candidates:
1/|S|·(ℓ-1)/(2/|S|) = (ℓ-1)/2.
This completes the proof.
So far our results applied to matroid constraints only. Interestingly, if the constraints are non-matroid, then Phragmén's Sequential Method still can be successfully applied—in the most general case it provides approximate proportionality guarantees (cf. <Ref>).
§.§ Stable Priceable Outcomes
In the previous section we have described the sequential process of buying candidates. Another approach would be to define the outcomes as an equilibrium in a certain market. This idea has been proposed by <cit.>, who introduced the concept of stable priceability, inspired by the classic concept of Lindahl's equilibrium <cit.>. In this section we adapt the concept to the setting with general constraints. It requires introducing a few additional elements that relate candidate prices and feasibility constraints.
Let us recall the definition of stable priceability <cit.>. Let π_c denote the price of a candidate c. Given a voter i ∈ N, the payment function p_i: C → ℝ specifies how much the voter pays for the particular candidates. We require that p_i(c) ≥ 0 for each c ∈ C and that ∑_c ∈ C p_i(c) ≤ 1; intuitively, this means that each voter has a total budget of one unit. We say that an outcome W is stable-priceable if there exists a collection of candidate prices {π_c}_c ∈ C and payment functions {p_i}_i ∈ N such that the following conditions hold:
* The voters pay only for the selected candidates, i.e. p_i(c) = 0 for each i ∈ N and c ∉ W.
* The total payment for each selected candidate c ∈ W must equal its price, ∑_i ∈ Np_i(c) = π_c.
* For each not-selected candidate c ∉ W we have:
∑_i ∈ N(c)max(r_i, max_c' ∈ W p_i(c')) ≤π_c where r_i = 1 - ∑_c' ∈ Wp_i(c').
This condition can be intuitively explained as follows. Each voter is primarily interested in buying as many approved candidates as possible. Secondarily, the voter is interested in spending as little money as possible. Thus, each voter is willing to pay for c either all her remaining money r_i or to stop paying for some already selected candidate c' and to pay the same or a lower amount for c instead. If the supporters of candidate c can pay its price this way, then the payment functions are not stable.
* The outcome W maximizes the total price:
W ∈_W' ∈∑_c ∈ W'π_c.
The last condition is new to this paper, and it corresponds to the concept of producer-stability from the economic literature on markets with public goods <cit.>. In the original definition of <cit.> the prices for all the candidates were required to be equal and hence, instead of the last condition, only exhaustiveness was required. In <Ref> we show that some of our results also hold if we replace condition <ref> with the condition of exhaustiveness.
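Given a candidate outcome together with prices and payment functions, the four conditions above can be verified mechanically. The sketch below is illustrative only; it reuses the brute-force enumeration helper from the model section, treats `payments[i]` as a dictionary mapping candidates to voter i's payments, and is unrelated to the ILP formulation mentioned later in this section.

def is_stable_priceable(W, prices, payments, ballots, candidates,
                        is_feasible, tol=1e-9):
    """Check the four stable-priceability conditions for outcome W."""
    n = len(ballots)
    def pay(i, c):
        return payments[i].get(c, 0.0)
    # (1) voters pay only for selected candidates
    if any(pay(i, c) > tol for i in range(n) for c in candidates if c not in W):
        return False
    # (2) each selected candidate is paid exactly its price
    if any(abs(sum(pay(i, c) for i in range(n)) - prices[c]) > tol for c in W):
        return False
    # (3) no unselected candidate can be bought by its supporters
    leftover = [1.0 - sum(pay(i, c) for c in W) for i in range(n)]
    for c in candidates:
        if c in W:
            continue
        total = sum(max(leftover[i], max((pay(i, d) for d in W), default=0.0))
                    for i in range(n) if c in ballots[i])
        if total > prices[c] + tol:
            return False
    # (4) W maximizes the total price among all feasible outcomes (brute force)
    best = max(sum(prices[c] for c in W2)
               for W2 in all_feasible_outcomes(candidates, is_feasible))
    return sum(prices[c] for c in W) >= best - tol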
Stable-priceable outcomes might not exist, but if they do, they have very good fairness-related properties.
For elections with matroid constraints all stable-priceable outcomes satisfy EJR.
For elections with general constraints each stable-priceable outcome such that all candidates c have a common price π_c = π satisfies EJR.
Let W be a stable-priceable outcome. Towards a contradiction assume that there is a group of voters S who deserve ℓ candidates, but each of them approves fewer than ℓ members of W.
We will first prove that there exists a not-selected candidate a ∈⋂_i ∈ S A_i ∖ W, the price of which satisfies the following inequality:
π_a < |S|/ℓ.
Let us start with the case of matroid constraints. We first consider the set W' of candidates from W that can be feasibly-exchanged with the candidates from ⋂_i ∈ S A_i ∖ W, that is:
W' = {c ∈ W: there exists c'∈⋂_i ∈ S A_i ∖ W such that (W∖{c})∪{c'}∈}.
Since the feasibility constraints have a matroid structure, from <Ref> we know that for each c'∈⋂_i ∈ S A_i ∖ W it holds that W' ∪{c'}∉. Next, from W' we remove all the candidates in ⋂_i ∈ S A_i. Clearly, we removed at most ℓ-1 candidates; if we removed strictly less than (ℓ - 1) candidates, then we additionally remove some arbitrary candidates so that we removed in total (ℓ-1) candidates. Let us denote the resulting set as T. Note that for each X ⊆⋂_i ∈ S A_i with |X| = ℓ it holds that T ∪ X ∉. This follows directly from the exchange property <ref> applied to X ∪ T and W'. Since S deserves ℓ candidates we get that:
|S|/n > ℓ/|T| + ℓ = ℓ/|W'| + 1.
After reformulating, we get that:
|W'| > n ·ℓ/|S| - 1.
Let a be the cheapest candidate in ⋂_i ∈ S A_i ∖ W.
Condition <ref> in the definition of stable-priceability implies that for each candidate c ∈ W' we have π_c ≥π_a. Consequently:
∑_c ∈ W'π_c > π_a ·(n ·ℓ/|S| - 1).
Since ∑_c ∈ W'π_c ≤ n - ∑_i ∈ Sr_i (this follows from condition <ref>) we get that:
n - ∑_i ∈ Sr_i > π_a ·(n ·ℓ/|S| - 1) .
By condition <ref> we also get that
∑_i ∈ Sr_i ≤π_a .
By combining the two above inequalities we get that:
n > π_a n ·ℓ/|S|.
This is equivalent to:
π_a < |S|/ℓ.
Now, consider the case where the feasibility constraints are arbitrary, but the all the prices are the same, that is for all candidates c we have π_c = π. Now, we proceed as follows.
From W we remove all the candidates from ⋂_i ∈ S A_i and some arbitrary additional candidates so that in total we removed ℓ-1 candidates. Let us denote the resulting set by T.
Condition <ref> in the definition of stable-priceability implies that for any set X ⊆⋂_i ∈ S A_i with |X| = ℓ, T ∪ X ∉. Since S deserves ℓ candidates we get that:
|S|/n > ℓ/(|T| + ℓ) = ℓ/(|W| + 1).
After reformulating, we get that:
|W| > n ·ℓ/|S| - 1.
Since π·|W| ≤ n - ∑_i ∈ Sr_i (this follows from condition <ref>) we get that:
n - ∑_i ∈ Sr_i > π n ·ℓ/|S| - π.
The remaining part of the proof follows exactly the same way as in the case of matroid constraints. Thus, in either case, we get that there is a candidate a ∈⋂_i ∈ S A_i ∖ W such that:
π_a < |S|/ℓ.
Since each voter i ∈ S approves strictly fewer than ℓ candidates in W, we infer that:
max(r_i, max_c ∈ W p_i(c)) ≥1/ℓ.
Thus, from condition <ref> in the definition of stable-priceability we get that:
1/ℓ· |S| ≤π_a.
This gives a contradiction and completes the proof.
The condition for stable-priceability can be easily written as an Integer Linear Program with the number of integer variables bounded by the number of candidates. Further, the ILP can be naturally relaxed so that it finds outcomes that are “closest to” stable-priceable. This makes the approach applicable to elections of moderate size.
§ EXTENSIONS OF THE MODEL
In this section we consider two extensions of our model. We first discuss the case where the preferences of the voters are expressed as general monotone set functions. Second, we discuss certain limitations of our concepts in the case when the candidates carry different weights in the feasibility constraints; we explain how to adapt our concepts to this case.
§.§ General Monotone Utility Functions
In this section, we formulate a stronger version of <Ref> that is still satisfiable. This definition applies much beyond approval ballots. Here, we assume that for each voter i we have a utility function u_i: 2^C → ℝ that for each subset of candidates returns a real value. Intuitively, u_i(W) quantifies the level of satisfaction of voter i provided W is selected. We only assume that u_i is monotone, that is, for all X ⊆ Y ⊆ C it holds that u_i(Y) ≥ u_i(X).
Now, we can formulate the axiom of fully justified representation, which generalizes the respective axiom from the literature on committee elections <cit.>.
Given an election E = (C, N, ) we say that a group of voters S ⊆ N is (α, β)-cohesive, α,β≥ 0, if for each feasible set T ∈ there exists X ⊆ C with |X| = α such that u_i(X) ≥β for each i ∈ S and that at least one of the following two conditions is satisfied:
T ∪ X ∈ or |S|/n > α/(|T| + α).
We say that a feasible outcome W ∈ of an election E = (C, N, ) satisfies fully justified representation (FJR) if for each α, β∈ and each (α, β)-cohesive group of voters S⊆ N there exists a voter i ∈ S such that u_i(W) ≥β. A selection rule satisfies FJR if for each election E it returns outcomes satisfying FJR. ⌟
<Ref> is strictly stronger than <Ref>: indeed we obtain the definition of EJR if we additionally require that α = β. Intuitively, in the definition of FJR a group of voters S deserves the utility of β if for each T they can find a set X of (not too large) size α on which they agree that it has the value of at least β. In the definition of EJR, the voters from S must have a stronger agreement; they all must unanimously support every candidate from X.
This definition is still satisfiable.
For each election with monotone utilities there exists an outcome satisfying fully justified representation.
Given an election E=(C, N, ) we first define the procedure of partitioning voters. In each round r we search for the largest value β_r ≥ 0 for which there exists an (α_r, β_r)-cohesive group, α_r ≥ 0; if there are ties, we first prefer a cohesive group with a smaller value of α_r. We pick one such group, call it S_r, and remove the voters from S_r from further consideration. We repeat the procedure until all the voters are removed (note that every non-empty group of voters is (0, 0)-cohesive, and so the procedure will stop). Thus, we partitioned the set of voters into disjoint sets S_1, S_2, …, S_p-1, S_p.
We will show now that for each group of voters S_r, r ∈ [p], we can select a set of candidates W_r with |W_r| ≤α_r such that
* each set W_r gives the voters from S_r the utility of at least β_r (that is u_i(W_r) ≥β_r for each i ∈ S_r), and
* the set W_1 ∪ W_2 ∪…∪ W_p is feasible.
We first fix the number of voters n. We will show the above statement by induction on the number of active voters; a voter is active if she assigns a positive utility to some subset of candidates. Clearly, if all the voters are inactive then the inductive hypothesis holds, which is witnessed by an empty subset of candidates. Now, assume that the hypothesis holds if the number of active voters is strictly lower than n'. We will show that it holds also for n'.
Consider any set S_r∈{S_1, …, S_p} and consider a modified election E' in which we replace each voter from S_r with an inactive voter. Note that except for S_r the partitioning algorithm would return the same groups S_1, S_2, …, S_r-1, S_r+1, …, S_p as for E.
From our inductive assumption, there exist sets W_1, W_2, …, W_r-1, W_r+1, …, W_p such that their union W' = W_1 ∪…∪ W_r-1∪ W_r+1∪…∪ W_p is feasible and all voters from S_i get at least the utility of β_i from W_i.
For T=W', since S_r is (α_r, β_r)-cohesive, we know that there exists X∈ with |X| = α_r that gives S_r the utility of β_r such that X ∪ T ∈ or
|S_r|/n > α_r/(|T| + α_r).
If X ∪ T ∈ then we set W_r = X; we additionally give empty sets to inactive voters, and we are done. Otherwise, we have that:
|S_r|/n > α_r/(|T| + α_r) ≥ α_r/(α_1 + … + α_p).
We repeat this reasoning for each r ∈ [p], and get that:
1 = (|S_1| + … + |S_p|)/n > (α_1 + … + α_p)/(α_1 + … + α_p) = 1,
a contradiction. Hence, there exist sets W_1, …, W_p that satisfy our conditions.
Now, consider an election E=(C, N, ), and let W_1, W_2, …, W_p be constructed as above. We take W = W_1 ∪ W_2 ∪…∪ W_p.
It remains to prove that W satisfies FJR. For the sake of contradiction, assume that there exists an (α, β)-cohesive group S such that for each voter i ∈ S we have u_i(W) < β. Consider the first step in the process of partitioning the voters in which some voter i ∈ S was deleted. It was deleted as a part of some (α_r, β_r)-cohesive group S_r. Since during the partitioning we always pick the group with the highest β_r first, we know that β_r ≥ β. From the construction of W and monotonicity we know that u_i(W) ≥ u_i(W_r) ≥ β_r ≥ β. This is a contradiction and completes the proof.
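For tiny instances the partitioning step of this proof can be carried out mechanically. The sketch below reuses the brute-force is_cohesive checker from the earlier snippet (approval utilities, explicitly enumerated feasible sets); selecting the witness sets W_r follows the inductive argument above and is not reproduced here. The loop bounds and the tie-breaking order are our reading of the procedure.

from itertools import combinations

def partition_voters(approvals, feasible, candidates, n):
    # Greedy partitioning from the proof: in each round extract a group that is
    # (alpha_r, beta_r)-cohesive for the largest beta_r, preferring a smaller
    # alpha_r; every non-empty group is (0, 0)-cohesive, so the loop terminates.
    remaining, rounds, m = set(range(n)), [], len(candidates)
    while remaining:
        best = None                              # (beta_r, alpha_r, group S_r)
        for beta in range(m, -1, -1):
            for alpha in range(0, m + 1):
                for size in range(len(remaining), 0, -1):
                    for S in combinations(sorted(remaining), size):
                        if is_cohesive(S, approvals, feasible, candidates, n, alpha, beta):
                            best = (beta, alpha, set(S))
                            break
                    if best:
                        break
                if best:
                    break
            if best:
                break
        rounds.append(best)
        remaining -= best[2]
    return rounds                                # list of (beta_r, alpha_r, S_r)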
Now let us investigate the relation between FJR and stable priceability. In the committee setting, stable priceability implies the core <cit.>, which is a stronger axiom than FJR. However, <Ref> shows that this is no longer the case in our general model, even if we assume matroid constraints.
Stable Priceability with different prices does not imply FJR for approval committee elections with disjoint attributes.
Consider an instance of approval committee elections with disjoint attributes, E = (C, N, ), such that C = C_1 ⊔ C_2 (candidates are split into two separate groups) and |C_1| ≥ 41, |C_2| ≥ 50. Feasible sets contain at most 40 candidates from the first group and at most 40 from the second group. There are three disjoint groups of voters: group V_1 of 3n/5 voters approving all the candidates from C_1, group V_2 of n/3 voters approving all the candidates from C_2, and group S of n/15 voters approving some 5 candidates A={a_1, …, a_5} from C_1. Besides, half of the voters from S approve some 5 candidates B={b_1, …, b_5} from C_2 and the other half approve 5 different candidates E={e_1, …, e_5} from C_2.
Consider now an outcome W containing the 4 candidates from A∖{a_5}, 36 other candidates from C_1∖{a_5}, and 40 candidates from C_2 ∖ (B ∪ E). We will first show that this outcome is stable priceable. Let the price for the candidates from C_1 be π_1=n/60 and the price for the candidates from C_2 be π_2=n/120. It is clear that with such prices, outcome W (and every other outcome with 40 candidates from C_1 and 40 candidates from C_2) satisfies <ref>.
Voters from S spend all their money on the candidates from A∖{a_5} (indeed, 4·n/60 = n/15 = |S|); the remaining voters spend their money on their approved candidates from W in any way such that each elected candidate is paid her price (this is possible, since 36·n/60 = 3n/5 = |V_1| and 40·n/120 = n/3 = |V_2|). Now we can see that W is stable: indeed, voters from V_1 and V_2 have no possibility to improve their satisfaction from the committee. Voters from S do not have enough money to buy the fifth candidate from A. Besides, even after resigning from paying for A∖{a_5}, they do not have enough money to improve their satisfaction by buying candidates from B ∪ E.
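The bookkeeping above can be double-checked numerically; the snippet below instantiates n = 120 (our choice, so that all quantities are integers) and uses the convention that each voter holds one dollar, so a group's budget equals its size.

from fractions import Fraction

n = 120                                       # any multiple of 120 works
S, V1, V2 = n // 15, 3 * n // 5, n // 3       # 8, 72, and 40 voters
pi1, pi2 = Fraction(n, 60), Fraction(n, 120)  # prices 2 and 1

assert S + V1 + V2 == n                       # the three groups partition the electorate
assert 4 * pi1 == S                           # S pays exactly for the four candidates of A ∖ {a_5}
assert 36 * pi1 == V1                         # V_1 pays for the other 36 candidates from C_1
assert 40 * pi2 == V2                         # V_2 pays for the 40 candidates from C_2 ∖ (B ∪ E)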
We will now show that in any committee satisfying FJR, some member of group S should get at least 5 representatives.
Consider any committee T∈. If T contains fewer than 36 candidates from C_1 then clearly group S can propose a committee X=A satisfying X∪ T∈. If T contains some 35+x candidates from C_1 for x∈ [5] but fewer than 41-2x candidates from C_2 then voters from S can propose a committee X containing 5-x candidates from A, x candidates from B and x candidates from E satisfying X∪ T∈. Consider now a committee T containing some 35+x candidates from C_1 (for x∈ [5]) and at least 41-2x candidates from C_2 (hence |T| ≥ 76-x ≥ 71). Now consider X=A. We have that:
(|X|/(|T|+|X|))· n ≤ (5/(71+5))· n < n/15 = |S|,
which shows that S indeed deserves 5 representatives in any committee satisfying FJR. Since every voter from S approves only the four candidates of A∖{a_5} in W, the outcome W violates FJR, which completes the proof.
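The two numeric facts used in the last case can be verified directly (a quick sanity check, not part of the argument):

from fractions import Fraction

# |T| ≥ 76 - x ≥ 71 whenever T has 35 + x candidates from C_1 and at least 41 - 2x from C_2
for x in range(1, 6):
    assert (35 + x) + (41 - 2 * x) == 76 - x and 76 - x >= 71

# the displayed inequality: 5/(71 + 5) < 1/15 = |S|/n
assert Fraction(5, 71 + 5) < Fraction(1, 15)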
§.§ Weighted Candidates
It is worth noting that our definitions and the analysis so far implicitly assumed that all the candidates are treated equally, irrespective of their impact on the feasibility constraints. Specifically, in Inequality (<ref>) in <Ref> we were only concerned with the number of candidates in the set T; however, some of these candidates can restrict the feasible solutions much more than others. The classic model where this is the case is that of participatory budgeting (PB) <cit.>: there, the candidates have weights, and there is a single constraint specifying that the total weight of the selected candidates is at most a given budget value. In such cases it might be justified to include the weights of the candidates in the definitions of the axioms, as is done in the work of <cit.>.
In this section we consider the following addition to the original model. For each candidate c ∈ C we are given a weight w(c) ∈ ℝ_+; for a subset of candidates W ⊆ C we let w(W) = ∑_c ∈ W w(c). Intuitively, the weights of the candidates should in some way correspond to the feasibility constraints, however this is not formally required. In this case, we write the definition of Extended Justified Representation as follows.
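For concreteness, the participatory-budgeting constraint mentioned above can be expressed as a feasibility predicate over outcomes; the helper name and the toy numbers below are ours.

def pb_feasibility(weight, budget):
    # Single knapsack-style constraint: an outcome W is feasible iff its
    # total weight does not exceed the budget.
    return lambda W: sum(weight[c] for c in W) <= budget

# Example: three projects with weights 2, 3, 5 and a budget of 6.
is_feasible = pb_feasibility({"a": 2, "b": 3, "c": 5}, budget=6)
assert is_feasible({"a", "b"}) and not is_feasible({"b", "c"})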
Given an election E = (C, N, ) we say that a group of voters S ⊆ N is (α, β)-strongly cohesive, α,β≥ 0, if for each feasible set T ∈ there exists X ⊆⋂_i ∈ S A_i with w(X) ≤α and |X| ≥β such that at least one of the following two conditions is satisfied:
T ∪ X ∈ or |S|/n > α/(w(T) + α).
We say that a feasible outcome W ∈ of an election E = (C, N, ) satisfies extended justified representation (EJR) if for all α, β ≥ 0 and each (α, β)-strongly cohesive group of voters S⊆ N there exists a voter i ∈ S such that |W ∩ A_i| ≥β.
We analogously extend the definitions of fully justified representation (FJR) and proportional justified representation (PJR) to the case of weighted candidates. We will also say that a group of voters S deserves β candidates if this group is (α, β)-strongly cohesive for some α.
It is known that in the model with weights Proportional Approval Voting (PAV) fails EJR and PJR, even in approximation <cit.>. The approach based on priceability, on the other hand, provides more positive results. For Phragmén's Sequential Method it suffices to assume that the costs of the candidates that need to be paid by the voters equal their weights. We will show that while the rule defined this way fails all our axioms (PJR, and consequently EJR and FJR), it provides certain approximate guarantees. The method provides particularly strong guarantees to small cohesive groups. For instance, a cohesive group whose size is 10% of the population is guaranteed roughly a 0.9-fraction of its PJR guarantee. It is worth noting that the approximation result also works for non-matroid constraints.
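Below is a sketch of this weighted variant of the rule, in which prices equal weights: voters earn money at unit rate, and the first not-yet-selected candidate whose supporters can jointly cover its weight is bought, provided it can be feasibly added, after which the supporters' budgets are reset. The function signature, the tie-breaking, and the treatment of currently infeasible candidates as skipped (rather than permanently removed) are our implementation choices; weights should be exact numbers (integers or Fractions).

from fractions import Fraction

def seq_phragmen_weighted(candidates, weight, approvers, is_feasible, n):
    # approvers[c] is the set of voters approving candidate c; is_feasible is a
    # predicate on sets of candidates; n is the number of voters.
    budget = {i: Fraction(0) for i in range(n)}
    W = set()
    while True:
        best = None                              # (extra time needed, candidate)
        for c in candidates:
            if c in W or not approvers[c] or not is_feasible(W | {c}):
                continue
            held = sum(budget[i] for i in approvers[c])
            extra = max(Fraction(0), (Fraction(weight[c]) - held) / len(approvers[c]))
            if best is None or (extra, str(c)) < (best[0], str(best[1])):
                best = (extra, c)
        if best is None:                         # no supported candidate can be feasibly added
            return W
        extra, c = best
        for i in budget:                         # time advances for every voter
            budget[i] += extra
        for i in approvers[c]:                   # supporters pay for c; their budgets reset
            budget[i] = Fraction(0)
        W.add(c)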
For weighted candidates Phragmén's Sequential Method selects an outcome W such that for each group of voters S ⊆ N deserving β candidates we have:
|W ∩(⋃_i∈ S A_i) | ≥⌊β·(n - |S|)/n⌋.
Consider an (α, β)-strongly cohesive group of voters S⊆ N. Towards a contradiction, assume that Phragmén's Sequential Method selects fewer than ⌊β·(n - |S|)/n⌋ candidates from ⋃_i∈ S A_i. Note that during the execution of Phragmén's method, as long as a candidate c ∈⋂_i∈ S A_i is neither removed nor selected, the voters from S will pay no more than w(c) in total for any candidate (as otherwise they would prefer to buy c). Let t be the first moment when at least one candidate has been removed from each subset A ⊆⋂_i∈ S A_i with |A| ≥β and w(A) ≤α. Note that t ≤ ((n - |S|)/n)·(α/|S|). Indeed, if t > ((n - |S|)/n)·(α/|S|), then at time ((n - |S|)/n)·(α/|S|) the group S would have collected in total α·(n - |S|)/n dollars. Further, at this time moment there would still be a set A ⊆⋂_i∈ S A_i with |A| ≥β and w(A) ≤α such that all candidates from A would be either bought or available for being bought. Thus, the voters from S would buy at least ⌊β·(n - |S|)/n⌋ candidates.
Let W denote the outcome purchased by time t. Since the voters could have spent at most nt dollars on candidates from W, we get that:
w(W) ≤ n·(α/|S|)·((n - |S|)/n).
Since S is (α, β)-strongly cohesive, the following inequality holds:
|S|/n > α/(w(W) + α) ≥ α/(n·(α/|S|)·((n - |S|)/n) + α) = |S|/n,
a contradiction. This completes the proof.
For weighted candidates Phragmén's Sequential Method may fail PJR.
Consider the following election. The candidates are divided into 100 disjoint groups, C_1, C_2, …, C_100. In each group C_i there are four candidates: a_i with the cost equal to 2 + ϵ and b_i, c_i, d_i with the costs equal to 1, each. The feasibility constraints are the following: for each group C_i the total cost of the candidates selected from C_i cannot exceed 3. Thus, from each group C_i we can select either a_i or b_i, c_i, and d_i.
Consider a group S consisting of (50-ϵ)% of the voters. The voters from S all approve all b-, c-, and d-candidates. Additionally, all the voters (including those from S) approve all a-candidates. It is straightforward to check that Phragmén's Sequential Method will select a-candidates only, 100 candidates in total. However, we will show that the group S is (120, 120)-strongly cohesive.
Indeed, if T contains at most 60 a-candidates, then the group S can easily point to 120 candidates (the b-, c-, and d-candidates of the groups whose a-candidate is missing from T) that together with T make a feasible set. Otherwise, T contains at least 61 a-candidates, so w(T) > 122, and for sufficiently small ϵ it holds that:
|S|/n > 120/(122 + 120) > 120/(w(T) + 120).
Thus, under PJR at least 120 candidates approved by the voters from S should be selected, while the rule selects only 100 candidates in total. Hence, PJR is failed.
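The behaviour of the rule on this instance can be reproduced with the seq_phragmen_weighted sketch given earlier; the snippet below uses a scaled-down variant with 4 groups instead of 100, n = 100 voters, |S| = 49, and ϵ = 1/100 (our instantiation), which already suffices to see that only a-candidates are bought.

from fractions import Fraction

groups = range(4)
n, eps = 100, Fraction(1, 100)
S = set(range(49))                               # just under half of the voters
weight, approvers = {}, {}
for g in groups:
    weight[f"a{g}"] = 2 + eps
    approvers[f"a{g}"] = set(range(n))           # everyone approves the a-candidates
    for name in (f"b{g}", f"c{g}", f"d{g}"):
        weight[name] = Fraction(1)
        approvers[name] = set(S)                 # only S approves b, c, and d

def per_group_feasible(W):
    # at most a total weight of 3 may be selected from each group
    return all(sum(weight[c] for c in W if c.endswith(str(g))) <= 3 for g in groups)

W = seq_phragmen_weighted(list(weight), weight, approvers, per_group_feasible, n)
print(sorted(W))                                 # ['a0', 'a1', 'a2', 'a3']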
For stable-priceability a few more adaptations need to be made:
* The assumption that the prices of the candidates are all equal would now correspond to the assumption that the prices are proportional to the candidates' costs.
π_c/w(c) = π_c'/w(c') for all c, c' ∈ C.
* Further, condition <ref> might be too restrictive (especially if the prices of the candidates are fixed). Indeed, such a condition could by itself imply a unique outcome, independently of the voters' preferences. Thus, we propose to replace it with the condition of exhaustiveness: for each c ∉ W we have W ∪{c}∉.
Let W be a stable-priceable outcome for weighted candidates, assuming prices are proportional to the candidates' costs and the exhaustiveness of W. Then, for each group of voters S ⊆ N deserving β candidates there exists a voter i ∈ S with:
|W ∩ A_i| ≥β·(n - |S|)/n.
Let W be a stable-priceable outcome. Consider an (α, β)-strongly cohesive group of voters S⊆ N, and towards a contradiction, assume that each voter from S approves fewer than β·(n - |S|)/n candidates from W.
Since W is exhaustive, and since S is (α, β)-strongly cohesive, we get that:
|S|/n > α/(w(W) + α).
After reformulating, we get that:
w(W) > α·(n - |S|)/|S|.
Note that there exists c ∈ W such that:
w(c)/π_c ≥ w(W)/π_W.
Thus, we get that:
w(c)/π_c > (α/π_W)·((n - |S|)/|S|) ≥ (α/n)·((n - |S|)/|S|).
The above inequality must also hold for each c' ∈ C, since the prices are proportional to the costs. Further, since there exists X ⊆⋂_i ∈ S A_i with |X| ≥β and w(X) ≤α, we get that:
α≥∑_c ∈ X w(c) > ∑_c ∈ Xπ_c · (α/n)·((n - |S|)/|S|) = π_X · (α/n)·((n - |S|)/|S|).
This is equivalent to:
π_X < n|S|/(n - |S|).
Let z = |X ∩ W| and let π_z denote the total price of the candidates from X ∩ W paid by the voters from S.
From condition <ref> in the definition of stable-priceability we get that for each c ∈ X ∖ W we have:
π_c ≥ (|S| - π_z)/(β·(n - |S|)/n - z).
By summing the above inequality we get:
∑_c ∈ Xπ_c ≥ (β - z)·(|S| - π_z)/(β·(n - |S|)/n - z) > ((β - z)/(β·(n - |S|)/n - z))·|S| - π_z.
Consequently, we get that:
π_X ≥ ((β - z)/(β·(n - |S|)/n - z))·|S| ≥ (β/(β·(n - |S|)/n))·|S| = n|S|/(n - |S|).
This gives a contradiction and completes the proof.
§ CONCLUSION
We have considered a general model of social choice, where the structure of the output is given through feasibility constraints. The feasibility constraints allow one to encode different types of elections (e.g., single-winner and multi-winner elections, participatory budgeting, judgment aggregation, etc.). We have proposed a new technique for extending classic notions of proportionality to the general model of social choice with feasibility constraints. This way we have defined the axioms of justified representation in the model with constraints. Our technique also allows us to extend other notions of fairness such as the core (see <Ref>).
Our strongest notion of proportionality, fully justified representation, is always satisfiable, even for general monotone utility functions. We further show that natural adaptations of two committee election rules, Proportional Approval Voting and Phragmén's Sequential Method, satisfy strong notions of proportionality if and only if the feasibility constraints are matroids. Phragmén's Sequential Method additionally provides a good approximation of some of our notions of fairness for non-matroid constraints. This makes the rule suitable for elections of different types and structures. We have also generalised the concept of stable-priceability to the case of elections with constraints.
There are several pressing open questions. First, our work mainly focuses on approval ballots and corresponding dichotomous utility functions; many applications, however, require dealing with more generic utility functions. Specifically, we are interested in the following two questions:
* Can we meaningfully define Phragmén's Sequential Method for additive utility functions, so that the rule preserves its most compelling properties?
* Can we define the Method of Equal Shares for elections with constraints? Answering the second question seems challenging. The main difficulty lies in the fact that we do not know how to set the prices of the candidates. If they are set too high, then some groups of voters might not be able to afford to buy enough supported candidates. If we set them too low, then the groups might be left with money which cannot be used for buying candidates without breaking feasibility constraints.
It is further important to check how the considered rules perform on real and synthetic data.
The setting with weighted candidates also remains mostly unexplored. This setting is particularly important, since it models the increasingly popular process of participatory budgeting. Can we define an analogue of matroid constraints for weighted candidates? This seems plausible given that the exchange property seems to be naturally adaptable to weights. Can we design rules that satisfy the strong proportionality axioms for such constraints?
§ COMPARISON WITH THE DEFINITION OF <CIT.>
In this section we prove that our definition of EJR is strictly more general than the definition of Restrained EJR given in the work of <cit.> (Definition 3.1):
We are given a set of feasible committees of size at most k. A committee W ∈ satisfies restrained-EJR if there is no constraint-feasible
blocking coalition S ⊆ N of voters. Such a blocking coalition with endowment k' = ⌊|S|/n · k⌋ satisfies the following: For all k'-completable committees Ŵ⊆ W with |Ŵ| ≤ k - k', there exists W' with |W'| ≤ k' such that
* T = Ŵ∪ W' ∈, and
* For all i ∈ S we have that |(⋂_i∈ S A_i) ∩ T| ≥max_i ∈ S u_i(W) + 1. ⌟
Assume that a committee W does not satisfy Restrained EJR. Consider a blocking coalition S. Let ℓ = max_i ∈ S u_i(W) + 1. From the condition on the blocking coalition applied to Ŵ = ∅ we infer that ℓ≤ k'. We will prove that S according to our definition of EJR deserves ℓ candidates. Consider a set T ∈. If |T| > k - k', then we have:
ℓ/(ℓ + |T|) ≤ k'/(k' + |T|) < k'/(k' + k - k') = k'/k ≤ |S|/n.
Thus, Condition (<ref>) from <Ref> is satisfied. If |T| ≤ k - k' then from the condition on the blocking coalition S applied to Ŵ = T we infer that there exists W' such that Ŵ∪ W' ∈ and |⋂_i∈ S∩ (Ŵ∪ W')| ≥ℓ. Thus, there exists X ⊆⋂_i∈ S with |X| ≥ℓ such that T ∪ X ∈. Thus, S deserves ℓ candidates. Consequently, according the fact that max_i ∈ S u_i(W) < ℓ shows that our definition of EJR is failed by W.
§ CORE IN THE GENERAL MODEL WITH CONSTRAINTS
Our approach also allows to extend other concepts of fairness to the model with constraints. Here we explain how to extend the concept of core.
Given an election E = (C, N, ) we say that a group of voters S ⊆ N is (α, β)-cohesive, where α ≥ 0 and β: S →ℝ, if for each feasible set T ∈ there exists X ⊆ C with |X| = α such that u_i(X) ≥β(i) for each i ∈ S and that at least one of the following two conditions is satisfied:
T ∪ X ∈ or |S|/n > α/(|T| + α).
We say that a feasible outcome W ∈ of an election E = (C, N, ) is in the core if for each α ≥ 0, each β: S →ℝ, and each (α, β)-cohesive group of voters S⊆ N there exists a voter i ∈ S such that u_i(W) ≥β(i). ⌟
Our definition of the core clearly implies the definition of FJR and corresponds to the definition of the core for committee election rules.
§ COMPUTATIONAL SOCIAL CHOICE MODELS AND MATROIDS
In this section we prove that the examples of matroid feasibility constraints provided in <Ref> satisfy <ref>, and the examples of non-matroid feasibility constraints violate it.
Committee elections. Consider two feasible sets X, Y such that |X| < |Y|. It is clear that |Y| ≤ k and for each c∈ Y ∖ X we have that |X∪{c}| ≤ k, hence by the definition of , X∪{c}∈, which shows that satisfies condition <ref>.
Public decisions. Consider two feasible sets X, Y such that |X| < |Y|. Then for at least one binary issue C_r (r∈ [z]), we have that X∩ C_r = ∅ and Y∩ C_r ≠∅ (hence, |Y∩ C_r| = 1). Then after adding the candidate from Y∩ C_r to X, X is still feasible, which shows that satisfies condition <ref>.
Committee elections with disjoint attributes. Consider two feasible sets X, Y such that |X| < |Y|. If for some attribute r we have that |X∩ C_r| is below the lower quota of r and (Y∖ X)∩ C_r ≠∅, then after adding any candidate from the latter set to X, X is still feasible. Suppose now that it is not the case, i.e., (*) (Y∖ X)∩ C_r = ∅ for each r such that |X∩ C_r| is below the lower quota of r. Naturally, from the construction of , we have also that |Y∩ C_r| ≤ |X∩ C_r| for each r such that |X∩ C_r| equals the upper quota of r. Therefore, since |X| < |Y|, there needs to exist an attribute r such that |X ∩ C_r| is at least the lower quota of r and |X ∩ C_r| < |Y ∩ C_r|, where |Y ∩ C_r| is at most the upper quota of r. Consider now any candidate c∈ (Y∖ X) ∩ C_r. The set X∪{c} does not violate any upper quotas and can still be completed to a k-sized set so that all lower quotas are satisfied (because Y can be completed in such a way, X∪{c} has at most the same size as Y, and from (*) the number of seats required to satisfy all the lower quotas is no greater in X∪{c} than in Y), hence it is feasible and <ref> is satisfied.
Ranking candidates. Suppose that we need to elect a ranking among 3 candidates c_1, c_2, c_3, where the (virtual) candidate c_{i,j} indicates that candidate c_i is placed at position j. Consider the set X={c_{1,2}, c_{2,3}} and the set Y={c_{3,2}, c_{2,1}, c_{1,3}}. Then it is clear that no candidate from Y can be added to X so that X still represents a valid (partial) ranking, hence <ref> is violated.
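This failure can also be confirmed mechanically. The sketch below enumerates the feasible sets of the three-candidate ranking instance (partial rankings, encoded as sets of (candidate, position) pairs) and searches for a pair of feasible sets violating the augmentation condition; the encoding and helper names are ours.

from itertools import chain, combinations

def violates_augmentation(feasible):
    # Look for feasible X, Y with |X| < |Y| such that no c in Y \ X can be
    # added to X while staying feasible, i.e., the augmentation condition fails.
    fam = [frozenset(F) for F in feasible]
    fam_set = set(fam)
    for X in fam:
        for Y in fam:
            if len(X) < len(Y) and all(X | {c} not in fam_set for c in Y - X):
                return X, Y
    return None

# Ranking three candidates: the pair (i, j) means "candidate i is placed at position j".
pairs = [(i, j) for i in range(1, 4) for j in range(1, 4)]
def is_partial_ranking(F):
    return len({i for i, _ in F}) == len(F) and len({j for _, j in F}) == len(F)
def powerset(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
feasible = [frozenset(F) for F in powerset(pairs) if is_partial_ranking(F)]

print(violates_augmentation(feasible))  # finds a violating pair, e.g. X={(1,2),(2,3)}, Y={(3,2),(2,1),(1,3)}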
Committee elections with negative votes. Consider set X containing some k real candidates c_1, c_2, …, c_k and a (k+1)-sized set Y={c̅_1, c_2, c_3, …, c_k, c_k+1}. Now Y∖ X = {c̅_1, c_k+1}. None of them can be added to X without breaking feasibility constraints—adding c̅_1 would mean that c_1 is both elected and unelected, and adding c_k+1 would mean that more than k candidates are elected. Hence, <ref> is violated.
Judgement aggregation. Consider two variables x and y. As described in <Ref>, we introduce four candidates c_{x,T}, c_{x,F}, c_{y,T}, c_{y,F}. Now suppose that we require that the following formula holds: x ⊕ y. Consider the set X={c_{x,T}} and the set Y={c_{x,F}, c_{y,T}}. Then it is clear that no candidate from Y can be added to X so that X is still feasible, hence <ref> is violated.